| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
211548478
|
pes2o/s2orc
|
v3-fos-license
|
CONTROLLING COMMON BEAN WHITE MOULD CAUSED BY Sclerotinia sclerotiorum
Bean white mould caused by Sclerotinia sclerotiorum (Lib.) de Bary is a widespread and destructive disease under protected and open field cultivation in Egypt and worldwide. The vegetative sclerotial structures formed by this pathogen enable it to overwinter between seasons. Different strategies are used to control the disease and eradicate its causal pathogen efficiently. The present study was carried out to control the disease under Egyptian conditions. Results revealed that variability in virulence among the tested isolates was correlated with their oxalic acid production. The Large white lima cultivar was the most resistant, while Cranberry bean was the most susceptible, both in the detached leaf assay and under field conditions. Among the nine Trichoderma spp. used as biological control agents, T. asperellum inhibited S. sclerotiorum growth in vitro and controlled disease incidence in the field most efficiently. Epidemiological studies also revealed that high soil temperature inhibited sclerotial formation even in the presence of high soil moisture, while dry soil alleviated the inhibitory effect of high temperature. Sowing the selected resistant cultivars in early September at a 50 cm plant spacing, with an eight-day irrigation interval and organic compost fertilization, gave the most significant control of disease incidence and severity under field conditions.
INTRODUCTION
Sclerotinia sclerotiorum (Lib.) de Bary is considered a widespread and destructive pathogen, causing disease on more than 500 plant species with white mould symptoms characterized by a fluffy, cottony white mycelium (Sharma et al., 2015). The pathogen severely affects bean plants from December until early March, causing significant economic losses in bean fields.
After bean harvest, S. sclerotiorum survives in soil and crop debris as sclerotia, a dormant and resistant stage; sclerotia formed on infected plant tissue are typically incorporated into the soil and can remain viable for up to 10 years (Lopes et al., 2010). Sclerotia play an important role in the disease cycle, as they are the primary structures for long-term survival and produce the inoculum for further infection. Sclerotia germinate either vegetatively, for local colonization, or carpogenically to initiate the sexual cycle, which includes the production of apothecia from which ascospores are released (Bolton et al., 2005); in crops grown under low aeration and light penetration, white mould is even more aggressive.
Eight S. sclerotiorum isolates previously recovered from Upper Egypt differed in their culture morphology, sclerotial production and apothecium formation (Abdel-Razik et al., 2012).
Oxalic acid plays an important role in the pathogenesis of S. sclerotiorum infections, and the role of fungal-secreted oxalic acid in the virulence of different isolates of S. sclerotiorum has been investigated (Noyes and Hancock, 1981; Li et al., 2008; Williams et al., 2011).
High soil temperatures (40 to 50°C) and low soil moisture content reduced the survival of sclerotia of Sclerotinia minor and Sclerotinia sclerotiorum (Adams, 1987). Sclerotia of S. minor Jagger and S. sclerotiorum were no longer viable after three weeks in flooded soil at mean temperatures ranging from 30 to 33°C (Matheron and Porchas, 2005). Under meteorological conditions favorable to S. sclerotiorum development, such as high humidity and mild temperatures from 15 to 20°C, common bean may suffer losses of 30% or more, rising to 100% in rainy seasons when protective measures are not taken (Singh and Schwartz, 2010).
The low organic matter content of the newly cultivated bean areas in Egypt has motivated some farmers to add organic amendments to decrease disease incidence or carpogenic germination of Sclerotinia sclerotiorum in other crops (Asirifi et al., 1994). Liquid extracts of fermented agricultural debris enhanced colonization of sclerotia by the mycoparasitic fungi Trichoderma spp. (Huang et al., 1997). Mature agricultural composts and their water extracts could play a role in decreasing disease incidence. Different agricultural practices can cause physical, biological, and chemical variations, such as modifications in soil structure, and one or more of these changes may play a role in modifying soil microbial communities and decreasing the survival of soil-borne pathogen propagules (Napoleão et al., 2005).
The most efficient means of controlling white mould is the physiological resistance of bean cultivars. Virulence of S. sclerotiorum depends on the production of toxic compounds (chiefly oxalic acid), cell-wall degrading enzymes and growth regulatory factors during infection. Toxins can depress physiological processes in plants. Moreover, lytic enzymes promote colonization of the plant cell through the action of cell wall degrading enzymes such as cellulases, hemicellulases, endopectinases, exopectinases, glycosidases, proteases and xylanases.
Host resistance responses may take various forms in plant tissues, depending on the proximity to the region colonized by the pathogen and on the compounds produced; these compounds may even act as stimulants for the production of new enzymes (Ribera and Zuniga, 2012).
This work aimed to evaluate several measures for controlling white mould disease on common bean and to characterize the efficiency of each method under field conditions, including varietal resistance, biological control, environmental factors and agricultural practices.
Isolation, Purification and Identification of the Causal Pathogen
The causal organism was isolated from stems, branches, and pods exhibiting typical symptoms of white mould collected from different districts. Sclerotia associated with diseased samples were separated and surface sterilized by agitation in a 0.5% NaClO solution (1:9 dilution of household bleach) for three minutes. Sterilized samples were rinsed several times in sterile distilled water and dried between two sterile filter papers, then a single sclerotium from each sample was plated onto the surface of a Petri dish containing potato dextrose agar (PDA) medium and incubated at 20 ± 2°C for 5 days to obtain pure cultures. The process was repeated twice for each isolate (Hao et al., 2003). A pure culture of each isolate was obtained using the hyphal tip technique (Brown, 1924).
Pathogenicity Test and Aggressiveness of Isolates using the Detached Leaf Assay
The pathogenicity of the previously identified isolates was tested under laboratory conditions using the detached leaf technique (Kull et al., 2003). Sterilized plastic pots (20 cm in diameter) were filled with 3 kg of formalin-sterilized sandy-loam field soil. Four healthy, surface-sterilized seeds of the White kidney bean cultivar, kindly obtained from the Horticulture Department, Faculty of Agriculture, Zagazig University, Egypt, were sown in each pot. Pots were irrigated and fertilized as usual under greenhouse conditions. Healthy-looking middle leaves were collected at the flowering stage. Leaves were surface-sterilized in a solution containing 47.5% ethanol and 2.6% sodium hypochlorite for 5 sec. Sterilized leaves were placed in 15 cm Petri dishes lined with moistened sterile filter paper. The lower surface of each leaf was inoculated with a 0.5 cm diameter agar disc taken from the edge of a 7-day-old S. sclerotiorum culture. Three replicates (Petri dishes containing bean leaves) were used for each isolate and incubated at 20 ± 2°C for five days. Leaflets were visually rated three days after inoculation, and lesion diameter (percentage of affected necrotic leaflet area) was measured to determine white mould incidence and aggressiveness for each isolate (Wegulo et al., 1998). Disease severity percentage was ranked from absence to very high as follows: 0% (absence), 1-25% (weak), 26-50% (moderate), 51-75% (high), and 76-100% (very high), according to Hall and Phillips (1996).
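For illustration only, the rating scale above can be written as a simple mapping from percent necrotic leaflet area to severity class; the replicate values in the following Python sketch are hypothetical, not data from this study.

```python
def severity_rank(necrotic_area_pct: float) -> str:
    """Rank white mould severity from percent necrotic leaflet area
    using the scale of Hall and Phillips (1996)."""
    if necrotic_area_pct <= 0:
        return "absence"
    if necrotic_area_pct <= 25:
        return "weak"
    if necrotic_area_pct <= 50:
        return "moderate"
    if necrotic_area_pct <= 75:
        return "high"
    return "very high"

# Hypothetical lesion readings (%) from three replicate dishes for one isolate
replicate_pct = [62.0, 55.5, 70.0]
mean_pct = sum(replicate_pct) / len(replicate_pct)
print(mean_pct, severity_rank(mean_pct))  # 62.5 high
```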
Detection of oxalic acid by visual degree of color change
Oxalic acid production was visually assessed using PDA medium amended with bromophenol blue (Bb) in plates, according to Steadman et al. (1994). A change of color from purple to yellow indicated oxalic acid production by the fungus. Oxalic acid secretion by an isolate was rated on a scale from no production (-) to maximum production (+++) based on the visual degree of color change observed on PDA+Bb plates.
Oxalic acid quantitation
Oxalic acid production was quantified by transferring one agar growth disc (8 mm diameter) of each isolate to 60 ml Erlenmeyer flasks containing 15 ml of potato dextrose broth (PDB) medium adjusted to pH 6. Four flasks per isolate were incubated for 5 days at 25 ± 2°C. Cultures were vacuum-filtered and oxalic acid was determined in the supernatant of each isolate by the catalytic kinetic spectrophotometric method, as described elsewhere (Xu and Zhang, 2000). Oxalic acid concentration was determined by comparison with a standard curve and expressed as mg oxalic acid per liter of PDB medium.
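As an illustrative sketch of the standard-curve step only (the calibration readings and sample signal below are hypothetical; the underlying chemistry is that of the catalytic kinetic method of Xu and Zhang, 2000), the conversion of an instrument reading to mg oxalic acid per liter can be expressed as:

```python
import numpy as np

# Hypothetical calibration standards (mg oxalic acid / L) and instrument readings
std_conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
std_signal = np.array([0.02, 0.15, 0.29, 0.44, 0.58, 0.73])

# Fit a linear standard curve: signal = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_signal, 1)

def oxalic_acid_mg_per_l(sample_signal: float) -> float:
    """Interpolate an unknown sample against the standard curve."""
    return (sample_signal - intercept) / slope

print(round(oxalic_acid_mg_per_l(0.47), 2))  # concentration of a hypothetical sample
```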
Evaluation of Pathogenesis of S. sclerotiorum on Different Cultivars by Detached Leaf Assay
Seeds of eight common bean cultivars (Giza 6, Bronco, Paulista, Pinto, White kidney bean, Cranberry bean, Black bean and Large white lima bean), kindly obtained from the Horticulture Research Institute, Agricultural Research Center, Cairo, were grown in the greenhouse for 21 days. The detached leaf assay described above for the pathogenicity test was used here, and lesion diameter (cm) was measured after inoculation with the New Nubaria isolate of S. sclerotiorum.

For the in vitro antagonism tests, 7 mm discs collected from the growing edge of the twelve pathogenic fungal isolates and the nine potential biocontrol agents were placed on opposite sides of sterile Petri dishes (9 cm diameter) previously poured with potato dextrose agar (PDA) medium. PDA plates inoculated with S. sclerotiorum only served as controls. Three plates were prepared for each pathogen/potential biocontrol agent combination. The dishes were incubated in the dark at 20 ± 2°C for 5 days. After 5 days, the parameters shown in Fig. 1 were measured: s1 (distance between the pathogen disc sowing point and the furthest point of the colony) and s2 (distance between the pathogen plug sowing point and the edge of the colony where the S. sclerotiorum and Trichoderma mycelia came into contact). From these, the percentage of inhibition in the direct confrontation assay (ID) was calculated.
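The exact expression for ID is not reproduced in the text above; assuming the dual-culture inhibition formula commonly used for such confrontation assays, ID = (s1 − s2)/s1 × 100, a minimal sketch (measurements hypothetical) is:

```python
def inhibition_percent(s1_cm: float, s2_cm: float) -> float:
    """Percentage of inhibition in the direct confrontation assay (ID),
    where s1 is the pathogen's radial growth toward the far edge of the
    plate and s2 its growth toward the Trichoderma colony.
    Assumes the usual dual-culture expression ID = (s1 - s2) / s1 * 100."""
    return (s1_cm - s2_cm) / s1_cm * 100.0

# Hypothetical measurements from one plate after 5 days at 20 +/- 2 C
print(round(inhibition_percent(4.2, 1.1), 1))  # 73.8 % inhibition
```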
Relative humidity
Five levels of relative humidity (32.5, 74, 85, 92.5 and 100%) were prepared in Petri dishes according to Solomon (1951) and used to study their effect on mycelial growth and sclerotial production under laboratory conditions as previously mentioned.
Soil temperature and moisture levels
Sclerotia required for the experiment were prepared according to Matheron and Porchas (2005). Sandy-loam soil was collected from dry fields to a depth of 10 cm, air dried and ground into fine particles. Two moisture levels (dry and wet soil) and five soil temperatures (0, 10, 20, 30 and 40°C) were prepared, also according to Matheron and Porchas (2005). Jars containing soil and sclerotia were placed in a growth chamber (150 x 150 x 80 cm) to control the temperature and moisture. Viability was estimated by the capability of the sclerotia to germinate on the growth medium and subsequently produce daughter sclerotia. The experiment was carried out in a split-split plot design.
Organic fertilizers
Some organic fertilizers (animal manure, poultry manure and compost) were obtained from the commercial market and tested for their effect on S. sclerotiorum under laboratory conditions. Sclerotia of S. sclerotiorum were produced as mentioned before.
The moisture content of the fertilizers was adjusted to 20% (v/v). Cylindrical plastic containers (10 x 7 cm) were filled with the organic fertilizers. Twenty sclerotia of the New Nubaria isolate were placed in each 3 x 6 cm nylon bag, and the bags were positioned in the middle of the plastic containers during filling with organic fertilizer. Field soil free of organic fertilizer, prepared in the same manner, served as control. Twelve replicates were used for each treatment. All treatments were incubated at 20 ± 2°C for 4 weeks. Three replicates of each treatment were sampled weekly and the sclerotia removed. Sclerotial viability was estimated by the capability of the sclerotia to germinate on growth medium plates as mentioned before.
Field Experiments
All field experiments were carried out in a randomized complete block design with three replicates in naturally infested fields with a previous record of white mould infection (New Salhia district, Sharqia Governorate). The White kidney bean cultivar was used in these experiments. Agricultural practices were followed as usual except for the practice under study. Plants were evaluated for disease incidence and severity at the flowering stage and at complete pod formation, and the percentages of white mould infected stems and pods were calculated.
An area of 1.0 m² per plot was assessed separately at 90 days after emergence (DAE) for white mould evaluation. Disease incidence was calculated as the percentage of plants with symptoms. In addition, stems, branches, and pods of bean plants were rated for severity on a scale of 0, 1, 2, 3, and 4, representing 0, 1-25%, 26-50%, 51-75% and 76-100% affected tissue, respectively, according to Hall and Phillips (1996).
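As an illustrative note only (the plot ratings below are hypothetical, and the text does not spell out the index formula), disease incidence and a McKinney-type severity index derived from the 0-4 ratings could be computed as:

```python
def disease_incidence(symptomatic: int, total: int) -> float:
    """Disease incidence = percentage of plants showing white mould symptoms."""
    return symptomatic / total * 100.0

def severity_index(ratings, max_rating: int = 4) -> float:
    """McKinney-type severity index: sum of ratings / (max rating * n plants) * 100.
    The exact formula used in the study is not given, so this is an assumption."""
    return sum(ratings) / (max_rating * len(ratings)) * 100.0

plot_ratings = [0, 1, 1, 2, 3, 0, 4, 2]  # hypothetical ratings from a 1 m2 sample
print(disease_incidence(sum(r > 0 for r in plot_ratings), len(plot_ratings)))  # 75.0
print(round(severity_index(plot_ratings), 2))  # 40.62
```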
Reaction of some bean cultivars
The previously mentioned 8 bean cultivars were tested for their susceptibility to white mould in a field naturally infested with S. sclerotiorum in two successive growing seasons, 2017 and 2018, in New Salhia, Sharqia Governorate. Cultivars were sown in separate plots (plot area = 10.5 m²), each consisting of 7 rows. Seeds were sown in hills 30.0 cm apart, with 2-3 seeds per hill.
Agricultural practices
Organic fertilization, irrigation intervals, sowing dates and sowing distances were tested for their effect on bean white mould disease incidence under naturally infested field conditions in two successive growing seasons, 2017 and 2018, in New Salhia, Sharqia Governorate.
Sowing dates
Four sowing dates (10th September, 20th September, 1st October and 10th October) were tested for their effect on bean white mould disease incidence and severity.
Irrigation interval periods
Four irrigation intervals (5, 6, 7 and 8 days) were studied for their effect on bean white mould disease incidence and severity. Care was taken to prevent irrigation water from reaching non-target plots.
Planting distances
Seeds were sown in plots (3.5 m x 5 m) comprising 5 ridges, 60 cm apart. Seeds were hand sown in hills on one side of the ridge. Four planting distances (20, 30, 40 and 50 cm) were evaluated for their effect on bean white mould disease incidence and severity.
Organic fertilization
Soil in New Salhia, Sharqia Governorate was prepared as usual, and organic fertilizers (animal manure, poultry manure and compost) were obtained from Elsalhia Company for Industry and Trading. Organic fertilizers were added at a rate of 20 m³/faddan (0.05 m³/plot).
In vivo biological control experiments
The potential antimicrobial ability of Trichoderma spp., one of the best-known groups of bioagents, was evaluated against bean white mould disease. In vitro screening of nine Trichoderma isolates indicated that T. asperellum was the most efficient biocontrol agent against the white mould pathogen, S. sclerotiorum. Thus, further evaluation of the biocontrol capability of T. asperellum against white mould disease was carried out under greenhouse conditions during two successive seasons.
The pathogen, S. sclerotiorum, was propagated on potato dextrose agar plates for one week, and the freshly formed sclerotia were harvested and used for inoculation. Inoculum of T. asperellum was propagated in gliotoxin fermentation broth (Anitha and Murugesan, 2005). The inoculum concentration in the collected filtrate was adjusted to 10⁴ cfu/ml with the aid of a haemocytometer and stored at 4°C until further use. Five White kidney bean seeds were sown in 30-cm pots containing five kg of light clay soil. Plants were irrigated and fertilized as needed. Inoculation was carried out on 45-day-old plants by the stem slashing technique; three mature sclerotia were then placed around the crown area and covered with wet cotton plugs. Following inoculation, plants were moved to a greenhouse covered with black mesh shade cloth to reduce sunlight intensity and create an environment conducive to white mould disease development and progress (Kraft and Pfleger, 2001). Symptom development was monitored two weeks after inoculation as previously described in the pathogenicity test. The biocontrol treatment was applied as a spray of T. asperellum inoculum once every two weeks after symptom development. Five additional pots were sprayed only with water to serve as controls. Data were recorded after each biocontrol application by calculating the percentages of disease severity, disease incidence, infected stems and infected pods. Percentages of infected stems and infected pods were calculated using the same formulae.
Statistical Analysis
The obtained data were statistically analyzed by analysis of variance (ANOVA) using MSTAT-C (1991). The least significant difference (LSD) test and Duncan's Multiple Range Test (Duncan, 1955) were used to determine the significance of differences among treatment means (Gomez and Gomez, 1984). All analyses were performed at the P = 5% level.
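For readers reproducing the analysis outside MSTAT-C, a minimal Python sketch of a one-way ANOVA with Fisher's LSD at P = 0.05 is given below; the treatment names and replicate values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical disease-severity replicates (%) for three treatments
groups = {
    "control": np.array([53.3, 50.0, 56.7]),
    "compost": np.array([27.8, 30.0, 25.0]),
    "T. asperellum": np.array([10.0, 13.3, 11.7]),
}

f_stat, p_value = stats.f_oneway(*groups.values())

# Fisher's LSD at P = 0.05: t(df_error) * sqrt(2 * MSE / n)
n = 3                                   # replicates per treatment
k = len(groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
df_error = n * k - k
mse = ss_error / df_error
lsd = stats.t.ppf(0.975, df_error) * np.sqrt(2 * mse / n)

print(f"F = {f_stat:.2f}, p = {p_value:.4f}, LSD(0.05) = {lsd:.2f}")
```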
Isolation, Purification and Identification of the Causal Organism
The twelve pure isolates were identified as S. sclerotiorum based on their morphological characters.
Pathogenicity Test and Aggressiveness of Isolates using the Detached Leaf Technique
All twelve tested isolates were pathogenic to common bean to different degrees, as observed in vitro (detached leaf technique). Moreover, virulence assays indicated that the tested isolates had variable degrees of virulence on beans (Table 1). The isolates fell into three main categories of response on the host plant: highly, moderately and weakly pathogenic. The isolates designated NEWN, HOSH, ELKA, and NEWS were highly pathogenic according to their necrotic lesion diameters, BELB and ELKO were weakly pathogenic, while the remaining isolates were moderately pathogenic. The variable virulence of the tested isolates correlated with their growth rate, sclerotial abundance and oxalic acid secretion, as discussed in the following experiments.
Oxalic Acid Production by S. sclerotiorum Isolates
Further discrimination among the collected isolates was based on their extent of oxalic acid (OA) production and their degree of virulence on detached bean leaves. Qualitative analysis of oxalic acid production on bromophenol blue medium sorted the isolates into three main categories, where +++ indicated the highest level of OA production (Table 1). The isolates NEWS, ELKA, HOSH and NEWN displayed the highest OA production, BELB and ELKO the lowest, and the remaining isolates were moderate. Quantitative analysis of OA for the same isolates gave parallel results: isolates in the highest category (+++) produced the highest amounts of oxalic acid in culture medium, ranging from 6.17 to 8.8 mg/L. It is worth mentioning that there was a correlation between high oxalic acid production and the pathogenic capability of the tested isolates; isolates producing high concentrations of oxalic acid ranked as highly pathogenic. This result demonstrates an intimate correlation between OA production and fungal virulence, and it is therefore plausible that the pathogen relies on OA for invasiveness and colonization of its host tissue. The study thus concluded that the highly virulent isolates of S. sclerotiorum grow faster and produce higher amounts of oxalic acid and sclerotia on growth medium. The production of oxalic acid has been considered a pathogenicity determinant in S. sclerotiorum, as previously reported (Li et al., 2008).
Evaluation of S. sclerotiorum Pathogenesis on Different Cultivars by Detached Leaf Technique
Bean cultivars react differently to invading pathogens depending on the resistance genes they carry. The most virulent isolate of the pathogen (NEWN) was selected and inoculated onto detached leaves of eight different bean cultivars, and lesion diameter was recorded. Cranberry Bean was the most susceptible cultivar, followed by Giza 6, Pinto Bean and Paulista, while Large white lima and Bronco were the most resistant, exhibiting the smallest lesions (Fig. 2).
In Vitro Biological Control Experiments
Results revealed that the highest percentage of mycelial growth reduction was achieved by T. asperellum, followed by T. album and T. hamatum, while T. harzianum, T. koningii and T. viride displayed moderate inhibition (Table 2). Three uncharacterized Trichoderma species, termed Y, A, and B, also gave significant reductions of the pathogen's mycelial growth in vitro. These findings accord with those of Sumida et al. (2018), who showed that T. asperellum has valuable antagonism in dual-plate confrontation assays against nine different isolates of S. sclerotiorum.
Species of Trichoderma are efficient bioagents owing to direct antagonism via secretion of lytic enzymes, such as chitinases, β-1,3-glucanases and proteases, and of secondary metabolites (Geraldine et al., 2013). One of the main mechanisms of Trichoderma species is mycoparasitism through the production of cell wall-degrading enzymes, such as chitinases, proteases and β-1,3-glucanases, which destroy the cell wall of the pathogen and allow the biological control agent to penetrate it (Geraldine et al., 2013). Abdullah et al. (2008) showed that T. harzianum is able to form 'hook-like' and 'appressorium-like' structures around the hyphae of S. sclerotiorum and then kills the pathogen's mycelium by penetrating and colonizing the hyphae (Elad et al., 1982; Viterbo et al., 2002). Consequently, T. harzianum can be considered an antagonist and a mycoparasite of S. sclerotiorum, as it can inhibit both the myceliogenic and carpogenic germination of sclerotia (Abdullah et al., 2008).
Epidemiological Studies
It was noticed that the higher the relative humidity, the faster the fungal growth and sclerotial formation (Table 3). The tested fungus therefore appears to prefer highly humid conditions for optimal growth and reproduction. No significant difference was observed among the tested isolates exposed to different levels of relative humidity, indicating that preference for a particular ambient humidity cannot be used to discriminate among isolates. Air relative humidity is a determining factor for white mould disease caused by S. sclerotiorum, as relative humidity above 80% is one of the conditions required for S. sclerotiorum development and disease in soybean and other crops (Hannusch and Boland, 1996).
Soil moisture and temperature, on the other hand, had a more significant impact on sclerotial germination (Table 4). High soil temperature was almost completely inhibitory to sclerotial germination, even in the presence of soil moisture; however, the dry soil treatment alleviated the inhibitory effect of high temperature (40°C) and allowed the examined sclerotia to germinate normally compared with the other control treatments. Longer incubation periods further reduced germination, which was completely abolished in wet soil at high temperature. This observation is similar to the results of Matheron and Porchas (2005), who found that the proportion of S. sclerotiorum sclerotia that germinated in wet soil tended to decline as soil temperature increased from 15 to 40°C, with no germination observed after 1 and 2 weeks, respectively, at 40°C.

Plant fertilizers derived from organic matter, including animal manure, chicken manure and compost, are useful for plant growth and health. Application of such biofertilizers enhances plant growth parameters and suppresses invading plant pathogens; this effect can occur through various mechanisms, including biocontrol agents and inhibitory metabolites secreted by soil PGPRs (Bonanomi et al., 2007). In the current trial, all organic fertilizers tested showed an inhibitory effect against S. sclerotiorum to a variable extent (Fig. 3). Compost, in particular, was the most suppressive of sclerotial germination in vitro, as the germination percentage dropped to 36.7%, while animal and chicken manure each gave an average of 45% at four weeks post inoculation, compared with 53.3% for the untreated control. The germination percentage of sclerotia declined with time.
Reaction of some bean cultivars to white mould under field conditions
Varietal reactions to the tested pathogen in the field experiment paralleled those obtained from the detached leaf assay (Table 5). Cranberry Bean, Giza 6, Pinto Bean and Paulista were the most susceptible cultivars, recording the highest disease severity percentages, which ranged between 46.67 and 73.33%. Susceptibility of the bean cultivars to S. sclerotiorum infection was similar in the two successive seasons of 2017 and 2018, except that infection percentages declined in the second season. Most cultivars were completely resistant to seed infection except Giza 6 and Cranberry Bean.

Results in Table 6 indicate that the earlier the sowing date, the better the disease escape. Plants sown on 10th September exhibited no infection, while those sown a month later showed higher levels of disease severity and lower disease-free seed yield. Selection of a suitable sowing date is therefore among the agricultural practices available to manage disease incidence. Workneh and Yang (2000) reported that annual variation in air temperature was more relevant to the incidence of white mould on soybean than patterns of atmospheric humidity and precipitation. It is worth mentioning that the factors limiting the incidence of white mould on soybean differ from site to site and year to year as a result of variation in the air temperature and relative humidity regime at a given site. This supports our finding that early sowing dates provide air temperatures unfavorable to pathogen growth and prevent disease initiation.

Irrigation frequency also affected disease levels. An eight-day irrigation interval significantly decreased disease severity compared with the shorter intervals: the lower water input dropped disease severity to 14%, whereas the five-day interval showed higher disease severity (40.74%). The higher disease severity with the short irrigation interval (5 days) may be due to the increase in soil moisture content necessary for sclerotial germination and hence increased disease incidence and severity. Previous research showed a great impact of irrigation frequency on the severity of white mould disease; thus, an irrigation strategy that achieves field capacity is recommended (Paula Júnior et al., 2006), especially when a cultivar with a dense canopy is used (Napoleão et al., 2005).
When testing sowing distances, results revealed that plants sown 50 cm apart were healthier, with a disease severity of 13.33%, while plants grown at closer spacing exhibited a higher disease severity (33.33%). Vieira et al. (2005) showed that both incidence and severity of white mould were reduced when six rather than 12 plants per meter were sown. High plant density (12 plants/m) decreases air circulation and increases moisture in the canopy, contributing to higher disease incidence and more severe white mould than low plant density (Tu, 1997). Therefore, the absence of a yield penalty with a lower plant population in fields highly infested with S. sclerotiorum is an important result and also implies a lower investment in seed.
The investigated compost gave the most effective results in terms of disease inhibition. Plants fertilized with compost showed 27.78% disease severity, while animal and chicken manure were conducive to disease progression, resulting in higher disease severity levels. The inhibitory effect of organic fertilizers on bean white mould disease was previously discussed and confirmed by El-
In vivo biological control experiments
T. asperellum gave an efficient level of disease reduction after the third spray, which significantly decreased disease severity to below 3.70% and completely protected the seeds from infection (Table 7). Repeated application of the bioagent inoculum significantly suppressed disease severity under field conditions. Recent results suggest that the performance of Trichoderma spp. is influenced much more by the time it remains in contact with the pathogen than by the pressure of the disease itself (McLean et al., 2012).
Pinto da Silva et al. (2019) assayed the biocontrol potential of 12 Trichoderma spp. and found that all of them were able to reduce the severity of white mould disease. According to Zhang et al. (2016), white mould biocontrol is facilitated by the synthesis and exudation of Trichoderma spp. compounds, which provide growth-promoting features, enhance plant growth and increase root mass. Yedidia et al. (2001) suggested that the growth promotion caused by T. harzianum in the soil rhizosphere is due to an increase in root area, allowing the plant to exploit a larger volume of soil and consequently more nutrient sources.
|
2020-02-13T09:22:12.672Z
|
2020-01-01T00:00:00.000
|
{
"year": 2020,
"sha1": "dbdf1f061421b829f5b963599c2662ef3e822003",
"oa_license": null,
"oa_url": "https://zjar.journals.ekb.eg/article_70125_e443d0f15878ca74e30d432bd262c463.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0ab5dcd2a524279a28ba057f51831c12f56f7037",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
}
|
202562372
|
pes2o/s2orc
|
v3-fos-license
|
Characteristics of sarcopenia after distal gastrectomy in elderly patients
Presence of preoperative sarcopenia is a risk factor for postoperative complications. However, there are few reports on the presence of sarcopenia and its characteristics following gastrectomy. Sarcopenia is closely related to quality of life in elderly people. To date, the main purpose of follow-up after gastrectomy is surveillance for early detection of recurrence and secondary cancer. However, henceforth, quality of life in elderly gastric cancer patients after gastrectomy must also be evaluated. The present study aimed to investigate sarcopenia during a 1-year postoperative course in elderly gastric cancer patients and examine their characteristics. The subjects were 50 patients aged ≥70 years who underwent laparoscopy-assisted distal gastrectomy for gastric cancer and who experienced no recurrence 1 year postoperatively. Height, weight, serum albumin levels, food intake amount, grip strength, gait speed, visceral fat area, and appendicular skeletal muscle mass index were measured preoperatively and 6 months and 1 year postoperatively. Sarcopenia, obesity, and visceral obesity were diagnosed. Compared with preoperatively, indicators other than height decreased 6 months postoperatively. Compared with 6 months postoperatively, body weight, amount of food intake, and visceral fat area increased by 1 year postoperatively, unlike appendicular skeletal muscle mass index. The frequency of sarcopenia increased 6 months postoperatively compared with preoperatively; this frequency remained almost unchanged 1 year postoperatively compared with 6 months postoperatively. Further, the frequency of visceral obesity increased 1 year postoperatively compared with 6 months postoperatively. Weight increased after > 6 months postoperatively; however, most of the weight increase was in terms of fat and not muscle. We emphasize the importance of considering postoperative sarcopenia and visceral obesity. In particular, sarcopenia and visceral obesity should be carefully monitored after increases in body mass index and food consumption.
Introduction
In recent years, sarcopenia has been defined as the condition in which muscle mass, muscle strength, and physical ability decline [1,2]. Sarcopenia is closely related to quality of life in elderly people. Following the onset of sarcopenia, fracture, cardiovascular morbidity, and mortality rates reportedly increase [3][4][5]. In the field of surgery, the relevance of sarcopenia during the perioperative period has been evaluated. Studies have reported sarcopenia as a risk factor for operative complications and poor prognosis [6][7][8]. In cases of gastrectomy performed for gastric cancer, the presence of preoperative sarcopenia has been reported as a risk factor for postoperative complications [9,10]. However, only a few studies have reported the presence of sarcopenia and its long-term characteristics following gastrectomy for gastric cancer. When gastrectomy is performed in gastric cancer patients, food intake and body weight may decrease in the short term. After the first 6 months postoperatively, weight loss stops, body weight begins to increase, and it gradually stabilizes after 1 year postoperatively [11]. During this postoperative course, the degree of change in muscle mass, fat mass, and physical ability remains unknown.
The average life expectancy is increasing, and the number of older gastric cancer patients is increasing. Approximately half of the patients who undergo gastric cancer surgery have early cancer, and the 5-year survival rate after early gastric cancer surgery is > 95% in Japan [12]. Therefore, the number of elderly people who live for a long time after gastric cancer surgery is increasing. To date, the main purpose of follow-up after gastrectomy for gastric cancer has been surveillance for early detection of recurrence and secondary cancer. However, we also need to pay attention to quality of life in elderly gastric cancer patients after gastrectomy. Therefore, the present study aimed to investigate sarcopenia during a 1-year postoperative course in elderly gastric cancer patients and examine their characteristics.
Subjects
The subjects were 50 patients aged ≥70 years who underwent laparoscopy-assisted distal gastrectomy for the radical treatment of gastric cancer between November 2009 and October 2015 at National Hospital Organization Hamada Medical Center. Surgery was performed in accordance with the Japanese gastric cancer treatment guidelines 2014 (ver. 4) [13]. Lymphadenectomy was performed to D1 or D2. The pathological stage was I for all the patients. Billroth I reconstruction (an operation in which the pylorus is removed and the proximal stomach is directly anastomosed to the duodenum) was performed, and an automatic anastomotic device was used to create the anastomoses in all patients. The present study included only community-dwelling patients who could eat orally and perform activities of daily living at 1 year postoperatively. Patients who underwent postoperative adjuvant chemotherapy or in whom recurrence or edema was noted were excluded. Height, weight, amount of dietary intake, grip strength, gait speed, body mass index (BMI), and corrected appendicular skeletal muscle mass index were measured using a bioelectrical impedance analyzer, and visceral fat area was measured using computed tomography images. Measurements were performed preoperatively, 6 months postoperatively, and 1 year postoperatively.
Measurement methods
The patients had dinner on the night before the examination and then fasted; they visited the hospital on the morning of the examination under fasting conditions. After arriving at the hospital, they could defecate and urinate and then rested for 30 minutes; measurements were performed within 1 hour thereafter. The BMI was calculated as weight/height². The InBody720 (Biospace Co. Ltd., Seoul, Korea) bioelectrical impedance analyzer was used to measure skeletal muscle mass. The patients were positioned in a standing posture, allowing close contact of a total of eight electrodes with the hands and feet. A weak, noninvasive alternating current was passed through these electrodes, the impedance of each limb and of the trunk was measured, and the appendicular muscle mass of each limb was estimated from this impedance [14,15].
Based on skeletal muscle mass by site, the appendicular skeletal muscle mass index (SMI) was calculated as appendicular skeletal muscle mass/height². A decrease in muscle mass was defined as an SMI of ≤6.57 kg/m² in men and ≤4.94 kg/m² in women [16].
Slim Vision Ver3 (Cybernet Systems Co. Ltd., Tokyo, Japan), a software that analyzes abdominal computed tomography images obtained at the navel level, was used to measure the visceral fat area. Patients with a visceral fat area of ≥100 cm² were defined as having visceral obesity [17]. Gait speed was measured using a 10-m gait test: patients were asked to walk at a normal gait speed for 14 m, and the time required to walk 10 m after the first 2 m from the starting point was measured. Measurements were performed twice, and the mean of the two measurements was used. The cut-off value of the 10-m gait test was 0.8 m/s [2]. The dominant hand was used to measure grip strength; measurements were performed twice while patients were in the standing position, and the mean of the two measurements was used. The cut-off values of grip strength were 26 kg for men and 18 kg for women [2].
The amount of food intake was determined via an interview with a dietitian. During outpatient visits, the patients were comprehensively asked what they ingested in the past 24 h. The energy (kcal) of each food ingested was evaluated using Standard Tables of Food Composition in Japan, 2015 (Seventh Revised version) [18]. Subsequently, the total energy (kcal) of food intake in the past 24 h was evaluated, which is expressed as percentage values when the preoperative levels were adjusted to 100%.
Diagnostic criteria for sarcopenia, obesity, and visceral obesity
Sarcopenia was diagnosed when the 10-m gait speed or grip strength and the SMI were below the cut-off values [2,19]. The cut-off value for SMI was ≤6.57 kg/m² in men and ≤4.94 kg/m² in women [16]. Obesity was diagnosed when BMI was > 25 kg/m² [19], whereas visceral obesity was diagnosed when the abdominal visceral fat area was > 100 cm² [17].
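A minimal sketch of this diagnostic logic, using the cut-offs stated above (the patient values in the example are hypothetical), could look like the following:

```python
def smi(appendicular_muscle_kg: float, height_m: float) -> float:
    """Appendicular skeletal muscle mass index = ASM / height^2 (kg/m^2)."""
    return appendicular_muscle_kg / height_m ** 2

def diagnose(sex, asm_kg, height_m, weight_kg, grip_kg, gait_m_s, visceral_fat_cm2):
    """Apply the study cut-offs: low SMI (<=6.57 men / <=4.94 women) plus low grip
    (<26 kg men / <18 kg women) or slow gait (<=0.8 m/s) -> sarcopenia;
    BMI > 25 kg/m^2 -> obesity; visceral fat area > 100 cm^2 -> visceral obesity."""
    smi_cut = 6.57 if sex == "M" else 4.94
    grip_cut = 26.0 if sex == "M" else 18.0
    low_muscle = smi(asm_kg, height_m) <= smi_cut
    low_function = grip_kg < grip_cut or gait_m_s <= 0.8
    bmi = weight_kg / height_m ** 2
    return {
        "sarcopenia": low_muscle and low_function,
        "obesity": bmi > 25.0,
        "visceral_obesity": visceral_fat_cm2 > 100.0,
    }

# Hypothetical 1-year postoperative measurements for one male patient
print(diagnose("M", asm_kg=17.5, height_m=1.65, weight_kg=62.0,
               grip_kg=24.0, gait_m_s=0.75, visceral_fat_cm2=118.0))
```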
The present study was conducted in accordance with the Declaration of Helsinki and was approved by the Institutional Review Board of National Hospital Organization Hamada Medical Center. The content of the study was explained orally and in writing to all the patients. Written informed consent was obtained from all patients.
Statistical processing
Measured values are expressed as mean ± standard deviation (mean ± SD). Bartlett's test was used to test the homogeneity of variance among the three groups for continuous variables. Two-factor analysis of variance was used to compare multiple groups for continuous variables, with Tukey's method for post hoc testing. Cochran's Q test was used to compare multiple groups for nominal variables, with the Steel-Dwass method for post hoc testing. Pearson's correlation coefficient was used to test correlations. Statistical significance was defined as p < 0.05. EZR ver. 1.34 (Saitama Medical Center, Jichi Medical University) was used to perform all statistical analyses [20].
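As an illustration of the Cochran's Q step for a binary outcome such as sarcopenia at the three time points, the statistic can be computed directly from its definition; the 10-patient data matrix below is hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def cochrans_q(x: np.ndarray):
    """Cochran's Q for k related binary outcomes.
    Rows are patients, columns are time points coded 0/1
    (e.g. sarcopenia absent/present preoperatively, at 6 months, at 1 year)."""
    k = x.shape[1]
    col = x.sum(axis=0)   # number of positives at each time point
    row = x.sum(axis=1)   # number of positive time points per patient
    total = x.sum()
    q = (k - 1) * (k * (col ** 2).sum() - total ** 2) / (k * total - (row ** 2).sum())
    return q, chi2.sf(q, k - 1)

# Hypothetical status (1 = sarcopenia) for 10 patients at the three time points
data = np.array([[0, 1, 1], [0, 0, 0], [0, 1, 1], [0, 0, 1], [0, 0, 0],
                 [1, 1, 1], [0, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 1]])
print(cochrans_q(data))  # Q statistic and p-value (df = k - 1)
```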
Results
The subjects were 50 patients [32 men and 18 women; mean age, 77 ± 6.7 years]. The mean follow-up at the time of the 6-month postoperative examination was 6.6 ± 0.82 months, and that at the time of the 1-year postoperative examination was 1.2 ± 0.95 years. Body weight significantly decreased from the preoperative value at 6 months postoperatively and significantly increased by 1 year postoperatively, as did BMI. No significant differences were noted for grip strength or serum albumin levels between 6 months and 1 year postoperatively. Gait speed significantly decreased at 6 months postoperatively (Table 1).
The preoperative visceral fat area was significantly decreased at 6 months postoperatively and significantly increased at 1 year postoperatively. The preoperative SMI was significantly decreased at 6 months postoperatively and showed a slightly increasing trend during the 1-year postoperative period. A comparison of the SMI at 6 months and 1 year postoperatively did not reveal any significant difference (Table 1).
Dietary intake, expressed as a percentage of the preoperative level (set to 100%), significantly decreased to 78% at 6 months postoperatively and significantly increased by 1 year postoperatively (Table 1).
With regard to weight, gait speed, BMI, SMI, and visceral fat area, there were significant differences between men and women at the same measurement time (Table 1).
In men, the preoperative SMI was less than the cut-off value in 9.4% of the patients, and their SMI values remained below the cut-off at 6 months postoperatively. In 25% of the patients, the SMI was below the cut-off value at 6 months postoperatively (Fig 1).
In women, the preoperative SMI was less than the cut-off value in 11% of the patients. Similar to the men, the SMI in these patients remained below the cut-off value at 6 months postoperatively. The SMI dropped below the cut-off value in 5.5% of the patients at 6 months postoperatively (Fig 1).
In men, the gait speed was ≤0.8 m/s in 34.3% of the patients. Among these, the SMI was below the cut-off value in 72.7% of the patients at 1 year postoperatively (Fig 2).
In women, the gait speed was ≤0.8 m/s in 16.6% of the patients. The SMI in these patients was below the cut-off value at 1 year postoperatively (Fig 2).
An examination of the relationship between visceral fat area and SMI at 1 year postoperatively revealed no correlation in either men (r = 0.048, p = 0.821) or women (r = 0.376, p = 0.150; Fig 3). The visceral fat area at 1 year postoperatively was above the cut-off value in 34.4% of the men; among these, 45.5% had an SMI below the cut-off value (Fig 3). In women, the visceral fat area at 1 year postoperatively was above the cut-off value of 100 cm² in 16.7% of the patients; among these, 33.3% had an SMI below the cut-off value (Fig 3).
Preoperatively, 6% of the patients were diagnosed with sarcopenia and this increased to 20% by 6 months postoperatively. Further, 2% of the patients were diagnosed with obesity at 6 months postoperatively and this increased to 8% by 1 year postoperatively; however, this increase was not significant. At 6 months postoperatively, 12% of the patients were diagnosed with visceral obesity that increased to 28% by 1 year postoperatively, and 2% of the patients were diagnosed with sarcopenia and visceral obesity at 6 months postoperatively that increased to 12% by 1 year postoperatively ( Table 2).
Discussion
Research on sarcopenia has recently progressed, but its definition and diagnostic criteria have not yet been standardized. In the present study, the Asian Working Group for Sarcopenia (AWGS) diagnostic criteria were used as a reference [2]. Walking speed and grip strength were measured as defined by the AWGS because AWGS reference values for Japanese individuals have been published. However, the method for measuring muscle mass has not yet been standardized, and there are no standard values for Japanese people. Moreover, the mean and median values of muscle mass vary with race, sex, age, etc.; it is therefore difficult to use reference values from Western and other Asian countries. In the present study, bioelectrical impedance analysis was used to measure muscle mass. When a bioelectrical impedance analyzer is used, measured values for the same person differ depending on the instrument and measurement method used.
Regarding the reference values used for the diagnosis of sarcopenia, the European Working Group on Sarcopenia in Older People recommends obtaining a reference value from a normal group (healthy young adults) rather than other elderly groups and setting the cut-off value as (the mean value − 2 standard deviations) [1]. Therefore, in the present study, the cut-off value for muscle mass used for diagnosing sarcopenia was determined using the mean value and standard deviation of muscle mass for healthy subjects aged 18-40 years who had undergone measurements at Hamada Medical Center [16].
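For illustration, such a cut-off can be derived directly from a reference group; the reference SMI values in this sketch are hypothetical, not the measurements actually used at Hamada Medical Center.

```python
import numpy as np

# Hypothetical SMI values (kg/m^2) from a healthy 18-40-year-old reference group
reference_smi = np.array([7.9, 8.3, 7.5, 8.8, 7.2, 8.0, 7.7, 8.5, 7.4, 8.1])

# EWGSOP-style cut-off: reference mean minus two standard deviations
cutoff = reference_smi.mean() - 2 * reference_smi.std(ddof=1)
print(round(float(cutoff), 2))
```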
Sarcopenic obesity is a state in which sarcopenia and obesity coexist and has recently been gaining attention in relation to metabolic syndrome [21]. It is a condition in which the risk of developing lifestyle diseases such as diabetes, hyperlipidemia, and hypertension is high as body function decreases [22]. However, unlike those for sarcopenia, the diagnostic criteria for sarcopenic obesity are not unified [22]. The characteristics of sarcopenic obesity are increasing body fat and decreasing skeletal muscle mass; to diagnose it, body fat and skeletal muscle mass must be considered [19]. As shown in Fig 3, there was no correlation between visceral fat area and SMI. Therefore, it is recommended that body fat and skeletal muscle mass be evaluated separately.
Although obesity is often judged based on BMI, BMI does not reveal an increase in body fat or a decrease in skeletal muscle mass [19,23]. Further, when obesity is determined from BMI, BMI is considered to have only modest validity as a correlate of cardiovascular risk factors [24]. When visceral fat accumulation is determined from waist circumference or the waist/hip ratio, abdominal adiposity is a strong correlate of the risk factors for cardiovascular disease and metabolic syndrome [24,25]. Further, visceral fat accumulation is associated with risk factors for atherosclerosis in nonobese Japanese people [26]. In this study, we investigated the frequency of sarcopenia, obesity, and visceral obesity.
Unlike the method for measuring muscle mass, the method for measuring visceral fat accumulation has not yet been standardized, and there are no standard values for Japanese people. Moreover, visceral fat has different mean and median values depending on race, sex, age, etc. The Guidelines for the Management of Obesity Disease 2016 recommend the use of computed tomography for measurement of the visceral fat area at the umbilical level [17]. In the present study, patients with an abdominal visceral fat area of ≥100 cm² were defined as having visceral obesity; this is the cut-off value of visceral fat accumulation for diagnosing metabolic syndrome in Japan [17].
After distal gastrectomy, stomach volume and food consumption decrease, resulting in weight loss during the perioperative period [10], which was also observed in the present study. When oral intake does not improve, energy intake is insufficient and an increase in muscle mass cannot be expected. Studies have examined the effectiveness of enteral nutrition as a countermeasure [27][28][29][30]; however, perioperative weight loss and muscle loss are still reported. To the best of our knowledge, no study has reported that increasing only protein intake stops the decrease in muscle mass after gastrectomy. Increasing protein intake under limited oral intake means that the absolute and relative intakes of lipids and carbohydrates are reduced; the influence of such a state on the body is largely unknown.
In the present study, compared with preoperative values, the SMI decreased 6 months postoperatively. If the muscle mass decrease until 6 months postoperatively can be inhibited, secondary sarcopenia can be prevented. In recent years, the concept of enhanced recovery after surgery has spread to patients undergoing gastrectomy, and nutritional therapy and rehabilitation from the early postoperative period are recommended [17]. No differences regarding postoperative complications have been noted in comparison with conventional postoperative management, but effects such as reduction in hospitalization duration and medical costs have been noted [31,32]. At our hospital, we are actively engaged in the concept of enhanced recovery after surgery, but the decrease in muscle mass during the perioperative period could not be stopped in our patients.
During the perioperative period, as a result of treatment-associated stress, protein catabolism is promoted and protein anabolism is suppressed. This is a defense reaction of the body against stress and remains difficult to control [33]. Furthermore, the ratio of endogenous to exogenous energy expended during the study period is unknown; exogenous energy cannot be used effectively even if the body is extensively replenished with exogenous energy alone [34]. Thus, it is necessary to accept some loss of body weight and muscle mass during the perioperative period.
In the present study, the frequency of sarcopenia did not change at 1 year postoperatively compared with 6 months postoperatively, but increasing visceral obesity was noted. At 6 months postoperatively, food intake had increased and the decrease in body weight had stopped. A comparison of the findings at 6 months and 1 year postoperatively revealed a significant increase in visceral fat area but no significant increase in SMI. In other words, the weight gained between 6 months and 1 year postoperatively was mostly fat. Therefore, the proportion of patients with both sarcopenia and visceral obesity was found to increase at 1 year postoperatively. This cannot be judged from BMI and the amount of food intake; it is necessary to measure skeletal muscle and abdominal visceral fat.
It has been reported that resistance training is effective for sarcopenia in elderly people [35,36]. Furthermore, the necessity of complex intervention with nutritional intervention, including amino acid supplements, has been indicated [37]. Such complex intervention may be effective for preventing sarcopenia in elderly people following gastrectomy. However, there are many unclear points concerning the safety and efficacy of complex intervention in patients who have undergone gastrectomy as food intake is less in such patients. Nevertheless, if the food intake is improved and weight gain is noted, complex intervention may be effective for preventing sarcopenia.
Conclusions
To date, the main purpose of follow-up after gastrectomy for gastric cancer has been surveillance for early detection of recurrence and secondary cancer. However, in recent years, attention has also been paid to coping with postgastrectomy symptoms and overcoming nutritional issues [13]. Few studies have reported the presence of sarcopenia and its long-term characteristics following gastrectomy for gastric cancer. We believe that, henceforth, the importance of evaluating postoperative sarcopenia and visceral obesity must be considered. In particular, sarcopenia and visceral obesity should be carefully monitored after increases in BMI and food consumption.
|
2019-09-13T13:07:17.488Z
|
2019-09-11T00:00:00.000
|
{
"year": 2019,
"sha1": "2412d711faf4c7b02e9093a16821b2ac32ad45e3",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0222412&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4284e0f848a3ac99b812d8592b57b95fe51f9319",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
252575360
|
pes2o/s2orc
|
v3-fos-license
|
An LC–MS/MS quantification method development and validation for the dabrafenib in biological matrices
An accurate, specific, and robust liquid chromatography–tandem mass spectrometry technique was developed and validated for the quantification of dabrafenib in plasma samples. The drug and internal standard were extracted by liquid–liquid extraction using ethyl acetate. Reverse-phase high performance liquid chromatography was operated with a Phenomenex C18 analytical column (50 × 4.60 mm, 5.0 µm) and an isocratic mobile phase of acetonitrile and 0.1% formic acid (85:15, v/v). A triple quadrupole mass detection system was used for the analysis, with electrospray ionization in positive mode operated in multiple reaction monitoring using ion transitions of m/z 520.10→176.98 and m/z 465.09→244.10 for dabrafenib and sorafenib, respectively. The calibration curve was linear over the concentration range 74–2,956 ng·ml⁻¹, and method validation was executed as per the United States Food and Drug Administration guidelines for bioanalytical methods. Recoveries were more than 92.5%, accuracy fell between −1.53% and 2.94% relative error, and relative standard deviation values were <4.65%. The high sensitivity, accuracy, and precision, together with good recovery from plasma samples, support the applicability of the developed method for pharmacokinetic and bioequivalence studies.
Dabrafenib is used as a single agent, acting as a kinase inhibitor, for the management of unresectable or metastatic melanoma with B-RAF (V600E) mutations, as detected by an FDA-approved test (Ascierto et al., 2012; FDA, 2013, 2018). Use of the combined formulation has demonstrated a strong response rate; however, improvement in disease-related symptoms or overall survival has not been confirmed for this drug when combined with trametinib (Dhillon et al., 2007; Long et al., 2017).
A literature review on dabrafenib revealed that only two LC–tandem mass spectrometry approaches (Merienne et al., 2017; Svante et al., 2017) have been described for the assessment of dabrafenib in plasma samples, and both quantified the drug in combination with other components. Therefore, the current work aimed to develop an accurate, specific, and robust liquid chromatography–tandem mass spectrometry (LC-MS/MS) technique for the assessment of dabrafenib in human plasma as a single drug.
Chemical reagents
The dabrafenib (purity: 99.68%) standard and sorafenib (purity: 99.87%), used as the internal standard (IS), were obtained from MSN Labs, Hyderabad, India. Acetonitrile (ACN) and methyl alcohol of LC grade were acquired from Merck, Mumbai, India. Deionized water was obtained from a Milli-Q water system (Millipore, USA).
LC-MS/MS system and its settings
The LC-MS/MS system comprised an Agilent 1200 liquid chromatograph with dual SL pumps and an Agilent 6164 triple quadrupole mass spectrometer equipped with an electrospray ionization source (CA). Chromatographic data were processed with MassHunter software (version B.010.004). An isocratic mobile phase of ACN and 0.1% HCOOH (85:15, % v/v) was passed through a Phenomenex C18 analytical column (50 × 4.6 mm i.d., 5 µm). Analytes were assessed on the triple quadrupole mass spectrometer using electrospray ionization in the positive mode, operated in multiple reaction monitoring, with transitions of m/z 520.10-176.98 for dabrafenib and m/z 465.09-244.10 for sorafenib (Raviraj et al., 2016; Vijaykumar and Raviraj, 2019). The MS/MS settings were: nebulizer gas pressure, 45.0 psi; source temperature, 500ºC; capillary voltage, 6.00 kV; and dryer gas (N2) flow, 10 l/minute. The injection volume and autosampler temperature were set to 10 µl and 5.0ºC, respectively. A flow rate of 0.80 ml.minute−1 and a collision energy of 20 eV were used for the chromatographic elution.
Protocol for quality control samples
Individual 1.0 mg/ml stock solutions of dabrafenib and the IS were prepared separately in ACN (diluent). The dabrafenib stock solution was then serially diluted with diluent to obtain the working standards. An IS working solution at 250 ng.ml-1 was prepared by diluting the IS primary solution with diluent. The prepared standard solutions were stored at −20ºC until use.
Protocol for sample preparation
A 200 µl aliquot of plasma was placed in a 10 ml plastic tube, and 100 µl of the IS working solution was added to every sample except the blank. The mixture was sonicated for 20 minutes and then extracted with 5.0 ml of ethyl acetate. Next, the sample was centrifuged for 15.0 minutes at 5.0ºC and 5,000 rpm. The supernatant organic phase was transferred to a fresh glass tube and evaporated under a stream of nitrogen. The dried residue was reconstituted with 100 µl of mobile phase, and a 10 µl aliquot was injected into the LC-MS/MS system for analysis.
Analytical method validation
The developed analytical method was validated for the relevant validation parameters in accordance with the US-FDA regulatory guidelines (EMA, 2011; FDA, 2001).
Mass spectrometric instrument
During method development, a fresh dabrafenib solution was infused to optimize the parent and product ions. The precursor ion at m/z 520.10 was detected in the positive ionization mode. Upon fragmentation of the precursor ion, fragments at m/z 176.98 and 94.04 were observed, with the daughter ion at m/z 176.98 showing the highest intensity. Sorafenib was selected as the IS because its physicochemical properties are similar to those of dabrafenib, giving good recovery during sample preparation and validation. A multiple reaction monitoring (MRM) scan was used to identify the parent and product ions of both drugs, and the transitions were finalized as m/z 520.10-176.98 for dabrafenib and m/z 465.09-244.10 for sorafenib (Bhamare et al., 2019; Kulkarni et al., 2016a, 2016b; Patel et al., 2017).

Specificity

Blank plasma and plasma spiked at the lower limit of quantification (LLOQ) level (74 ng/ml) of dabrafenib and the IS were run on the LC-MS/MS system, and the outcomes are presented in Figure 2. No interfering peaks were observed for dabrafenib or sorafenib in the plasma samples. The drug and the IS eluted within 4 minutes, with retention times of 2.40 and 3.0 minutes for dabrafenib and sorafenib, respectively (ICH, 2005; Vikingsson et al., 2017).
Sensitivity and linearity
The LLOQ QC of the drug was fixed at 74 ng.ml−1 because, at this concentration level, the signal-to-noise ratio was >10.0 and the accuracy and precision findings were within 2.82% relative standard deviation (RSD). Calibration curves were constructed for every batch over the concentration range 74.0-2,596.0 ng.ml−1 for dabrafenib in plasma (Table 1). The regression equation, calculated from the average of six replicate calibration curves, was y = 0.00034 x + 0.00018 for dabrafenib, where "x" represents the plasma concentration (Nirav et al., 2017; Jaivik et al., 2017) and "y" is the peak-area ratio of analyte to IS.
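As an illustration of how concentrations are read off the calibration line, the short sketch below (not part of the original validation; the peak-area ratios and nominal levels are hypothetical) back-calculates plasma concentrations from the reported regression equation and reports the % relative error for each standard.

```python
# Minimal sketch: back-calculating dabrafenib concentrations from the
# reported calibration equation y = 0.00034*x + 0.00018, where y is the
# analyte/IS peak-area ratio and x the plasma concentration (ng/ml).
# The measured ratios and nominal levels below are hypothetical.

SLOPE, INTERCEPT = 0.00034, 0.00018

def back_calculate(peak_area_ratio):
    """Invert the linear calibration to recover the concentration."""
    return (peak_area_ratio - INTERCEPT) / SLOPE

# hypothetical calibration standards: (nominal ng/ml, measured ratio)
standards = [(74.0, 0.0254), (500.0, 0.1702), (2596.0, 0.8838)]

for nominal, ratio in standards:
    found = back_calculate(ratio)
    bias = 100.0 * (found - nominal) / nominal   # % relative error
    print(f"nominal {nominal:7.1f} ng/ml -> found {found:7.1f} ng/ml "
          f"({bias:+.2f}% RE)")
```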
Recovery, accuracy, and precision
Inter-day and intra-day accuracy and precision results are given in Table 2 and Figure 3. Intra-day precision ranged from 2.17% to 3.94% (RSD) for dabrafenib, and accuracy was within −1.53% to 2.94% relative error. Likewise, inter-day precision ranged from 1.54% to 3.58% (RSD) for dabrafenib, and accuracy was within −1.37% to 4.65% relative error (Patel et al., 2011).
The average recovery of dabrafenib ranged from 95.35% to 101.37% at the three QC concentrations (Table 3). The LLE procedure used for sample preparation recovered both dabrafenib and sorafenib (97.49%) from plasma with high efficiency (Titier et al., 1997). These findings demonstrate the accuracy (recovery) and precision (%RSD) of the method.
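The precision, accuracy and recovery figures above follow from simple replicate statistics; the sketch below (hypothetical replicate values, not data from this study) shows the usual calculations of %RSD, % relative error and % extraction recovery.

```python
# Minimal sketch of the precision/accuracy/recovery calculations used in
# bioanalytical validation; the replicate QC values below are hypothetical.
from statistics import mean, stdev

def rsd(values):                       # precision, % relative standard deviation
    return 100.0 * stdev(values) / mean(values)

def relative_error(values, nominal):   # accuracy, % relative error
    return 100.0 * (mean(values) - nominal) / nominal

def recovery(extracted, unextracted):  # extraction recovery, %
    return 100.0 * mean(extracted) / mean(unextracted)

# hypothetical LQC replicates (ng/ml) at a nominal level of 222 ng/ml
lqc = [218.4, 225.1, 230.2, 216.9, 221.7, 227.3]
print(f"RSD : {rsd(lqc):.2f} %")
print(f"RE  : {relative_error(lqc, 222.0):+.2f} %")

# hypothetical peak areas of extracted vs. unextracted (neat) QC samples
print(f"Recovery: {recovery([10450, 10220, 10610], [10680, 10750, 10590]):.1f} %")
```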
Matrix effect
The matrix effect results are presented in Table 4. The ratio of the peak-area response of the drug/IS spiked into blank plasma extract to that of the drug/IS dissolved in mobile phase ranged from 95.62% to 102.34% for dabrafenib at the low quality control (LQC) level and from 95.62% to 102.76% at the high quality control (HQC) level. These results indicate that matrix interference on the analytes was insignificant under the developed LC-MS/MS conditions (Jaivik et al., 2017; Titier et al., 1997).
Stability tests
The stability of dabrafenib was established by subjecting quality control samples to different storage conditions (FDA, 2001; Nirav et al., 2017). The conditions comprised long-term stability after storage at −20.0°C for 30.0 days, short-term stability at room temperature for 8.0 hours, processed-sample (extract) stability after 24.0 hours at 4.0°C, and three complete freeze–thaw cycles (frozen at −20.0°C for 12.0 hours). The stability results for the quality control samples in plasma are presented in Table 5. The measured accuracy values for dabrafenib were between 94.98% and 103.27%, which is acceptable under the regulatory guidelines.
CONCLUSION
A new, specific, accurate, and validated LC-MS/MS method was developed for the quantitation of the US-FDA-approved anticancer drug dabrafenib in human plasma. Intra-day precision ranged from 2.17% to 3.94% (RSD) for dabrafenib, with accuracy within −1.53% to 2.94% relative error; inter-day precision ranged from 1.54% to 3.58% (RSD). Overall, the intra-day and inter-day accuracies were within −1.37% to 4.65% relative error, and the precision (RSD) was less than 3.94%. The drug was sufficiently stable under the different analytical conditions tested. A liquid–liquid extraction method was optimized for extracting dabrafenib from plasma, with a mean recovery of 98.62%, using sorafenib as the IS. The sensitivity, sound validation results, and high recoveries from plasma render the proposed method applicable to bioequivalence and pharmacokinetic studies.
CONFLICT OF INTEREST
The authors report no financial or any other conflicts of interest in this work.
FUNDING
There is no funding to report.
AUTHOR CONTRIBUTIONS
All authors made substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; took part in drafting the article or revising it critically for important intellectual content; agreed to submit to the current journal; gave final approval of the version to be published; and agree to be accountable for all aspects of the work. All the authors are eligible to be an author as per the international committee of medical journal editors (ICMJE) requirements/guidelines.
ETHICAL APPROVALS
This study does not involve experiments on animals or human subjects.
DATA AVAILABILITY
All data generated and analyzed are included within this research article.
PUBLISHER'S NOTE
This journal remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Study on the Temporal and Spatial Evolution of the Total-Factor Water Efficiency in the Yangtze-River Economic Belt
Based on the SBM-Malmquist model with unexpected output, this paper estimates and decomposes the total-factor water efficiency (TFWE) of 11 provinces and cities in the Yangtze-river Economic Belt from 2000 to 2017. The study finds that: (i) TFWE in the Yangtze-river Economic Belt improved significantly over the sample period, with an average value of 0.70, a maximum of 0.91 in 2017 and a minimum of 0.56 in 2006; (ii) provinces and cities in the eastern region of the Yangtze-river Economic Belt, including Shanghai, Zhejiang and Jiangsu, tend to have higher efficiency than the central and western regions over the same period; (iii) the technical change index (Tech), obtained by further decomposition of the efficiency, is the main contributor to efficiency improvement.
Introduction
The 19th National Congress of the Communist Party of China pointed out that the development of the Yangtze-river Economic Belt should be promoted under the guidance of balancing protection and development. With the rapid development of the economy and society, water resources gradually play a dominant role in production, daily life and ecology [1,2]. As one of the most important regions for promoting high-quality economic development in China, the Yangtze-river Economic Belt now faces serious problems of pollution-induced water shortage and an imbalanced distribution of water resources in time and space. Regional development involves many factors, such as population, space, economy and society, and often emphasizes economic and social benefits, whereas water resources represent an ecological or environmental factor, mainly reflecting ecological and technological effects. Only when the value of water resources and all these factors are considered together can sustainable and coordinated development be achieved.
Total-factor water efficiency (TFWE) was first proposed by Hu and Wang [3]; it emphasizes the relationship between comprehensive inputs, including capital, labor and water resources, and economic output, and there is a trend toward treating pollutants as unexpected output. TFWE reflects the combined result of water consumption, economic development, technological innovation and other factors, and it has attracted the attention of many scholars recently [4,5]. Taking the whole country as the research scope, Caizhi Sun et al. [6] redefine the concept and connotation of the green efficiency of water resources, divide it into three dimensions (economic, social and environmental), and find that most regions with high water resource environmental efficiency are located in the eastern coastal areas. Taking the Yangtze-river Economic Belt as the target, Xi Lu et al. [7] conclude that the total factor productivity of water resources in the Yangtze-river Economic Belt has changed significantly, that the gap between regions is narrowing, and that all regions eventually reach the same steady-state level. Therefore, it is of great practical significance to study total-factor water efficiency in the Yangtze-river Economic Belt, in line with the requirements of the harmonious development of economy and ecology under the new normal economic situation.
Measurement Infrastructure
Many studies use data envelopment analysis (DEA) to evaluate the TFWE; the SBM model built on DEA additionally takes the unexpected output into account and solves the non-radial and non-angle problems, and is therefore widely applied. We use the SBM model to calculate the efficiency:

$$\rho^{*}=\min\frac{1-\frac{1}{m}\sum_{i=1}^{m}\frac{s_{i}^{-}}{x_{i0}}}{1+\frac{1}{S_{1}+S_{2}}\left(\sum_{r=1}^{S_{1}}\frac{s_{r}^{g}}{y_{r0}^{g}}+\sum_{r=1}^{S_{2}}\frac{s_{r}^{b}}{y_{r0}^{b}}\right)}\quad\text{s.t.}\quad x_{0}=X\lambda+s^{-},\; y_{0}^{g}=Y^{g}\lambda-s^{g},\; y_{0}^{b}=Y^{b}\lambda+s^{b},\; \lambda,s^{-},s^{g},s^{b}\ge 0 \qquad (1)$$

In Formula (1), each decision-making unit (DMU) has m inputs, S1 expected outputs and S2 unexpected outputs; s−, sg and sb are the corresponding slack (relaxation) variables, λ is the weight vector, and the objective value ρ* lies between 0 and 1 and decreases strictly as the slacks grow. Only when ρ* = 1 is the DMU efficient.
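As a minimal sketch (not the authors' code), Formula (1) can be solved as an ordinary linear program after the standard Charnes-Cooper transformation; the function below assumes SciPy is available and that the input, expected-output and unexpected-output matrices hold one column per province (DMU).

```python
import numpy as np
from scipy.optimize import linprog

def sbm_efficiency(X, Yg, Yb, k):
    """SBM efficiency with unexpected outputs for DMU k.

    X  : (m,  n) inputs, Yg : (S1, n) expected outputs,
    Yb : (S2, n) unexpected outputs; columns are DMUs.
    Uses the Charnes-Cooper linearisation, so the problem stays an LP.
    """
    m, n = X.shape
    s1, s2 = Yg.shape[0], Yb.shape[0]
    x0, yg0, yb0 = X[:, k], Yg[:, k], Yb[:, k]

    # variable order: [t, Lambda(n), S-(m), Sg(s1), Sb(s2)]
    nvar = 1 + n + m + s1 + s2
    c = np.zeros(nvar)
    c[0] = 1.0                                  # objective: t - (1/m) sum S-/x0
    c[1 + n:1 + n + m] = -1.0 / (m * x0)

    A_eq, b_eq = [], []
    # normalisation: t + (1/(S1+S2)) (sum Sg/yg0 + sum Sb/yb0) = 1
    row = np.zeros(nvar); row[0] = 1.0
    row[1 + n + m:1 + n + m + s1] = 1.0 / ((s1 + s2) * yg0)
    row[1 + n + m + s1:]          = 1.0 / ((s1 + s2) * yb0)
    A_eq.append(row); b_eq.append(1.0)
    # inputs: X Lambda + S- - x0 t = 0
    for i in range(m):
        row = np.zeros(nvar); row[0] = -x0[i]
        row[1:1 + n] = X[i]; row[1 + n + i] = 1.0
        A_eq.append(row); b_eq.append(0.0)
    # expected outputs: Yg Lambda - Sg - yg0 t = 0
    for r in range(s1):
        row = np.zeros(nvar); row[0] = -yg0[r]
        row[1:1 + n] = Yg[r]; row[1 + n + m + r] = -1.0
        A_eq.append(row); b_eq.append(0.0)
    # unexpected outputs: Yb Lambda + Sb - yb0 t = 0
    for r in range(s2):
        row = np.zeros(nvar); row[0] = -yb0[r]
        row[1:1 + n] = Yb[r]; row[1 + n + m + s1 + r] = 1.0
        A_eq.append(row); b_eq.append(0.0)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * nvar, method="highs")
    return res.fun if res.success else np.nan
```

Calling sbm_efficiency for each province and year on the capital, labor and water inputs, GDP and wastewater discharge yields the efficiency scores that the Malmquist index then compares across periods.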
Then we use the Malmquist productivity index to decompose the change in efficiency into technical change (Tech) and technical efficiency change (Effch). The comprehensive productivity index is

$$M\left(x^{t+1},y^{t+1},x^{t},y^{t}\right)=\left[\frac{D^{t}\left(x^{t+1},y^{t+1}\right)}{D^{t}\left(x^{t},y^{t}\right)}\times\frac{D^{t+1}\left(x^{t+1},y^{t+1}\right)}{D^{t+1}\left(x^{t},y^{t}\right)}\right]^{1/2} \qquad (2)$$

On the basis of Formula (2), technical efficiency change (Effch) is further divided into pure technical efficiency change (Pech) and scale efficiency change (Sech), giving

$$TFP=Effch\times Tech=Pech\times Sech\times Tech \qquad (3)$$
In Formula (3), efficiency improves when total factor productivity (TFP) is greater than 1. Here, Tech reflects the shift of the technological production frontier, Effch represents the catch-up effect, Pech is the technical efficiency change index under the assumption of variable returns to scale, and Sech represents the impact of scale economies on efficiency.
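For concreteness, a small sketch of the decomposition in Formulas (2) and (3) is given below; it is not the authors' code, and the distance-function values are hypothetical, standing in for the DEA scores estimated against the period-t and period-(t+1) frontiers.

```python
# Minimal sketch of the Malmquist decomposition in Formulas (2)-(3).
# D[s][u] is the distance (efficiency) of the period-u input/output bundle
# evaluated against the period-s frontier; the numbers are hypothetical.
from math import sqrt

D = {
    "t":  {"t": 0.62, "t1": 0.71},   # period-t frontier
    "t1": {"t": 0.58, "t1": 0.69},   # period-(t+1) frontier
}

effch = D["t1"]["t1"] / D["t"]["t"]             # catch-up effect
tech = sqrt((D["t"]["t1"] / D["t1"]["t1"]) *    # frontier shift
            (D["t"]["t"] / D["t1"]["t"]))
tfp = effch * tech                              # Formula (2)

print(f"Effch = {effch:.3f}, Tech = {tech:.3f}, TFP = {tfp:.3f}")
# Under variable returns to scale, Effch would be split further into
# Pech * Sech (Formula (3)) using VRS and CRS distance functions.
```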
Description of Variables
The variables included in our estimation are as follows. We select 2000-2017 as the sample period and the Yangtze-river Economic Belt as the research object. The input indicators are capital, labor and water resources; the expected output indicator is regional GDP; and the unexpected output indicator is wastewater discharge. Capital input is the fixed-asset capital stock estimated with the perpetual inventory method, with 2000 as the base year. Labor input is the arithmetic average of the number of employees in the current and previous periods. Water resources input is the total water consumption of each province or city over the years. GDP is deflated using the GDP deflator. Fixed assets, employment, GDP and sewage discharge are all taken from the China Statistical Yearbook and the provincial statistical yearbooks, and total water consumption is from the China Water Resources Bulletin.
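The capital-stock construction can be illustrated with a short sketch of the perpetual inventory method; the initial stock, investment series and depreciation rate below are assumed for illustration only and are not the values used in the paper.

```python
# Perpetual inventory method: K_t = K_{t-1} * (1 - delta) + I_t,
# with investment expressed in constant 2000 prices.
# The initial stock, depreciation rate and investment series are assumed.

def capital_stock(k0, investments, delta=0.096):
    """Return the capital-stock series given an initial stock k0 (year 2000),
    a list of real investment flows for the following years, and an assumed
    depreciation rate delta."""
    stocks = [k0]
    for inv in investments:
        stocks.append(stocks[-1] * (1.0 - delta) + inv)
    return stocks

# hypothetical real investment (billion yuan, 2000 prices) for 2001-2005
k_series = capital_stock(k0=1200.0, investments=[150.0, 165.0, 180.0, 200.0, 225.0])
for year, k in zip(range(2000, 2006), k_series):
    print(year, round(k, 1))
```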
Overall Analysis of the Efficiency
Two points can be summarized from Table 1. On the one hand, the efficiency shows an upward trend, with significant changes after 2008. TFWE in the Yangtze-river Economic Belt improved significantly, with an average value of 0.70, a maximum of 0.91 in 2017 and a minimum of 0.56 in 2006. Before 2008, the efficiency fluctuated within a small range around 0.58 and the change was not obvious, which was related to the economic level and industrial structure at that time: most provinces relied on the secondary or even the primary industry as the leading industry, water consumption lacked reasonable allocation and systematic arrangement, and the actual level of management technology fell far short of demand. After 2008, with the substantial improvement of China's economic strength and the large-scale introduction and use of water-saving technology, efficiency improved greatly, with the average value reaching 0.82 and showing a stable upward trend. Taking 2008 as a turning point, the government reallocated water resources, which overcame the distortion of resource allocation to some extent and effectively alleviated the imbalance and inefficiency in the regional distribution of water resources. In recent years, the strictest water resources management system has been introduced: all regions adhere to the "Three Red Lines" of water resource management and give priority to water conservation, which has produced a qualitative leap in efficiency.
On the other hand, the efficiency shows significant differences in time and space. Shanghai, Zhejiang and Jiangsu are higher in each period, while Anhui, Guizhou and Jiangxi are lower. Water resource efficiency is closely related to economic geography. Considering the imbalance in economic development across regions, we obtain the ordering "East > West > Middle" within the Yangtze-river Economic Belt: the eastern region is far higher than the central and western regions, while the difference between the central and western regions is not obvious. It can be inferred that the gap in water resource utilization efficiency among regions is inseparable from geographical location, economic and technological level, and policy support. The eastern region has a large per capita GDP and leads the country in comprehensive strength, while some provinces in the central and western regions, although relatively rich in water resources, face unfavorable climatic conditions and geographical environments, coupled with backward water resources development technology. We find that, during the sample period, the water resource utilization efficiency of Shanghai reached 1; that is, inputs and outputs achieved the optimal match, which shows that Shanghai has a high level of technology and water resource allocation capacity.
Decomposition Results Analysis
After further decomposition with the Malmquist productivity index, we find that the average value of TFP over the sample period is 1.020 and, with 2008 as the turning point, it shows an overall upward trend. The average annual change rate of Tech is 5.0% with stable growth, making it the main contributor to efficiency improvement. The average annual change rate of Effch is −2.9%, while the average value of Pech is 0.978 and that of Sech is 0.992; in other words, Pech is the main factor hindering the growth of Effch, while the effect of Sech is relatively weak owing to water resource endowments and other factors. From the provincial perspective, Fig. 1 shows that seven provinces have TFP greater than 1, with Shanghai first and Jiangxi last. The four provinces with lower efficiency are mainly in the central and western regions. This is mainly because these provinces, including Jiangxi, Anhui and Yunnan, are constrained by their economic environment and geographical location, receive relatively little industrial transfer, and are limited in soft-power aspects of management and operation such as technology introduction and talent training.
Conclusions and Policy Recommendations
Taking total-factor water efficiency as the research focus, this paper uses the input and output data of 11 provinces and cities in the Yangtze-river Economic Belt over the sample period 2000-2017. The results show that the average TFWE was 0.70 and followed an upward trend during the sample period. Most provinces and cities with high efficiency were in the eastern region, which was markedly higher than the central and western regions. After further decomposition, we find that technical change makes the greatest contribution to efficiency improvement. In view of the current progress in water efficiency in the Yangtze-river Economic Belt, we give the following policy recommendations.
Firstly, the government should actively promote the orderly transfer of industries in the Yangtze-river Economic Belt on the basis of the carrying capacity of water resources and the environment in different regions, so as to form a reasonable and efficient spatial distribution of industry and water resource utilization. Secondly, the central authorities should strengthen macro supervision of water resource utilization and take measures to improve transparency and clarify responsibilities at every level, including appointing local government heads as river chiefs across the nation to clean up and protect water resources; meanwhile, they should carry out river and lake pollution control and water ecological restoration, and emphasize the dual role of regional governments' economic leverage and administrative means. Thirdly, regions with high water efficiency should drive their neighboring areas through cross-region spillovers; at the same time, local governments should accelerate improvement in the technical efficiency of water resource utilization, and in particular encourage relevant enterprises to pursue technological innovation and improve management efficiency.
THEORETICAL BASES OF FORMATION AND DEVELOPMENT OF AGRICULTURAL ORGANIC PRODUCTION IN UKRAINE IN MODERN ECONOMIC CONDITIONS
The article discusses the current state of the global market for organic products and the main trends of organic production in Ukraine. An analysis of the main indicators of the leading countries in the production and sale of organic products is carried out. Based on a comparison of the indicators of the Ukrainian market for organic products with the world leaders, it is concluded that the Ukrainian market for organic products needs the formation and implementation of a national management model, improvement of legislation and of the structure of certification organizations, and a program of financial, information and marketing support for domestic producers.
Introduction
In modern economic conditions, the effective management of agricultural production depends on the degree of its balance and the organizational and economic methods applied. The main goal of organic farming is to produce high-quality food under precisely defined conditions (Stojic, Dimitrijevic, 2020). Public awareness has reached a level at which an increase in production volume is no longer the only criterion for agricultural activity; the preservation of natural resources is becoming more and more important. This is due to the constantly increasing anthropogenic pressure on the environment (soil cover, biological organisms, the atmosphere and water resources), which disturbs the natural balance.
The dilemma between the further development of agricultural production and the preservation of the natural environment as the basis of life for future generations has prompted the search for alternative paths for the development of the industry. Over the past three decades, leading foreign scientists and agricultural practitioners, seeking to solve territorial environmental problems and improve food quality, have gradually switched to organic farming methods, turning this area of production into a distinct sector. In particular, the agrarian sector of the Ukrainian economy greatly influences the formation of gross domestic product and thereby ensures the country's food security. One of the strategic tasks of the state in ensuring food security is the greening of agricultural production, in which increasing the volume of organic production occupies an important place.
Ukraine has everything necessary for the formation of agriculture, focused on the production of organic products: long-term agricultural traditions, vast areas of agricultural land, as well as an insignificant level of intensification and chemicalization of the agricultural sector in comparison with industrialized countries. Taking into account the considerable resource potential of the country in the agricultural sector, it is of great importance to provide a mechanism that would contribute to the development of organic agricultural production in Ukraine and increase on the basis of this competitiveness of the national economy.
Materials and methods
The theoretical and methodological basis of the study is the dialectical method of cognition, a systematic approach to the study of economic phenomena and processes, scientific works of domestic and foreign scientists on the problems of theory and practice of ensuring the development of organic production. Special research methods are also used, in particular: abstract-logical -to generalize the components of the mechanism for ensuring the development of organic production, formulating conclusions; economic and statistical -while analyzing the current state and predicting the prospects for the development of organic production in Ukraine and the world; graphic -when constructing graphic images of the processes under study.
Results and discussions
Ecological and economic essence of agriculture focused on the production of organic products

The feasibility of forming the theoretical foundations of economic relations that arise in the process of interaction between human society and the natural environment, as well as the need to develop methods for regulating the rational use of natural resources, predetermined the emergence of a new scientific direction - environmental economics, which arose on the basic principles of the scientific theory of welfare and neoclassical economic theory.
The regulatory framework of environmental economics is the theory of external effects of economic production, which have a positive or negative effect on the opposite side. The theory of external effects is based on the fact that environmental pollution causes economic damage, and this damage can be materially estimated and, if necessary, monetarily compensated. The English economist A. Pigou was one of the first to study the costs associated with external effects. He proved that environmental pollution leads to an increase in external costs. At the same time, the main goal of any organization is to minimize production costs in order to increase profits, which results in a desire to reduce environmental costs as well. In this case, environmental pollution is not considered part of production costs and, accordingly, the cost of eliminating pollution is not included in the cost of production. With this approach, society, individual organizations and citizens are forced to spend additional material and financial resources on eliminating environmental damage. Consequently, total social costs and production costs are formed from individual and external costs, expressed in monetary terms (Pigou, 1924).
The representative of the neoinstitutional orientation in economic theory, R. Coase, believes that the root cause of external effects is the lack of clearly established ownership rights to natural resources and environmental objects. The author believed that if this shortcoming is eliminated, then optimality in the quality of the environment can be ensured in market conditions. In this case, the role of the state will be to establish such ownership rights (Coase, 1990).
Nevertheless, in spite of the achievements of scientists in the field of the theory of external effects, the main problems of taking into account external effects in the formation of the economic mechanism of environmental management have not yet been widely reflected in scientific research.
The further formation and development of social production dictates the need to take into account environmental factors and principles. It requires the search for new directions in the field of environmental management, based on maintaining the basic conditions that are important for human life and social production -clean air, water and soil resources, and neutralizing the possibility of depleting these resources. Objectively, there is a need of development of the concept of ecological-economic balance. Thus, the problems closest to those identified were those contained in the Concept of Sustainable Development of the World Community, presented at the United Nations conference in the early 90s, which was formed as an alternative to the prevailing stereotype of "consumer society" and the main economic development paradigms. It is believed that the concept of "sustainable development" was first mentioned in 1987 at an annual report presented by the World Commission on Environmental Protection, as "a development process in which existing social needs are met without the risk of likely harm to the process of satisfying the needs of future generations" (United Nations, 1992).
As theoretical studies show, the problems of sustainable development are often associated only with the state of the natural environment, not taking into account or underestimating the equally significant factors associated with sustainable development -political, social, economic, cultural, national-ethnic, etc. In the modern scientific environment, there is a position that is based on the principles of sustainable development. It is associated with the need to move from the consideration of the economic system in its pure form to the analysis of ecological and economic systems. Theoretical and methodological foundations of the sustainability of agricultural production in the ecological and economic aspect are investigated in the works of many economists. For example, A. Zhuchenko believes that a unilateral, mainly technogenic and chemical strategy for intensifying agricultural production, based on the application of ever-increasing costs of irreplaceable energy resources, has shown its failure to ensure sustainable, resource-energy-efficient and environmental development of agricultural production. As a result, the author proposed the use of a strategy of adaptive intensification, focused on the integrated use of chemical, technological and biological factors in order to increase the efficiency of agricultural production. This strategy includes: 1) elimination of environmental pollution and destruction when chemical fertilizers, plant protection products are applied, and gentle soil treatment is used; 2) bio-greening of technological processes of intensification; 3) reduction of energy costs; 4) production of quality and safe food and industrial raw materials (Zhuchenko, 1994).
In partnership with other United Nations Member States, Ukraine has undertaken the obligation to adapt and implement the global goals and objectives of the "Sustainable Development Goals", which were approved at the United Nations Summit on Sustainable Development (United Nations, 2015), taking into account national economic, environmental, social, legal and other specifics of the strategy of balanced (sustainable) development of Ukraine until 2030. In this regard, the President of Ukraine issued a Decree "On Sustainable Development Goals of Ukraine until 2030" of September 30, 2019 (President of Ukraine, 2019), which names the goals and outlines tasks for organic production through the prism of solving problems to overcome poverty, prevent hunger, ecology, nature management, environmental protection, use of land and other natural resources in agriculture, investment attraction, etc.
It is rightly emphasized in the legal literature on the strategic importance of cooperation between Ukraine and the European Union in such areas of cooperation as promoting modern and sustainable agricultural production, taking into account the need to protect the environment, in particular, disseminating the use of organic production methods and the use of biotechnology through the introduction of best practices in these areas (Urkevych, 2015). Based on foreign experience in organic agricultural production, some Ukrainian organizations in the agricultural sector are starting to turn agricultural production to alternative and innovative methods. The land use of these organizations is based on the use of an ecological fertilizer system that allows the use of organic and green fertilizers instead of chemical natural ones. Agrotechnical soil cultivation in this management system is considered as energy-saving. It is based on the combination of plowing and surface soil cultivation in accordance with the requirements of the climatic and territorial landscape conditions of the area, as well as the use of combined units. Over the past two decades, Ukrainian agriculture has been trying to introduce organic farming methods into agricultural production and creating specialized companies for the cultivation and processing of organic agricultural products in various regions of Ukraine.
It should be noted that a significant share of these companies was formed with the financing and support of a number of European countries -Germany, Switzerland, Denmark. For example, a foreign investor is IFC, which provided $ 95 million to agro holding "Kernel" for working capital replenishment (Fedchyshyn, Ignatenko, Shvydka, 2019).
Most of the products manufactured by these enterprises are supplied to the ecological markets of European countries, which make producers of ecologically clean products dependent on market conditions, hampering their orientation on the domestic market of ecological products.
Ukrainian land has always aroused interest from foreign investors as a means of production and an object of investment. The tendency to increase such interest has not changed for a long time, and in the near future there are no preconditions for reducing the interest of the land. Taking into account the fundamental importance of the land as a strategic asset for any country, the regulation of property relations and land use occupies a special place in all developed legal systems (Fedchyshyn, Ignatenko, Bondar, 2019).
It should be noted that according to the study "The world of agriculture. Statistics and emerging trends", conducted in 2017 by IFOAM and the Research Institute of Organic Agriculture (German: Forschungsinstitut für biologischen Landbau - FiBL), there were 181 countries engaged in organic farming (table 1). In 2017, there were 2.9 million organic producers in the world, compared to 200 thousand in 1999. Moreover, 69.8 million hectares of certified agricultural land were allocated for organic farming (11 million hectares in 1999). There are only 93 countries in the world where the production and marketing of organic products is legislatively fixed and regulated by legal acts. In other countries, due to the lack of a legislative framework for regulating the organic agriculture industry, the production of organic products is limited to the choice of manufacturers who refuse to use mineral fertilizers and plant protection products.
The global market for organic products amounted to 90 billion euro in 2017, and the market has grown every year since 1999. The country with the largest market for organic products is the United States (40 billion euro), followed by Germany with a market size of 10 billion euro. The third and fourth places are occupied by France and China (7.9 billion euro and 7.6 billion euro, respectively) (figure 1) (Wilier, Lernoud, 2019). Organic distribution channels vary from country to country. In the past, countries involved in retail trade showed steady growth in the volume of their organic markets; examples of such countries are Austria, Denmark, Switzerland and the United Kingdom (table 2) (Wilier, Lernoud, 2019). However, the financial crisis has shown that dependence on supermarkets is dangerous. Supermarkets, in turn, have consolidated their position as a driving force in the market, so specialized sales channels face huge competition.
It should be noted that there is a gradual increase in demand for organic agricultural products and in the domestic market of Ukraine. One of the most important channels of distribution and promotion of organic products in Ukraine are small specialized health food stores in major cities (for example, organic shops "Натур Бутик", organic grocery store network "Eco-Chic", etc.). Supermarkets are the most powerful organic distribution channel in Ukraine (Bezus, 2011). Supermarket "Good Wine" sells domestic and imported organic production, combining it in the "Good Food" section. Supermarket "Megamarket" represented to consumers separate sections with organic products. Certified organic products are also presented in supermarkets "Auchan", "Delight", "Billa", "Furshet", "Novus", etc., with special attention on organic dairy and meat products, cereals, flour, bakery products, jams, juices, eggs, honey, teas, vegetables, fruits, etc. (Boyko, 2011). Consumer demand and the emergence of organic agricultural products in supermarkets have led to a significant increase in sales, even though its share is less than 1% on store shelves (Kostin, 2011).
There are different views on the demand for these agricultural products: some experts claim that a segment of consumers, which are ready to pay a higher price for ecologically clean agricultural food (especially in large cities), has already emerged in the country, while others believe that such products have not yet been consumed. However, as we can see, there is a trend of increasing demand in the organic agricultural market in the world and increased interest on the part of business entities. Therefore, it can be argued that production of such products has increased (Chernishov, Levchenko, Mazurkevich, 2016).
At present, there are 69.8 million hectares of organic land worldwide. Only lands that have undergone a transitional period are considered. The region with the largest area of organic land is Oceania, with 35.9 million hectares certified for organic farming. This is followed by Europe with an area of 14.6 million hectares, Latin America -8 million hectares, Asia -about 6.1 million hectares, North America -about 3.2 million hectares and Africa -2.1 million hectares (figure 2) (Wilier, Lernoud, 2019). In Oceania, more than a half (51 %) of the world's organic land is concentrated. Europe is a region that has shown fairly solid organic land growth over the past few years. The largest share belongs to countries such as: Spain (2.1 million hectares), Italy (1.9 million hectares), France (1.7 million hectares). In this rating, Ukraine occupies the 20th place with an area of organic land of 411.2 thousand hectares (Lialina, Matviienko-Biliaieva, 2019). Europe accounts for 21% of the world's organic land, followed by Latin America (11%).
The increase in the total area of agricultural organic land is due to the transformation of existing arable land and gardens in accordance with the standards of organic agriculture, as well as the development of new territories. For example, in Europe, out of 12.7 million hectares of organic land, 8 million have already passed the transitional period, and the rest are in transition to organic production. This trend indicates that in the near future we can expect an increase in the supply of organic products on the market. It should be emphasized that Ukraine has all the necessary conditions for the production of organic products and their further development, which can satisfy not only domestic demand but also occupy a niche in the world market. Some steps have already been taken in this direction. The total area of agricultural land in Ukraine occupied by organic production in 2017 was 420 thousand hectares. However, a considerable part of organic agricultural production (about 80%) is exported abroad because domestic markets remain underdeveloped. The main export market for Ukrainian organic products is the European Union. The Netherlands, Germany, Switzerland, the Czech Republic, Poland, Italy, Greece, Moldova and Norway are the main countries to which organic products are exported from Ukraine. Middle Eastern countries, such as the United Arab Emirates, are beginning to take an interest in Ukrainian certified organic products.
The development of organic agriculture strongly depends on economic factors, mainly including demand, prices of organic products and the level of producers' support (Baer-Nawrocka, Blocisz, 2018).
The practice of farming focused on the production of organic products shows that organic farmers do not earn more income, owing to their higher production costs, including labor, insurance and marketing charges (Uematsu, Mishra, 2012). The profitability of organic farms depends strongly on higher product prices (Krause, Machek, 2018). According to Nieberg's and Offermann's research, it was easier for organic farms to achieve higher prices for crop production, but more difficult for livestock production (Nieberg, Offermann, 2003). Consumers' preferences are therefore the fundamental factor in the success of the market for organic products. Numerous studies have found that health benefits are the main motive for buying organic food products. Consumers mostly describe organic food as ecologically acceptable, beneficial to health and of good sensory quality, while the main disadvantages are its high price and insufficient representation on the market (Gajdić, Petljak, Mesić, 2018).
So, foreign markets of ecological food are mainly targeted at consumers who are able and willing to buy a quality product at a higher cost. In Ukraine a class of wealthy people has also formed, but it will be wrong to orient the organic food market only to wealthy people.
We believe that Ukrainian agricultural producers of organic products need an appropriate segment of the food market, aimed at consumers who care about maintaining their health and the health of their loved ones. Consumers of organic products can be children (baby and diet food); people with poor health; patients undergoing rehabilitation, spa treatment; people with food allergies; agritourists and other organic products.
The importance should be given to scientific research in the direction of forming a strategy for the transition of a particular segment of agricultural producers to the organic way of farming.
Methodological aspects of the formation of the concept of agricultural development, focused on the production of organic products
When forming a methodological approach to the development of the system of land relations in the direction of agricultural production of organic products, there is a real opportunity to introduce important adjustments to land relations at the local level. This is due to the unevenness of the factors of natural and economic environment. In addition to the political orientation of the authorities, there are still quite objective reasons that have a serious impact on the level of development of land relations at the local level. A modern feature of agricultural lands is not only a general decrease in their area, but also deterioration in the quality of their land, and a decrease in the soil-biological and economic fertility of the land.
In addition, today, a number of reasons can be identified that slow down the development of organic agricultural production in Ukraine: 1) difficulties with investing in projects for the development of production and processing of organic products; 2) lack of a market for organic products; 3) lack of qualified specialists in the field of greening land use and certification of organic products.
The current situation in the agricultural sector of Ukraine does not imply a quick and widespread rehabilitation of it. As a result, it is required at the state level to define clear strategic and tactical goals for the systematic development of agriculture oriented towards the organic production. It is necessary to justify specific ways to achieve these targets, clearly define measures of state support, and outline the sequence of stages of reforming the system of land relations with organic development guidelines.
It seems that the awareness of the importance and need for a gradual transfer of the agricultural land use system from traditionally developed to organic will give a new impetus to the development of the entire agricultural sector. A systematic analysis and assessment of the possibility of using the world experience in organic farming in conjunction with the established traditions of land use are a prerequisite for the strategic development and strengthening of the position of agriculture in the system of the national economy.
It should also be noted that due to the increasing growth in the consumption of organic products in the economically developed countries of the European Union, North America and Asian countries, and also taking into account the limited land resources suitable for the purpose of maintaining an organic land use system in these countries, it can be assumed that in subsequent years, developing countries will be able to take a leading place in the global production and export of organic food. Ukraine, with its significant potential in increasing the land area suitable for the production of organic products, the availability of labor resources in rural areas, can occupy its niche in the global organic food markets.
In this regard, it is necessary to make timely and comprehensive decisions in determining the nomenclature of organic products, the formation of mechanisms of state support for agriculture, focused on the production of organic products and the promotion of organic products on domestic and foreign markets.
The development of agriculture in Ukraine focused on the production of organic products should be based on solving a list of interrelated priorities:
− conducting land monitoring in order to determine the land potential suitable for the production of organic products;
− justification of methodological foundations for the development of a mechanism for the formation and development of agriculture focused on the production of organic products;
− development and co-financing of programs aimed at the conservation and restoration of soil fertility of agricultural lands;
− implementation of programs aimed at improving knowledge and developing skills in maintaining organic land use systems for agricultural producers of various organizational and legal forms of ownership, in order to overcome the deficit of economic thinking and to establish an adequate level of education;
− development of national standards for certification of agricultural organic products, as well as the creation of conditions for organic products to pass international environmental certification.
The fundamental objective of the organic land use system is the development of incentives for the production and sale of organic food. The emerging system of organic farming should include the following activities: development and adoption of the regulatory framework necessary for the effective functioning of the system of organic agricultural production and markets for organic products; introduction of the necessary amendments to the current tax legislation aimed at supporting and economic stimulation of the developing organic sector of agricultural production; development of a set of measures and the adoption of a state program to support agricultural producers of organic products; providing consulting and information support to organic producers and the formation of an environmental culture of consumers; organization of an environmental management system in national agricultural production; organization of a centralized marketing service promoting the organic production of Ukrainian agricultural producers in domestic and international markets.
The main condition for the effective functioning of the proposed system is the development of an economic mechanism for organizing agricultural production of organic products both in large agricultural organizations and in small organizational and legal forms of management.
Today, there are many parties willing to engage in organic production in Ukraine and to invest in its development, but they need state support, especially during the conversion period. In Ukraine, there is no government strategy or program to support the development of organic farming, which hinders the formation of the organic agricultural market because investors, credit institutions and farmers themselves are uncertain about the feasibility, effectiveness and risks of such production. Therefore, first of all, it is necessary to introduce a state program for the development of organic agricultural production, which will guide the development of this sector of the economy, create the necessary framework for the coordination and control of organic production, and contribute to the expansion of markets for organic products. The main financial and economic measures for implementing state support for the development of organic agricultural production in Ukraine should include: subsidizing interest rates on loans, subsidizing part of the costs of production and crop insurance for organic producers, preferential lending and taxation, and improving mechanisms for regulating regional markets.
Thus the production of organic agricultural products, as a promising form of economy in Ukraine, depends on the method and extent of government support. Such support for organic producers abroad has become an effective tool for stimulating the development of organic farming.
Conclusions
Having considered the major global trends in the development and management of organic production and considering Ukraine's accession to the World Trade Organization and the association with the European Union, we can conclude that the Ukrainian market for organic products runs the risk of facing the expansion of foreign producers, which operate in much more favorable financial and legal conditions. Thus, in order to ensure that the Ukrainian market for organic products does not die as soon as it starts functioning, it needs to form and implement a national management model that will take into account both the interests of developing the domestic market and the interests of exporting organic products. Improving the legislation and structure of certification and supervisory organizations, drawing up a program of financial, informational and marketing support for domestic producers of organic products are those measures without which the development of Ukrainian market for organic products in the face of fierce international competition is almost impossible.
Search for a compressed supersymmetric spectrum with a light gravitino
The presence of a light gravitino as the dark matter candidate in a supersymmetric (SUSY) model opens up interesting collider signatures consisting of one or more hard photons together with multiple jets and missing transverse energy from the cascade decay. We investigate such signals at the 13 TeV LHC in the presence of compressed SUSY spectra, consistent with the Higgs mass as well as collider and dark matter constraints. We analyse and compare the discovery potential in different benchmark scenarios consisting of both compressed and uncompressed SUSY spectra, considering different levels of compression and intermediate decay modes. Our conclusion is that compressed spectra up to 2.5 TeV are likely to be probed even before the high luminosity run of the LHC. Kinematic variables are also suggested which offer a distinction between compressed and uncompressed spectra yielding similar event rates in the photons + multi-jets + missing transverse energy channel.
Introduction
The Large Hadron Collider (LHC) has already accumulated a substantial volume of data at √s = 13 TeV. Although the discovery of a scalar resembling the Higgs boson [1-6] in the Standard Model (SM) has laid the foundation of a success story, the absence of any new physics signal is a source of exasperation to those in search of physics beyond the SM (BSM). This applies to the search for phenomenologically viable supersymmetric (SUSY) scenarios as well. The non-observation of any supersymmetric particle so far at the LHC has strengthened the limits on many such low-scale SUSY models. While the large production cross-section of the coloured SUSY particles (sparticles) is already pushing the existing mass limits to the 2 TeV mark with the initial data of the 13 TeV run, the weakly interacting sparticles are still not that severely constrained [7,8]. With the LHC already operating close to its maximum centre-of-mass energy, consistent improvements in luminosity are expected to help accumulate enough data to probe the coloured sector up to almost 3 TeV, with some improvement for the weakly interacting sector too.
This lack of evidence for any low-scale SUSY events prompted the idea of a compressed sparticle spectrum [9-21], where the lightest SUSY particle (LSP) and the heavier sparticle states may be nearly degenerate. In such realizations of the mass spectra, the resulting final-state jets and leptons from the decay cascades of the parent particles are expected to be very soft, as is the overall missing transverse energy, which is a manifestation of the available visible transverse momenta. Events with such soft final states are susceptible to low acceptance efficiencies in the detectors and therefore lead to much smaller event rates in the conventional SUSY search channels. In the absence of hard leptons or jets arising from the cascade, one has to rely on tagging the jets or photons originating from initial state radiation (ISR) or final state radiation (FSR) to detect such events, where the available missing transverse momentum is characterized by the stability of the LSP in the cascades. Usually, in most SUSY models, the lightest neutralino (χ̃_1^0) is assumed to be the LSP. Thus, such signals allow a much lighter SUSY spectrum compared to the conventional channels with hard leptons, jets and large missing transverse momentum [22-32].
However, in the presence of a light gravitino (G̃) in the spectrum, such as in gauge mediated SUSY breaking (GMSB) models [33-40], the χ̃_1^0 is quite often the next-to-lightest SUSY particle (NLSP), which decays into a G̃ and a gauge/Higgs boson. The search strategy for such scenarios, therefore, is expected to be significantly different. In this case, one would always expect to find one or more hard leptons/jets/photons in the final state originating from the χ̃_1^0 decay, irrespective of whether the SUSY mass spectrum is compressed or not. Hence detecting events characterizing such a signal is expected to be much easier, with the preferred channel being the photon mode. Given that the hard photon(s) can easily be tagged for these events in a relatively compressed spectrum of SUSY particles with the NLSP, one need not rely on the radiated jets for signal identification, thereby improving the cut efficiency significantly. If one considers a fixed gravitino mass, the photon(s) originating from the χ̃_1^0 decay will be harder as m_{χ̃_1^0} becomes heavier. Hence these hard-photon-associated signals can be very effective to probe a heavy SUSY spectrum with a light gravitino, as there would rarely be any SM events with such hard photons in the final state.
While the light gravitino scenario yields large missing transverse energy (E/T) as well as hard photon(s) and jet(s), the question remains as to whether its presence obliterates the information on whether the MSSM part of the spectrum is compressed or not. In this work, we have demonstrated how such information can be extracted. Our study in this direction contains the following new observations: • A set of kinematic observables is identified, involving the hardness of the photon(s), the transverse momenta (p_T) of the leading jets and also the E/T, which clearly brings out the distinction between a compressed and an uncompressed spectrum with similar signal rates. We have studied different benchmarks with varied degrees of compression in the spectrum in this context.
• The characteristic rates of the n-γ (where n ≥ 1) final state in a compressed spectrum scenario have been obtained and the underlying physics has been discussed.
• The circumstances under which, for example, a gluino in a compressed MSSM spectrum prefers to decay into a gluon and a gravitino rather than into jets and a neutralino have been identified. In this context, we have also found some remarkable effects of an eV-scale gravitino, though such a particle cannot explain the cold dark matter (DM) content of the universe.
The experimental collaborations have considered light gravitino scenarios and derived bounds on the coloured sparticles [41-48]. The ATLAS collaboration recently published their analysis of a SUSY scenario with a light G with the 13 TeV data accumulated at an integrated luminosity of 13.3 fb −1 [48]. In this analysis, χ 0 1 is considered to be a bino-higgsino mixed state decaying into γ G and/or Z G, resulting in the final state "n 1 γ + n 2 jets + E/T" where n 1 ≥ 1 and n 2 > 2. The 13 TeV data put a stringent constraint on the sparticle masses, excluding m g up to 1950 GeV for a lightest neutralino mass close to 1800 GeV [46-48], which is a significant improvement on the bounds obtained after the 8 TeV run with 20.3 fb −1 integrated luminosity [42,44]. We note that, in order to derive the limits from the collider data, the experimental collaboration considers signal events coming from gluino pair production only, while assuming the rest of the coloured sparticles, viz. the squarks, to be too heavy to contribute to the signal. The robustness of the signal, however, does not reveal whether such a heavy SUSY spectrum (leaving aside the gravitino) is closely spaced in mass or has a widely split mass spectrum, nor whether it is just a single sparticle state that contributes to the signal or otherwise. We intend to show through this work that such a signal would also be able to distinguish such alternative possibilities quite efficiently.
In an earlier work, assuming a similar compression in the sparticle spectrum [18], we had shown that in order to get a truly compressed pMSSM spectrum consistent with a 125 GeV Higgs boson and the flavour and dark matter (DM) constraints, one has to have the χ 0 1 mass at or above 2 TeV with the entire coloured sector lying slightly above. Such a spectrum is now of particular interest given the present experimental bounds obtained in the G LSP scenario. In this work, we aim to extend our previous study by adding to the spectrum a G LSP with mass, at most, in the eV-keV range. The rest of the pMSSM spectrum lies above the TeV range so as to be consistent with the experimental bounds. In contrast to earlier studies of the gravitino LSP, we compare with the prospects of uncompressed spectra having relatively larger mass gaps between the coloured sparticles and χ 0 1 , but with event rates similar to those of the compressed spectra. Since the kinematics of the decay products in the two cases are expected to be significantly different, we present some kinematical variables which clearly distinguish a compressed spectrum from an uncompressed one, in spite of comparable signal rates in both cases.
The paper is organised in the following way. In section 2 we discuss the phenomenological aspects of a SUSY spectrum with a gravitino LSP and then move on to study the variation of the branching ratios of the squarks, gluino and the lightest neutralino into gravitino-associated and other relevant decay modes. In section 3 we present some sample benchmark points, representative of our region of interest, consisting of both compressed and uncompressed spectra that are consistent with the existing constraints. Subsequently, in section 4 we proceed to our collider analysis with these benchmark points and present the details of our simulation and the obtained results. Finally, in section 5 we summarise our results and conclude.
2 Compressed spectrum with a gravitino LSP

The NLSP decaying into a gravitino and jets/leptons/photons gives rise to very distinct signals at the LHC. Both the ATLAS and CMS collaborations have studied these signal regions for a hint of GMSB-like scenarios [41-48]. Note that a pure GMSB-like scenario is now under tension after the discovery of the 125 GeV Higgs boson [49-51]. It is very difficult to fit a light Higgs boson within this minimal framework, mostly because of the small mixing in the scalar sector. As a consequence, the stop masses need to be pushed to several TeV in order to obtain the correct Higgs mass, thus rendering such scenarios uninteresting in the context of the LHC. However, some variations of the pure GMSB scenario are capable of solving the Higgs mass issue and can still give visible signals within the LHC energy range [52,53]. Since we are only interested in the phenomenology of these models here, a detailed discussion of their theoretical aspects is beyond the scope of this paper.
Although the lightest neutralino ( χ 0 1 ) is the more popular DM candidate in SUSY theories, the gravitino ( G) as the LSP has its own distinct phenomenology. The G is directly related to the effect of SUSY breaking via gauge mediation, and all its couplings are inversely proportional to the Planck mass (∼ 10 18 GeV) and thus considerably suppressed. The hierarchy of the sparticle masses depends on the SUSY breaking mechanism and can result in the G acquiring a mass that is heavier than, comparable to or lighter than those of the other superpartners. Thus, if it happens to be the LSP of the theory, the G can also be a good DM candidate [54-59], making such scenarios of considerable interest in the context of the LHC. In addition, having the G as the DM candidate also relaxes the DM constraints on the rest of the SUSY spectrum by a great deal, allowing the other sparticles to be very heavy while remaining consistent with a light G DM. However, a very light G is mostly considered to be warm DM. Present cosmological observations require a light gravitino to have a mass of at least close to a few keV [60,61] if it has to explain the cold DM relic density. However, the kinematic characteristics of events in which the NLSP decays into a gravitino are mostly independent of whether the gravitino is in the keV range or even lower in mass. Some special situations where the difference is of some consequence are discussed in section 4.3. Of course, the presence of a gravitino much lighter than a keV will require the presence of some additional cold DM candidate.
Note that with the G as the LSP, the decay branching ratios (BR) of the sparticles can be significantly modified, since they can now decay directly into the G instead of decaying into the χ 0 1 , which may significantly alter their collider signals. The decay width (Γ) of a sparticle, scalar ( f ) or gaugino ( V ), decaying into its respective SM counterpart, a chiral fermion (f) or a gauge boson (V), together with a G is given by [62]

$$\Gamma(\tilde f \to f\,\tilde G) \;=\; \frac{1}{48\pi}\,\frac{m_{\tilde f}^{5}}{M_{Pl}^{2}\,m_{\tilde G}^{2}}\left(1-\frac{m_{f}^{2}}{m_{\tilde f}^{2}}\right)^{4},$$

with an analogous expression for Γ( V → V G), where M Pl is the Planck scale. Thus it is evident that this decay mode starts to dominate once the sparticles become very heavy and the G becomes light.

Figure 1. Gluino branching ratios in the ∆m g χ 0 1 - m G plane.
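To get a feel for the numbers entering this expression, the following minimal sketch evaluates the width and the corresponding decay length for a heavy (TeV-scale) sparticle. The reduced Planck mass value used here is an assumption, and the light SM-partner mass correction factor is dropped; the 2500 GeV mass is chosen only because it is the scale used for the spectra discussed later in the text.

```python
import math

M_PLANCK = 2.4e18      # GeV, reduced Planck mass (assumed value)
HBARC = 1.973e-16      # GeV * m, converts a width into a decay length

def width_to_gravitino(m_sparticle, m_gravitino):
    """Gamma(sparticle -> SM partner + gravitino) in GeV, neglecting the
    SM partner mass, following the scaling quoted in the text."""
    return m_sparticle**5 / (48.0 * math.pi * M_PLANCK**2 * m_gravitino**2)

m_sp = 2500.0                      # GeV, e.g. a heavy bino-like NLSP
for m_grav, label in [(1.0e-6, "1 keV"), (1.0e-3, "1 MeV")]:
    gamma = width_to_gravitino(m_sp, m_grav)
    ctau = HBARC / gamma           # decay length in metres
    print(f"m_G = {label}: Gamma ~ {gamma:.2e} GeV, c*tau ~ {ctau:.2e} m")

# For a keV gravitino the decay is prompt (micron-scale decay length), while
# for an MeV gravitino c*tau reaches the metre scale, consistent with the
# long-lived neutralino behaviour mentioned later in the text.
```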
Relevant branching ratios
In this section, we discuss the variation of the branching ratios (BR) of various sparticles into the LSP gravitino. Since in this analysis we aim to study the production of the coloured sparticles and their subsequent decays into the G via χ 0 1 , the decay modes of g , q and χ 0 1 are of our primary interest. While considering the decay modes, we adopt the simplifying assumption that the decaying coloured sparticle is the next-to-next-to-lightest supersymmetric particle (NNLSP), with χ 0 1 as the NLSP and G as the LSP. The BR computation and spectrum generation were done using SPheno [63-65] for a phenomenological MSSM (pMSSM)-like scenario with one additional parameter, i.e., the gravitino mass (m G ).
Variation of BR( g → g G)
In figure 1 we show the variation of the two relevant gluino decay channels, g → g G and g → qq χ 0 1 (where all the squarks are heavier), as a function of ∆m g χ 0 1 = m g − m χ 0 1 and m G . The gluino mass has been fixed at m g = 2500 GeV while m χ 0 1 has been varied such that ∆m g χ 0 1 varies within 10-1500 GeV. Note that the χ 0 1 is considered to be dominantly bino-like. In the absence of its two-body decay mode into squark-quark pairs, the gluino can only decay via g → g G or g → qq χ 0 1 . The other two-body decay mode, g → g χ 0 1 , being loop suppressed, remains mostly subdominant compared to these two decay modes. Hence, only the two relevant channels are shown in the figure. Note that BR( g → qq χ 0 1 ) includes the sum of all the off-shell contributions obtained from the first two generation squarks, which in this case lie about 100 GeV above m g . As the gravitino mass gets heavier, BR( g → g G) decreases, since the corresponding partial width is proportional to the inverse square of m G . Similarly, as m χ 0 1 keeps increasing, BR( g → qq χ 0 1 ) goes on decreasing. Note that the BR of the 3-body decay mode can decrease further with an increase in the corresponding squark masses. However, even for a keV G, BR( g → g G) can remain significantly large provided there is sufficient compression in the mass gap (∆m g χ 0 1 ∼ 10 GeV), as seen in figure 1.

Next we look into the relevant decay modes of the first two generation squarks when they are the NNLSPs. In this case, we assume that the gluino is heavier than the squarks, so that the dominant two-body decay modes available to the squarks are q L/R → q G and q L/R → q χ 0 1 . Unlike the previous case, here the gravitino decay branching ratio has competition from another two-body decay mode. Although the decay into G does not depend on the L or R type of the squarks, BR( q L/R → q χ 0 1 ) is expected to be different depending on the composition of the χ 0 1 . For simplicity, we choose the χ 0 1 to be purely bino-like as before. The squark masses are fixed at m q = 2500 GeV and the NLSP mass, m χ 0 1 , is varied as before such that ∆m q χ 0 1 = m q − m χ 0 1 varies in a wide range, 10-1500 GeV. The branching probabilities are shown in figure 2, where the plots on the left (right) show the decay branching ratios of u L/R ( d L/R ). As the coupling of q L with the SM quark and the bino component of χ 0 1 is proportional to √2 g tanθ W (e q − I 3q ), while that of q R is
proportional to √2 g tanθ W e q , where g, e q and I 3q represent the SU(2) gauge coupling, the electric charge of the SM quark and its isospin respectively [62], we find a noticeable variation in the decay probabilities of q L and q R for the same choice of mass spectrum. This implies that the right-handed squarks couple more strongly with the χ 0 1 compared to the left-handed ones. As a result, although the partial decay widths of the squarks decaying into gravitino and quarks are identical for squarks of similar mass, the corresponding BRs vary slightly depending on their handedness. This feature is evident in figure 2. The coupling strength of u R with χ 0 1 is larger by a factor of four compared to that of u L . The same coupling corresponding to d R is larger by a factor of two compared to that of d L . Hence the difference in the BR distributions is more manifest for the up-type squarks. The magnitudes of the coupling strengths corresponding to u L and d L are exactly the same and hence we have obtained similar distributions for those.
The BR distributions indicate that as we go on compressing the SUSY spectrum, the gravitino decay mode becomes more and more relevant, but only if its mass is around or below the eV range. We therefore conclude that for a keV G, the decay mode g → g G may be of importance, but only for the cases where the gluino mass lies very close to the NLSP neutralino mass. For the first two generation squarks and a keV G, the BR( q L/R → q G) is very small and the decay of the squarks into χ 0 1 dominates in the absence of a lighter gluino. As is evident, the gravitino decay mode can be of significance for LHC studies if m G ∼ eV. However, such a light G is strongly disfavoured by DM constraints, as mentioned before.
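A two-line arithmetic check of the coupling hierarchy quoted above (factors of four and two between right- and left-handed squarks) can be made with the charge factors |e_q| and |e_q − I_3q|; the specific form of these factors is the reconstruction used in the previous paragraphs and is an assumption of this sketch.

```python
from fractions import Fraction

# Bino coupling factors: |e_q| for right-handed squarks, |e_q - I_3q| for left-handed.
quarks = {"up-type":   {"e_q": Fraction(2, 3),  "I_3q": Fraction(1, 2)},
          "down-type": {"e_q": Fraction(-1, 3), "I_3q": Fraction(-1, 2)}}

for name, q in quarks.items():
    right = abs(q["e_q"])
    left = abs(q["e_q"] - q["I_3q"])
    print(f"{name}: |coupling(qR)| / |coupling(qL)| = {right / left}")

# Prints 4 for up-type and 2 for down-type squarks, matching the BR pattern
# described for figure 2 in the text.
```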
Variation of BR( χ 0 1 → γ/Z/h G)
The last two subsections point out the situations where the NLSP can be bypassed in the decay of strongly interacting superparticles. Such events tend to reduce the multiplicity of hard photons in SUSY-driven final states. In contrast, in the case where the SUSY cascades lead to a χ 0 1 NLSP, the χ 0 1 may further decay into a gravitino along with a Z, γ or the Higgs boson (h) depending upon its composition (in principle, χ 0 1 may decay into the other neutral Higgs states as well, which we assume to be heavier). The h-associated decay width is entirely dependent on the higgsino component of χ 0 1 , while Γ( χ 0 1 → γ G) depends entirely on the bino and wino components of χ 0 1 , whereas the Z-associated decay width has a partial dependence on all the components that make up the χ 0 1 . The functional dependence of the decay widths on the different composition strengths of χ 0 1 can be summarised as [62]

$$\Gamma(\tilde\chi_1^0 \to \gamma\,\tilde G) \;\propto\; |N_{11}\cos\theta_W + N_{12}\sin\theta_W|^{2},$$
$$\Gamma(\tilde\chi_1^0 \to Z\,\tilde G) \;\propto\; |N_{12}\cos\theta_W - N_{11}\sin\theta_W|^{2} + \tfrac{1}{2}\,|N_{13}\cos\beta - N_{14}\sin\beta|^{2},$$
$$\Gamma(\tilde\chi_1^0 \to h\,\tilde G) \;\propto\; |N_{13}\sin\alpha - N_{14}\cos\alpha|^{2},$$

where N ij are the elements of the neutralino mixing matrix, θ W is the Weinberg mixing angle, α is the neutral Higgs mixing angle and β corresponds to the ratio of the up- and down-type Higgs vacuum expectation values (VEVs). Note that the partial decay widths are also proportional to the fifth power of m χ 0 1 (and inversely proportional to m G 2 ), and hence if m G is too large, the total decay width
of χ 0 1 may become so small that it will not decay within the detector. Although the decay width is also dependent upon m χ 0 1 , one finds that for a 2500 GeV χ 0 1 and an MeV G the neutralino becomes long-lived. In figure 3 we show the variation of the three relevant BRs with the composition of the χ 0 1 . Here we have varied M 1 , M 2 and µ in the range [2 : 2.5] TeV with the condition µ > M 2 > M 1 , such that χ 0 1 is bino-like most of the time with different admixtures of wino and higgsino components. The other relevant mixing parameter, tanβ, is kept fixed at 10. The red, green and blue colours correspond to the three decay modes, while |N 11 | 2 indicates the bino fraction in the composition of χ 0 1 . Similarly, |N 12 | 2 and |N 13 | 2 + |N 14 | 2 represent the wino and higgsino components respectively. As can be clearly seen from the plots, obtaining 100% BR( χ 0 1 → γ G) is not possible even if the bino and/or wino components are close to 1, since the Z-mode is always present. However, the h-associated decay channel can be easily suppressed with a relatively larger µ. Motivated by this behaviour of the BRs, we choose to work with a signal consisting of at least one photon for our collider analysis. In our case, the χ 0 1 being dominantly bino-like, it decays mostly into a γ and a G. However, the Z G decay mode has a substantial BR (∼ 25%). The higgsino admixture in χ 0 1 being small, the h G decay mode is not considered in this work. However, it is worth noting that this particular channel can be the dominant mode for a higgsino-dominated NLSP and could also be an interesting mode of study, which we leave for future work.
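The composition fractions used in figure 3 can be computed directly from one row of the neutralino mixing matrix, as in the sketch below. The photon-mode weight shown is the standard photino projection |N11 cosθW + N12 sinθW|², used here as an assumption consistent with the proportionalities quoted above; the numerical value of the weak mixing angle is also an assumption.

```python
import math

SIN2_THETA_W = 0.231                       # approximate weak mixing angle
COS_W = math.sqrt(1.0 - SIN2_THETA_W)
SIN_W = math.sqrt(SIN2_THETA_W)

def neutralino_composition(N_row):
    """Return the bino, wino and higgsino fractions plus the photon-mode weight
    for one row (N11, N12, N13, N14) of the neutralino mixing matrix."""
    n11, n12, n13, n14 = N_row
    return {"bino": n11**2,
            "wino": n12**2,
            "higgsino": n13**2 + n14**2,
            "kappa_gamma": (n11 * COS_W + n12 * SIN_W) ** 2}

# Example: a dominantly bino-like NLSP with small wino/higgsino admixtures.
print(neutralino_composition((0.995, 0.06, 0.05, 0.05)))
```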
Benchmark points
For our analysis we choose a few benchmark points that represent the salient features of a compressed sparticle spectrum with varying compression strengths, while also categorically defining a few points that are more in line with the current SUSY searches with a G LSP by the CMS and ATLAS collaborations at the LHC. We ensure that our benchmark choices are consistent with all existing experimental constraints. We consider both compressed and uncompressed spectra, with a bino-like χ 0 1 as the NLSP and a keV gravitino as the LSP and warm dark matter candidate. For one of the benchmarks, we also show the effect of an eV mass gravitino LSP. The final benchmarks used in this study are shown in table 1.

Table 1. Low energy input parameters and the relevant sparticle masses (in GeV) for the compressed (C i , i = 1,...,6) and uncompressed (U1, U2) benchmarks. Here, ∆M i = m i − m χ 0 1 , where m i represents the mass of the heaviest coloured sparticle ( g / q k , k = 1,2) and m χ 0 1 the mass of the NLSP. For all benchmarks, the gravitino mass is m G = 1 keV.
The mass spectrum and decays of the sparticles are computed using SPheno-v3.3.6 [63-65]. We restrict the light CP-even Higgs mass to be in the range 122-128 GeV, i.e., within the 3σ range of the measured Higgs mass [1-4] and including a theoretical uncertainty of ∼ 4 GeV. Note that when the mass spectrum is compressed, all squark/gluino (which are nearly degenerate in mass) production channels contribute significantly to the signal. For all the benchmark points, the squarks and gluino decay directly or via cascades to the bino-like χ 0 1 NLSP. The χ 0 1 then dominantly decays to a photon and gravitino and, to a lesser extent, a Z boson and gravitino. This leads to either a mono-photon or a diphoton signal with jets and E/T, which defines our signal. To evade constraints from photon(s) searches at the LHC for simplified models [41-48], we require the sparticles in a compressed spectrum such as ours to be much heavier than the existing experimental limits. We have checked this for our spectra represented by the benchmark points, with the NLSP mass lying in the range 2.4−2.6 TeV and with varied masses and hierarchy of the coloured sparticles with respect to the NLSP. Amongst them, C6 is the most compressed spectrum, with a mass gap ∆M i ∼ 6 GeV between the coloured sparticles and the NLSP of mass 2462 GeV, followed by C2 and C5, where the mass gap is in the range of 40-50 GeV and the NLSP masses are 2428 and 2526 GeV respectively. We have also considered benchmarks C1, C3 and C4, for which the mass gap between the coloured sparticles and the NLSP is slightly higher, lying in the range of 100-200 GeV.
We also choose various possible mass hierarchical arrangements of the squarks and gluino to accommodate different cascades contributing to the signal. For example, C1 and C3 have different squark-gluino mass hierarchies in the strong sector. This leads to different jet distributions in the two cases. C2 and C5, on the other hand, are similar in the arrangement of the sparticles, but placed within 50 GeV of the NLSP, which represents a much more compressed scenario. Finally, we consider two uncompressed spectra, U1 and U2, with NLSP masses of ∼ 700 GeV and ∼ 1200 GeV and gluinos with masses ∼ 1.4 TeV and ∼ 1 TeV above the NLSP respectively. Since the photons arise from the NLSP decays, a heavier NLSP gives rise to a harder photon, having better chances of passing the analysis cuts. Thus the signal cross-sections differ on account of the difference in hardness of the photons and the resulting cut efficiencies in these two cases.
Benchmark points U1 and U2 are in fact replications of the simplified scenarios that are considered by the experimental collaborations to put limits on SUSY particle masses. For both these benchmark points, we have kept the squarks very heavy (∼ 4−5 TeV) so that gluino pair production is the only dominant contributing channel. However, we have only focussed on uncompressed spectra with event rates comparable to those of the compressed spectra. Since the large mass gap between the gluino and the NLSP allows for multiple hard jets to be produced, as opposed to the compressed case, we further exploit this feature to differentiate compressed from uncompressed scenarios with comparable event rates during the signal analysis.
Collider analysis
We look for multi-jet signals associated with very hard photon(s) and missing transverse energy (E/T) in the context of SUSY with a gravitino as the LSP. For such GMSB-like models with a keV gravitino, a very clear signature arises from the decay of the NLSP neutralino into a photon and a gravitino. If the NLSP-LSP mass difference is large enough, two hard photons would appear in the final state at the end of a SUSY cascade. The lightest neutralino, if bino-like, decays dominantly into a photon and gravitino (∼ 75%) while a small fraction decays into a Z boson and gravitino (∼ 25%). For cases with χ 0 1 having a significant higgsino component, we get comparable branching fractions for its decay into a Z boson or a Higgs boson, besides photons, along with G. For simplicity, we have considered a bino-like χ 0 1 as the NLSP. Note that the signal strength consisting of very hard photons in the final state can be affected by the composition of the NLSP, as we have discussed before. The χ 0 1 decay into Z G, however, still remains relevant for the bino-like χ 0 1 and, as a result, gives rise to a monophoton signal at the LHC along with the diphoton channel, associated with large missing transverse energy. The existing LHC constraints in such scenarios have already pushed the χ 0 1 - q - g mass bounds above 1.5 TeV, which automatically results in a large χ 0 1 - G mass gap. This gives rise to very high p T photons in the final states, which are very easy to detect and also highly effective in suppressing the SM background events.
In this work, we consider six benchmark points for compressed spectra (C1 - C6) such that the entire coloured sector (apart from t 2 and b 2 ) lies within 200 GeV of the χ 0 1 (m χ 0 1 ∼ 2.4 - 2.6 TeV). We then estimate signal rates of final state events with at least one or more hard photons arising from all possible squark-gluino pair production modes. We also study a couple of uncompressed spectra (U1, U2) such that both the compressed and uncompressed spectra produce similar event rates for our signal. In these spectra, the NLSP mass is around 700 and 1200 GeV respectively and the gluino is the lightest coloured sparticle, having a large (∼ 1-1.4 TeV) mass gap with the NLSP. The squarks are chosen to be much heavier (∼ 4-5 TeV) and are essentially decoupled from the rest of the spectrum. The large mass gap between the NLSP and the coloured sector ensures multiple hard jets from their decay cascades besides the hard photons. Thus, with different mass gaps and squark-gluino hierarchies among the compressed and uncompressed spectra, the jet profiles are expected to be significantly different for the benchmark points. Following the existing ATLAS analysis [48], which provides the most stringent constraint on the SUSY spectrum with a light gravitino LSP, we determine the signal event rates for our choice of benchmark points. Since we have also chosen compressed and uncompressed spectra such that the final state event rates are equal or comparable after analysis, it is a priori difficult to determine which scenario such a signal reflects. Keeping this in mind, we propose a set of kinematic variables, besides the usual ones like E/T and M_Eff, which highlight the distinctive features of compression in a SUSY spectrum over an uncompressed one with G as the LSP, although both have comparable signal rates.
Simulation set up and analysis
We consider the pair production and associated production processes of all coloured sparticles at the √s = 13 TeV LHC. Parton level events are generated using Madgraph5 (v2.2.3) [66,67] for the following processes with up to two extra partons at the matrix element level: p p → q * q , q g , q q , q * q * , q * g , g g . We reject any intermediate resonances at the matrix element level, which may arise in the decay cascades of the sparticles from two or more different processes, to avoid double counting of Feynman diagrams between the processes. The parton level events are then showered using Pythia (v6) [68]. To correctly model the hard ISR jets and reduce double counting of jets coming from the showers as well as from the matrix element partons, MLM matching [69,70] of the shower jets and the matrix element jets has been performed using the shower-k T algorithm with p T ordered showers, choosing a matching scale (QCUT) of 120 GeV [71]. The default dynamic factorisation and renormalisation scales [72] have been used in Madgraph, whereas the PDF chosen is CTEQ6L [73]. After the showering, hadronisation and fragmentation steps performed by Pythia, the subsequent detector simulation of the hadron level events is carried out by the fast simulator Delphes-v3.3.3 [74-76]. The jets are reconstructed using Fastjet [77] with a minimum p T of 20 GeV in a cone of ∆R = 0.4 using the anti-k t algorithm [78]. The charged leptons (e, µ) are reconstructed in a cone of ∆R = 0.2 with the maximum amount of energy deposit allowed in the cone limited to 10% of the p T of the lepton. Photons are reconstructed in a cone of ∆R = 0.4, with the maximum energy deposit in the cone as per the ATLAS selection criteria [48].
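For concreteness, the following minimal sketch writes out a MadGraph5-style command file for one of the coloured-sparticle production channels listed above. It is an illustration only: the model name (MSSM_SLHA2), the particle labels and the output directory are assumptions, and run settings such as the matching scale are configured in the run and shower cards rather than here.

```python
# Sketch: write a MadGraph5_aMC@NLO command file for gluino pair production
# with up to two extra partons (for MLM matching).
# Assumptions: model name "MSSM_SLHA2" and its particle labels (go, ul, ...).

mg5_commands = """
import model MSSM_SLHA2
define sq = ul ur dl dr cl cr sl sr
define sq~ = ul~ ur~ dl~ dr~ cl~ cr~ sl~ sr~
# gluino pair + up to two extra partons
generate    p p > go go     @0
add process p p > go go j   @1
add process p p > go go j j @2
# squark-gluino and squark-(anti)squark channels would be added analogously,
# e.g. "add process p p > sq go @0", "add process p p > sq sq~ @0", etc.
output susy_compressed_photon
"""

with open("proc_card_mg5.dat", "w") as f:
    f.write(mg5_commands)

print("Wrote proc_card_mg5.dat; run it with: ./bin/mg5_aMC proc_card_mg5.dat")
```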
For background estimation, we focus on the most dominant SM backgrounds for the photon(s) + jets + E/T signal at the 13 TeV LHC, namely: γ + ≤ 4 jets, γγ + ≤ 3 jets, W γ + ≤ 3 jets, Zγ + ≤ 3 jets and ttγ + jets. The sort of extremely hard p T photons that we expect in our signal events are unlikely to be present in SM processes in abundance, and such hard photons will arise mostly from the tails of the p γ T distributions. Hence, in order to obtain a statistically exhaustive event sample, we choose a hard p γ T > 200 GeV cut on the leading photon as a preselection for the parton level events while generating the background events. For MLM matching of the jets, the matching scale was chosen in the range 30-50 GeV, as applicable for electroweak SM processes.
Some other SM processes, such as QCD, tt+jets, W +jets, Z+jets, in spite of having no direct sources of hard photons, may also contribute to the background owing to their large production cross-sections coupled with mistagging of jets or leptons leading to fake photons. However, the cumulative effect of hard p γ T as well as E / T and M Ef f requirement renders these contributions negligible.
Primary event selection criteria. We identify the charged leptons (e, µ), photons and jets as per the following selection criteria (A0) for signal and background events alike:

• Leptons (ℓ = e, µ) are selected with p T > 25 GeV, |η e | < 2.37 and |η µ | < 2.70, excluding the transitional pseudorapidity window 1.37 < |η| < 1.52 between the ECAL barrel and the end cap of the calorimeter.
• All reconstructed jets have a large azimuthal separation with / E T , given by ∆φ( jet, / E T ) > 0.4 to reduce fake contributions to missing transverse energy arising from hadronic energy mismeasurements.
• The jets are separated from other jets by ∆R jj > 0.4.

With these choices of final state selection criteria we now proceed to select the events for our analysis.
Signal region: ≥ 1 γ + > 2 jets + E/T. We look into final states with at least one photon, multiple jets and large E/T. Amongst the existing analyses for the same final state carried out by the experimental collaborations, the ATLAS analysis imposes the more stringent constraint on the new physics parameter space, and hence we have implemented the same set of cuts, listed below, for our analysis (a schematic implementation of the full selection is sketched after the list):

• A1: the final state events comprise at least one photon, and the leading photon (γ 1 ) must have p γ 1 T > 400 GeV.
• A2: there should be no charged leptons in the final state (N ℓ = 0) but at least two hard jets (N j > 2).
• A3: the leading and sub-leading jets must be well separated from E / T , such that ∆φ(j, E / T ) > 0.4.
• A5: as the light gravitinos would carry away a large missing transverse momentum, we demand that E/T > 400 GeV.
• A6: we further demand a large effective mass, M_Eff = H_T + G_T + E/T, where H_T = Σ_j p_T(j) is the scalar sum of the p_T of all jets and G_T = Σ_j p_T(γ_j) is the scalar sum of the p_T of all photons in the event.
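A minimal sketch of how these selection steps might be coded is given below. The event format (dictionaries of photon, jet and lepton summaries) and the effective-mass threshold MEFF_CUT are illustrative assumptions; the analysis's actual numerical M_Eff requirement is not reproduced here, and cut A4 is not listed in the text above.

```python
import math

MEFF_CUT = 2000.0  # GeV; placeholder threshold, an assumption for illustration

def delta_phi(phi1, phi2):
    """Wrap the azimuthal separation into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    return 2.0 * math.pi - dphi if dphi > math.pi else dphi

def passes_selection(event):
    """Apply cuts A1-A3, A5 and A6 to one event.

    `event` is assumed to look like:
    {"photons": [{"pt":..., "phi":...}, ...],   # sorted by pt, descending
     "jets":    [{"pt":..., "phi":...}, ...],   # sorted by pt, descending
     "leptons": [...],
     "met": {"pt":..., "phi":...}}
    """
    photons, jets, leptons = event["photons"], event["jets"], event["leptons"]
    met = event["met"]

    # A1: at least one photon, leading photon pT > 400 GeV
    if not photons or photons[0]["pt"] <= 400.0:
        return False
    # A2: no charged leptons, more than two jets
    if leptons or len(jets) <= 2:
        return False
    # A3: leading and sub-leading jets well separated from missing ET
    if any(delta_phi(j["phi"], met["phi"]) <= 0.4 for j in jets[:2]):
        return False
    # A5: large missing transverse energy
    if met["pt"] <= 400.0:
        return False
    # A6: large effective mass M_Eff = HT + GT + MET
    ht = sum(j["pt"] for j in jets)
    gt = sum(g["pt"] for g in photons)
    return ht + gt + met["pt"] > MEFF_CUT
```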
Table 3. Required luminosity (L) to obtain 3σ and 5σ statistical significance (S) of the signal at the 13 TeV run of the LHC corresponding to the benchmark points.
The cut efficiency is lowest for C6, since it is the most compressed spectrum among all. Naturally, one would expect the jet multiplicity to be smaller in this case compared to the others. As a result, the requirement N j > 2 reduces the corresponding signal cross-section by a significant amount, whereas for the uncompressed spectra, U1 and U2, this cut has no bearing. The hard photon(s) in the signal events and the presence of a direct source of E/T ensure that the E/T and M_Eff cuts are easily satisfied by the selected events.
For the corresponding background events, we use the observed number of background events at ATLAS, which is 1, for the same final state studied at an integrated luminosity of 13.3 fb −1 at 13 TeV [48]. The statistical signal significance is computed in terms of s and b, the numbers of signal and background events remaining after implementing all the cuts. In table 3, we have shown the required integrated luminosity to obtain a 3σ and 5σ statistical significance for our signal corresponding to all the benchmark points. The required luminosity for 3σ and 5σ statistical significance varies depending on the relative compression and heaviness of the spectra. As is evident, C2 has the best discovery prospects and is likely to be probed very soon. C6, on the other hand, despite having a squark-gluino spectrum and a production cross-section very similar to those of C2, requires a much larger luminosity (∼ 112 fb −1 ) to be probed. This is because the high degree of compression in the spectrum reduces the cut efficiency significantly due to the jet multiplicity requirement. The required integrated luminosity for C1 and C5 is very similar, although C5 has a relatively lighter coloured sector and thus a larger production cross-section compared to C1, as can be seen from table 2. However, the photon and jet selection criteria reduce the C5 cross-section, making it comparable to that of C1. The situation is different for U1 which, despite having the lightest gluino, requires the largest luminosity (∼ 326 fb −1 ) among all the benchmark points in order to be probed. The reason is two-fold. Firstly, the production cross-section in this case (and also for U2) comprises just gluino pair production, since the squarks are far too heavy to contribute. Secondly, the χ 0 1 being ∼ 700 GeV, the photons arising from χ 0 1 decay are relatively on the softer side and hence the photon selection criteria further reduce the signal cross-section. A similar squark-gluino spectrum in the presence of a heavier χ 0 1 (U2) is therefore likely to be probed with a much smaller luminosity (∼ 139 fb −1 ) than U1. Thus it is evident from tables 2 and 3 that, given the present experimental constraints, a compressed spectrum, unless it is so highly compressed that the cut efficiency is reduced significantly, can improve the squark-gluino mass limits by a significant amount. For example, C2 can be probed with slightly more luminosity than 13.3 fb −1 but with a coloured spectrum that lies in the vicinity of 2.5 TeV. This clearly suggests that a compressed spectrum becomes much more quickly disfavoured than an uncompressed spectrum with a gravitino LSP, contrary to the case where a compressed SUSY spectrum appears as a saviour of low mass SUSY with a neutralino LSP. This is because of the hard photons, which themselves act as a clear criterion to distinguish the signal over the SM background.
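As an arithmetic illustration of how such luminosity requirements can be estimated, the sketch below assumes the commonly used significance definition S = s/√(s+b) (an assumption, since the exact definition is not reproduced in the text) and scales the single observed ATLAS background event at 13.3 fb⁻¹; the post-cut signal rates used are placeholders, not the paper's cut-flow numbers.

```python
def required_luminosity(sig_per_fb, bkg_per_fb, target_sigma):
    """Luminosity (fb^-1) needed so that S = s/sqrt(s+b) reaches target_sigma.

    With s = sig_per_fb * L and b = bkg_per_fb * L,
    S = sqrt(L) * sig_per_fb / sqrt(sig_per_fb + bkg_per_fb),
    which can be inverted in closed form.
    """
    return target_sigma**2 * (sig_per_fb + bkg_per_fb) / sig_per_fb**2

# One background event observed in 13.3 fb^-1 (taken from the text above).
bkg_rate = 1.0 / 13.3  # events per fb^-1

# Placeholder post-cut signal rates (events per fb^-1) for two hypothetical benchmarks.
for name, sig_rate in [("benchmark_A", 0.40), ("benchmark_B", 0.15)]:
    for sigma in (3.0, 5.0):
        lumi = required_luminosity(sig_rate, bkg_rate, sigma)
        print(f"{name}: {sigma:.0f} sigma needs ~{lumi:.0f} fb^-1")
```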
Distinction of compressed and uncompressed spectra
Given the inclusive hard photon + E / T signals, supposedly due to a light gravitino, can one ascertain whether the MSSM part of the spectrum is compressed or uncompressed? With this question in view, it is worthwhile to compare signals of both types with various degree of compression in presence of a light (∼ keV) gravitino as the LSP. We show that the kind of compressed spectra we have used enhances the existing exclusion limit on the coloured sparticles. We consider different squark-gluino mass hierarchy represented by our choice of some sample benchmark points presented in table 1. The G being almost massless in comparison to the χ 0 1 in consideration, the photons generated from the χ 0 1 decay into G are always expected to be very hard for both the compressed and uncompressed scenarios. This feature can be used to enhance the significance of the signal irrespective of the associated jets in the event. We provide a framework where one can use the properties of these jets in a novel way to distinguish between the two different scenarios in consideration even if they produce a similar event rate at the LHC. For illustration, let us consider the benchmark points, C5, C4 and U2 all of which result in nearly identical event rates for our signal and thus it is difficult to identify whether it is a signature of a compressed or an uncompressed spectra. It would be nice to have some kinematic variables which could be used to distinguish among the different kind of spectra. Subsequently, we have proposed few such variables which show distinctive features in their distributions depending on the relative hardness and multiplicity of the final state photon(s) and jets.
An uncompressed spectrum, such as U2 is characterized by a large mass gap between the strong sector sparticles and the NLSP ( χ 0 1 ). This ensures a large number of high p T jets from the cascades as compared to C5 and C4. The difference in jet multiplicity in the two cases is clearly visible in figure 4 where we have presented both the jet and photon multiplicity distributions for some sample compressed and uncompressed spectra. The hard photons in the event are originated from the χ 0 1 decay and since for all our benchmark points the χ 0 1 is sufficiently heavy, the photon multiplicity peaks at a similar region for both the compressed and uncompressed spectra. However, the jets in the case of U2 are generated from the three body decay of the gluino into a pair of quarks and χ 0 1 . As evident from figure 1, for the choices of the sparticle masses of U2, the other decay mode is highly suppressed. Hence one would naturally expect to obtain a large number of jets in the final state as shown in figure 4. C5 having a high degree of compression (∆M i = 48 GeV) in the parameter space results in least number of jets in the final state. C4, on the other hand, has a more relaxed compression (∆M i = 198 GeV) that gives rise to slightly harder cascade jets passing through the jet selection criteria resulting in a harder distribution than C5.
The relative difference in the compression factor (∆M i ) among the three benchmark points is also visible in the jet p T distributions shown in figure 5. As expected, the leading (figure 5(a)) and subleading (figure 5(b)) jet p T distributions predominantly show a harder peak for U2 as compared to C4 and C5. However, hard jets may also arise from the χ 0 1 decaying to a Z boson and gravitino (BR ∼ 25%), as the Z decays dominantly into two jets. The Z boson is expected to be highly boosted and thus one can easily obtain additional hard jets from its decay. These jets populate a small fraction of the total number of events, and thus for a compressed spectrum one of these jets can turn out to be the hardest jet in the event. This feature can be observed from the subdominant peak at ∼ 1000 GeV in the leading jet p T distribution in figure 5. Figures 5(c) and (d) show the leading and subleading photon p T distributions respectively for C4, C5 and U2. The χ 0 1 mass in C4 and C5 being ∼ 2.5 TeV, the photons produced from their decay are much harder than the leading jets in the spectra, as opposed to the uncompressed spectrum (U2), for which the peak in the photon p T distribution is significantly shifted to lower values. Thus, while the total hadronic energy, H T (figure 6(a)), peaks at a higher value for the uncompressed case owing to a large number of hard jets, G T (figure 6(b)), which is the scalar sum of all photon p T , peaks at a lower value for the uncompressed case than for the compressed cases.
Among other kinematic variables, one can also look into the E / T and M Ef f distributions to distinguish the compressed and uncompressed scenarios as shown in figure 6(c) and (d) respectively. Since the photons are almost always harder for the compressed spectra compared to the uncompressed cases, we have observed that the E / T , required to balance the total visible transverse energy, is much harder for the former. Effective mass, M Ef f defined as the sum of H T , G T and E / T , also shows some small difference in the peak value for both cases. In U2, G T and E / T are softer than that for C4, C5 but H T is much harder resulting in the M Ef f peaking at similar values for the both cases. However, since the photons are considerably harder than the jets in all cases, the effect being more pronounced for the compressed over the uncompressed case, the M Ef f distribution falls faster for U2 than C4 and C5 as can be seen from figure 6(b) and 6(d) respectively. Taking cue from the kinematic distributions in figure 5 and figure 6, we now proceed to formulate two observables
, r 1 = p T (j 1 )/p T (γ 1 ) and r 2 = p T (j 2 )/p T (γ 1 ), which capture the essence of the jet and photon transverse momentum behaviour in a way that distinctly distinguishes between the compressed and uncompressed scenarios. As seen in figure 7, for the compressed case, r 1 (figure 7(a)) peaks at rather smaller values (∼ 0.1) than for the uncompressed case (∼ 1.0), since the leading jet p T is almost always softer than the leading photon for compressed spectra, whereas for the uncompressed case there are hard jets with p T values comparable to the leading photon p T . However, for the compressed spectra, the collimated hard jet from the highly boosted Z boson produced in the decay of the χ 0 1 leads to a subdominant peak at ∼ 0.7 in r 1 . The observable r 2 (figure 7(b)), constructed with the sub-leading jet and leading photon p T , peaks at lower values (∼ 0.1) for C4 and C5, since the sub-leading jet, coming from the cascades or ISR in the compressed case, is expected to be much softer than the photon. For U2, r 2 peaks at ∼ 0.5 since the sub-leading jet, also coming from the cascade, is softer than the hardest photon. Thus we find that the above ratios enhance the two major distinctive features between a compressed and an uncompressed scenario, namely the high/low p T of the photon/jet for the compressed case as compared to the low/high p T of the photon/jet for the uncompressed case.
We further note that the jet multiplicity is another variable which shows a difference in the distributions for the compressed spectra C4 and C5 when compared to that of the uncompressed spectrum U2 (figure 4(a)). Although the choice of our signal region involves N j > 2, the compressed spectra, C4 and C5, still retain a sufficient fraction of events with a higher number of jets. In contrast, the uncompressed spectrum U2 has a larger number of hard jets in all events, and thereby remains mostly unaffected by this selection criterion. We therefore define modified ratios (scaled by the jet multiplicity) as r′ 1 = N j r 1 and r′ 2 = N j r 2 .
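The following sketch shows how these ratios could be computed per event; the event structure is the same illustrative one used in the selection sketch above and is an assumption.

```python
def photon_jet_ratios(event):
    """Compute r1, r2 and the jet-multiplicity-scaled r1', r2' for one event.

    r1 = pT(leading jet) / pT(leading photon)
    r2 = pT(sub-leading jet) / pT(leading photon)
    r1' = N_j * r1,  r2' = N_j * r2
    Assumes `event["jets"]` and `event["photons"]` are pT-sorted lists of
    dicts with a "pt" key, and that the event has >= 2 jets and >= 1 photon.
    """
    jets, photons = event["jets"], event["photons"]
    n_jets = len(jets)
    pt_gamma1 = photons[0]["pt"]
    r1 = jets[0]["pt"] / pt_gamma1
    r2 = jets[1]["pt"] / pt_gamma1
    return r1, r2, n_jets * r1, n_jets * r2

# Example: a compressed-spectrum-like event with a very hard photon and soft jets.
event = {"photons": [{"pt": 1200.0}],
         "jets": [{"pt": 150.0}, {"pt": 90.0}, {"pt": 40.0}]}
print(photon_jet_ratios(event))  # r1 and r2 well below 1, as described in the text
```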
Notably, the new variables r′ 1 and r′ 2 are able to significantly enhance the differences between a compressed and an uncompressed spectrum. Since the scale factor, N j , is always greater for the uncompressed spectrum U2 than for the compressed spectra C4 and C5, we find the peak values of r′ 1 (∼ 4.0) and r′ 2 (∼ 2.5) of the uncompressed spectrum are shifted further away from those of the compressed ones (r′ 1 ∼ 0.2-0.5 and r′ 2 ∼ 0.1-0.3). Quite importantly, the visible overlap seen in r 1 for the sub-dominant peak is now completely disentangled in the new variable r′ 1 , as seen in figure 7(c). This is significant in the sense that if the event samples were to retain a much harder criterion for the leading jet, then the events for U2, C4 and C5 would all feature the overlap observed for the sub-dominant peak, while the difference at low r 1 might be washed away for that particular choice of event selection.
Besides enhancing the differences between the compressed and uncompressed spectra, the differential distributions in r i and r′ i can also be used to highlight the differences amongst the compressed spectra themselves, depending on the level of compression in mass. For example, C4 has a larger mass separation ∆M i than C5 and shows a peak in the jet multiplicity at N j = 3, while for C5 the peak value of the differential cross section is at N j = 2. Thus a larger fraction of events survives after analysis for C4 than for C5. Again, since C5 is relatively more compressed than C4, the jets from C4 are considerably harder than those from C5. However, the NLSP mass for C4 is larger than for C5, since to probe lower values of compression we require a heavier NLSP to meet the current LHC bounds. This results in the photons being harder for C4 than for C5. The combined effect of the two seems to be more prominent for both r 1 and r′ 1 , where the leading jet is either the ISR jet or a cascade jet in the case of C4. For r 2 this effect seems neutralised, owing to the sub-leading jets in both cases being much softer than the leading photon p T . However, the scale factor N j shifts the peak value of r′ 2 , thus efficiently distinguishing between the two compressed spectra of varying degrees of compression.
Figure 7. Normalized distributions of the different kinematic variables r 1 , r 2 , r′ 1 and r′ 2 to distinguish compressed and uncompressed scenarios for some of the benchmark points representing various compressed (C4), more compressed (C5) and uncompressed (U2) spectra after implementing the selection and analysis cuts A0-A6.
eV gravitino
As pointed out earlier, the kinematic characteristics of events in which the NLSP decays into a gravitino are independent of whether the G is in the keV or eV range. Therefore, for an NLSP decaying into a G and a SM particle, the G is practically massless. However, as discussed in section 2.1, a lighter gravitino has a stronger coupling to the sparticles. Thus the decay of the sparticles into a SM particle and a gravitino dominates over their decay to the NLSP. For a gravitino of mass 1 eV, we find that the gluino/squark almost always decays directly to the gravitino rather than to the NLSP. The branching fractions also depend on the mass gap between the coloured sparticles and the NLSP. These features are highlighted in figures 1 and 2, where both compressed and uncompressed mass gaps are shown.
Therefore, an eV G does affect the overall event rates of the signal in the photon channel when compared to the keV G case. An immediate consequence, which has gone unnoticed for such a light eV G, would be a new competing signal which can become more relevant than the more popular photonic channel. This can be easily understood by taking a look at the resulting BR( g → g G) for some of our benchmark points in the presence of an eV gravitino. As indicated by figure 1, this branching ratio is expected to go up if the spectrum is more compressed. For the same benchmark points as in table 1, now in the presence of an eV G, we have observed that BR( g → g G) ∼ 13%, 41% and 99% for U1 (∆m g χ 0 1 = 1403 GeV), U2 (∆m g χ 0 1 = 911 GeV) and C1 (∆m g χ 0 1 = 78 GeV) respectively. As a consequence, C1 with an eV gravitino is unlikely to yield a good event rate in the photonic channel, since the gluino avoids decaying into the NLSP altogether. However, a small fraction of the squarks may still decay into the NLSP, ∼ 4% and ∼ 24% for left and right squarks respectively. Hence, one would still expect a photon signal for such a scenario, but a much weaker one, as presented in table 4.
As expected, the photon signal weakens considerably when compared to the one with a keV gravitino and requires an integrated luminosity of ∼ 1000 fb −1 for observation at the LHC. However, a much stronger signal would be obtained in the "n-jet + E/T" (n ≥ 2) channel, as the final state would have at least two very hard jets (p T 's exceeding a TeV) and an equally hard E/T in the eV-gravitino case. The conventional multi-jet searches [31] rely upon the usual E/T, M_Eff, E/T/√H_T and ∆φ(j, E/T) cuts and, in some cases, razor variables [32] to reduce the SM backgrounds. We have checked that with these cuts, a 3σ significance can be achieved for C1 in the "n-jet + E/T" (n ≥ 2) final state at an integrated luminosity of ∼ 1000 fb −1 . However, in the presence of an eV gravitino, one can demand harder jet p T requirements and harder E/T and M_Eff, along with the other conventional cuts, to increase the signal significance further. We have checked that one can easily bring down the required luminosity to ∼ 728 fb −1 for a 3σ significance, which is a big improvement over the results obtained for the photon-associated final state. Thus the multi-jet channel is the more favourable one for exploring an eV gravitino in the presence of a ∼ TeV compressed coloured sector. However, as mentioned earlier, such a light gravitino may not be a viable dark matter candidate and would necessarily require the presence of other candidates to satisfy the constraints.
Summary and conclusion
In this work, we have explored the compressed SUSY scenario in the presence of a light gravitino LSP within the framework of phenomenological MSSM. The question asked is: since the light gravitino produced in the (neutralino) NLSP decays generates as much E / T for compressed spectra as for uncompressed ones, are the former discernible?
The existing collider studies for such scenarios mostly account for the uncompressed parameter regions, and in some cases the NNLSP-NLSP compressed regions. However, compression in the entire coloured sector of the sparticle spectrum can result in significantly different exclusion limits on the masses of the squarks, gluino and the lightest neutralino. The presence of a light gravitino in the spectrum affects the branching ratios of the coloured sparticles into χ 0 1 . We have studied the interplay of these relevant branching ratios for varying G mass and different amounts of compression in the rest of the sparticle spectrum for a bino-like χ 0 1 . Dictated by the DM constraints, we have mostly concentrated on the keV G scenario and have performed a detailed collider simulation and cut-based analysis for ≥ 1 photon + > 2 jets + E/T final states arising from the squark-gluino pair production channels in the context of the LHC. In our case, the squarks and the gluinos dominantly decay into the χ 0 1 , which further decays into a G along with a γ or a Z, resulting in the above mentioned final state. The hard p T photon requirement can be used along with other kinematic cuts to suppress the SM background very effectively. We have followed the existing ATLAS analysis for the same final state with the help of some benchmark points. We have shown that with the existing experimental data, the exclusion limits on the coloured sparticle masses can increase by ∼ 500 GeV for a highly compressed sparticle spectrum. It is understood that similar signal event rates can be obtained from both uncompressed and compressed spectra depending on the choices of masses of the squarks, gluino and the lightest neutralino. However, the difference in compression will be reflected in the kinematic distributions of the final state jets and photons. We have exploited this fact to construct some variables which can be used to good effect to differentiate between the two scenarios. We have also studied the collider prospects of SUSY spectra in the presence of sub-keV gravitinos. It turns out that in such cases, the G-associated decay modes of the heavy (∼ 2.5 TeV) coloured sparticles start to become relevant in the presence of high compression between the NNLSP and NLSP. Then the most suitable final state to look for such spectra would be multi-jets + E/T. However, the existing DM constraints strongly disfavour the presence of such a light gravitino in the spectrum.

[31] ATLAS collaboration, Further searches for squarks and gluinos in final states with jets and missing transverse momentum at √s = 13 TeV with the ATLAS detector, ATLAS-CONF-2016-078 (2016).
[32] CMS Collaboration, An inclusive search for new phenomena in final states with one or more jets and missing transverse momentum at 13 TeV with the AlphaT variable, CMS-SUS-16-016 (2016).
Anchor and Krackow‐“8” Suture for the Fixation of Distal Pole Fractures of the Patella: Comparison to Kirschner Wire
Objective: The aim of the study was to evaluate the clinical outcomes, functional outcomes, and postoperative complications of anchor and Krackow-"8" suture fixation (AS) and K-wire fixation in patients with distal pole patellar fractures.

Methods: Twenty-eight patients with distal pole patellar fractures treated between January 2011 and December 2014 were reviewed retrospectively. Anchor and Krackow-"8" suture fixation (AS group) was applied in 10 patients, and 18 patients underwent K-wire fixation (K-wire group). The average age of the patients was 46.000 ± 19.476 years in the AS group and 47.556 ± 15.704 years in the K-wire group, with comparable demographic characteristics. All patients underwent regular follow-up; the operative data and the postoperative functional and clinical outcomes were recorded. Complications were recorded by clinical and radiographic assessment. The Bostman patellar fracture functional score was used to evaluate knee function after patellar fracture.

Results: A total of 28 eligible patients were included in this study. The mean follow-up was similar for the AS and K-wire groups (P > 0.05). The incision length of the AS group was significantly smaller than that of the K-wire group (P < 0.05). At the final follow-up of knee range of motion, the average extension lag was similar in the two groups (P > 0.05); flexion and the flexion-extension arc were slightly better in the AS group than in the K-wire group. The Bostman patellar fracture functional scores of the AS group were better than those of the K-wire group at 3 and 6 months after operation. Four kinds of postoperative complications occurred in the two groups: one patient (10%) in the AS group and two patients (11.1%) in the K-wire group had infections; there were two (11.1%) cases of nonunion in the K-wire group; and three patients (16.7%) required re-operation, one due to infection and two due to early implant failure. In the AS group, all distal pole fractures of the patella showed bony union, without loosening, migration, pull-out or nonunion of the fractures 6 months after operation.

Conclusions: Anchor and Krackow-"8" suture fixation is an easily executed surgical procedure that can significantly reduce incision length and achieve better surgical outcomes than the traditional procedure with regard to postoperative complications and knee function, without requiring a second operation. This technique is an effective operative method for the treatment of inferior patellar pole fractures.
Introduction
In most fracture classification systems, distal pole avulsion fractures of the patella fall into a separate category [1-3].
Such fractures account for 9.3% to 22.4% of all patellar fractures and are treated surgically if displaced or associated with complete disruption of the extensor mechanism 4 . Patients with distal pole fractures of the patella have a disrupted extensor mechanism, which results in considerable functional disability. These fractures, particularly multifragmentary fractures, are difficult to treat, and reconstruction with preservation of the inferior patellar pole with a normal height of the patella is sometimes impossible to achieve with standard techniques. The ideal method should comply with three crucial demands: it should aid in reduction of the fracture, provide stable fixation and enable early rehabilitation 5 .
Various methods have been introduced to fix distal patellar pole fractures, including tension band wiring, separate vertical wiring, the use of a basket plate, wiring through cannulated screws, and partial patellectomy, and each of these techniques has characteristic advantages and disadvantages. Currently, clinical and biomechanical studies have provided definitive evidence that resection disrupts the extensor mechanism by decreasing the lever arm at the knee joint 6,7 . Operative fixation of displaced patella fractures has now become the standard of care for these injuries 8 . The modified anterior tension band technique using Kirschner wire (K-wire) is one of the most common methods used for the fixation of inferior patellar pole fractures. Although the Kwire and tension band technique remains popular, patients frequently complain of discomfort secondary to prominent hardware, leading to high rates of removal of hardware (ROH). Thus, revision surgery with K-wire removal becomes necessary in up to 65% of cases 9 .
A novel technique that employs the application of two or three anchors to the patella and reattachment of the distal fragments together with the patellar tendon has recently been described 10 . Anchor suturing (AS) had previously been reported as a technique for patellar tendon rupture fixation and was compared to transosseous tunnel suturing [11][12][13] . All these studies agree that the strength of anchors, however, is inferior to that of intraosseous sutures but is apparently sufficient for fracture fixation and early mobilization 14 .
This study aims to quantify clinical and postoperative functional outcomes and to identify postoperative complications in a cohort of patients who were treated with nonabsorbable braided suture fixation for distal pole fractures of the patella. These patients were then compared to a control group of patients who were treated for distal pole fractures of the patella using Kirschner wire. We hypothesized that there would be no observable difference in outcomes between the two groups.
Patients and Methods
The data used in this study were collected as part of a larger project that included all patients who were admitted to our department for the treatment of patellar fractures between January 2011 and December 2014. Our hospital's electronic medical record system was searched for patients who were admitted and operatively treated for patellar fractures. We searched the ICD-9 codes for patellar fractures (822.0) and for patellar fracture-related surgical procedure codes (77.86, 78.56, 79.36.04 and sub-codes). A list of 251 patients was generated, of whom 223 underwent surgery.
Patients' demographic and operation information was obtained by manually reviewing their files. Since the intervals between patients' follow-up visits were not constant, we were able to describe the range of knee motion at the last outpatient visit to our medical institute. Radiographs were reviewed by a single author using the picture archiving and communication system. Pre-operative radiographs were reviewed to determine fracture type, and postoperative radiographs were reviewed for the type of fixation used and signs of fracture union. Fractures were described according to the OTA 15 system and the more commonly used descriptive classification. The latter classification includes seven fracture patterns: nondisplaced, transverse, distal or proximal pole, multi-fragmented nondisplaced, multi-fragmented displaced, vertical and osteochondral 16 .
Inclusion and Exclusion Criteria
According to the different methods of fracture fixation, all patients were divided into the K-wire group and the AS group. Informed consent was obtained from all patients.
Inclusion criteria: (i) diagnosis of a distal pole avulsion fracture of the patella on CT or X-rays; (ii) treatment with either anchor and Krackow-"8" suture fixation (AS) or K-wire fixation; (iii) complete follow-up data; and (iv) retrospective study.
Exclusion criteria: (i) patients with other types of patellar fractures; (ii) open fractures; (iii) a history of previous surgery in the affected knee; and (iv) patients who were unable to understand the items on the questionnaire.
Operation Techniques
Anesthesia and Position

All patients followed a standardized anesthesia regimen. Under general anesthesia, the endotracheal tube was secured on the side opposite the surgical field so as not to interfere with the operative area. After satisfactory anesthesia, the patient was placed in the supine position, and the surgical site was prepared, draped and covered with sterile adhesive tape.
Approach and Exposure
An anterior midline incision of about 6 cm was made over the patella, extending towards the tibial tuberosity. Along the incision line, the skin, subcutaneous tissue and deep fascia were incised in turn, and the flap was elevated proximally to expose the patella.
Fixation or Placement of Prosthesis
Blood clots were cleared from the fracture ends and the joint capsule, the articular surface was checked for congruity, and reduction forceps were used to hold the reduction. Two suture anchors (TwinFix Ti 5.0 mm, Smith & Nephew, London, UK) with Krackow-"8" sutures were inserted into the proximal patellar fracture fragment (Fig. 1). The proximal fragment was divided into three equal portions, with the anchors placed at the dividing points, and the distal fragment was divided into four equal portions, with the Ultrabraid suture passing through the corresponding points. The sutures were then tensioned to hold the fracture reduction. The Ultrabraid sutures were tied in a figure-of-eight ("8") configuration over the patellar surface, and the distal patellar fragment and the patellar tendon were secured with Krackow-technique sutures.
Postoperative Management
All patients were immobilized with a straight knee brace or cast, and postoperative functional exercise was directed by the surgeon. Appropriate active knee flexion and extension exercises were started 2 days postoperatively. From weeks 1 to 6 after surgery, a gradual transition from partial to full weight bearing was made under proper protection, and physical therapy was then continued to gradually restore full range of motion of the knee.
Incision Length
The incision length was recorded for a general assessment of surgical complexity and degree of surgical trauma.
Range of Motion of Knee Joint
The range of motion of the patient's knee was measured with a goniometer at the last follow-up, including extension, flexion, and the flexion-extension arc. Prolonged immobilization of the knee joint leads to a reduction in range of motion, which is related to the timing and effectiveness of postoperative rehabilitation.
Complications
Potential postoperative complications, as an indicator of surgical safety, include infection, nonunion, implant failure, and reoperation. All complications were recorded. The occurrence of complications affects the safety and feasibility of the operation.
Statistical Analysis
The data were analyzed using SPSS for Windows version 17.0 (IBM, Armonk, NY, USA). Means and standard deviations (SDs) were used to describe continuous variables, and categorical variables were presented as numbers (percentages). Univariate analyses were performed using analysis of variance for categorical data and Levene's test for continuous variables. The level of significance was set at P < 0.05.
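As an illustration of the kind of two-group comparison described above, the following is a minimal sketch in Python with SciPy. The numbers are hypothetical stand-ins, not the study's dataset, and the tests shown (Levene's test, one-way ANOVA, chi-square) are only one reasonable way to run such a comparison.

```python
# Illustrative sketch only: hypothetical numbers, not the study's dataset or code.
import numpy as np
from scipy import stats

# Hypothetical follow-up times (months) for the two groups (AS n=10, K-wire n=18).
as_followup = np.array([6, 7, 8, 6, 9, 12, 7, 8, 10, 6], dtype=float)
kw_followup = np.array([6, 8, 14, 9, 12, 7, 13, 6, 11, 8,
                        10, 15, 9, 7, 12, 6, 14, 11], dtype=float)

# Continuous variables described as mean +/- SD.
for name, x in [("AS", as_followup), ("K-wire", kw_followup)]:
    print(f"{name}: {x.mean():.3f} +/- {x.std(ddof=1):.3f} months")

# Levene's test for equality of variances and one-way ANOVA across the two groups.
print("Levene:", stats.levene(as_followup, kw_followup))
print("ANOVA :", stats.f_oneway(as_followup, kw_followup))

# Categorical outcome (e.g., reoperation yes/no) compared with a chi-square test.
table = np.array([[0, 10],    # AS group: reoperations / no reoperation (hypothetical)
                  [3, 15]])   # K-wire group
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```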
Operative Data
The demographic data of the 28 patients included in this study are presented in Table 1. All patients were followed up and responded to the questionnaire. Average follow-up was 7.900 ± 2.283 months (range, 6 - 12 months) in the AS group and 10.278 ± 4.212 months (range, 6 - 12 months; P > 0.05) in the K-wire group. Mean time from injury to surgery was 9.300 ± 3.234 h (range, 6 - 13 h) in the AS group and 9.111 ± 4.549 h (range, 4 - 14 h) in the K-wire group (P > 0.05).
Range of Motion of Knee Joint
The patients' functional scores are reported in Table 1.
Ranges of knee motion as examined at the latest follow-up in the clinic were similar between groups, with an average extension lag of 3.000 ± 3.496 (range, 0 - 10) degrees in the AS group compared with 4.167 ± 3.930 (range, 0 - 10) degrees in the K-wire group; the difference did not reach significance (P = 0.442). Functional scores were higher in the AS group than in the K-wire group (range, 21 - 27 points in the K-wire group; P < 0.05) (Fig. 2).
Complications
There were four postoperative complications related to surgical technique (Table 2). Three patients (10.7%) developed wound infection: one in the AS group and two in the K-wire group. In the single AS case, the wound was red and swollen and healing was delayed because of an open fracture; the wound healed after one debridement, and the other nine AS cases healed well. Of the two K-wire patients with wound infection (11.1%), one required only one debridement, and the other had a concomitant deep infection and early implant failure requiring serial debridements and re-operation. Three patients (16.7%) in the K-wire group required re-operation: one due to infection and two due to early implant failure. In the AS group, all distal pole fractures of the patella showed bony union at 6 months after operation, without loosening, migration, pull-out, or nonunion. The complication rate was lower in the AS group than in the K-wire group (P < 0.05). A typical case is shown in Figs 3-5.
Discussion
Distal pole fractures of the patella are rare and are usually the result of direct trauma, with or without quadriceps muscle contraction. Although there are numerous reports on the subject, the choice of treatment for fractures of the inferior pole of the patella remains controversial. The complexity of the fracture usually precludes reconstruction; hence, some surgeons perform partial patellectomy as a last resort. However, partial patellectomy may result in an abnormal height of the patella and a significant loss of range of motion 17 .
Previous Techniques
Patients who suffer from distal pole fractures of the patella have limited options for fracture fixation. Prior studies on patella fracture fixation have reported reoperation rates of between 20% and 50% following the use of Kirschner wire [18][19][20][21] . A recent study with a 6.5-year mean follow-up by LeBrun reported a rate of 56% in their cohort 22 . All hardware removals occurred in the K-wire group, accounting for 93% of all reoperations in the cohort; the remaining reoperations were indicated due to treatment failure 23 . Furthermore, the study showed that patients requiring reoperation had a significantly restricted range of motion in their affected knee, which remained significant after the exclusion of patients receiving pole resection 23 .
Novel Tension Band
However, anchor suture fixation is clinically acceptable and has been biomechanically verified 24,25 . In our study, patients undergoing anchor suture fixation experienced fewer hardware-related postoperative complications and achieved higher Bostman scores at one-year follow-up. The reoperation rate was four times higher for patients receiving K-wire fixation as compared with anchor suture fixation. This result is consistent with other reports, which demonstrated an increased reoperation rate for patients receiving metal implants.Although a previous study investigated the use of #5 Ethibond anchor suture fixation over K-wire fixation 18 , it did not employ quantitative methods to assess patient outcome. Our study made use of accepted patient outcome metrics to quantify any differences between the two groups, as well as a basic chart review. The aforementioned studies using rigorous outcome metrics, such as LeBrun et al. and Tian et al. did not include a cohort that was treated with sutures. Our study reports the outcomes of applying the AS technique for distal pole patellar fractures on the largest cohort reported thus far, for a longer follow-up time (14 months) and is the first to compare the AS technique to K-wire operative techniques employed in this setting. Our results showed that AS had comparable functional outcomes, range of motion, complications and re-operation rates compared to K-wire (Tables 1 and 2). Mean incision length was significantly shorter for AS compared to K-wire (5.35 cm and 11.722 cm, respectively; P < 0.05). These outcomes should be interpreted in light of the shorter follow-up time for the AS group compared to the K-wire group; these follow-up differences are important when comparing a novel technique to a traditional one.
The complication rate was high for both the AS and K-wire approaches. Surgical site infection was the most common complication (10.7%) and required re-operation in all three cases. The currently reported infection rates are 0 -5% for patellar fractures and 0 $ 11% for open fractures 9,26 . Anand et al. reported no complications for the AS technique. We cannot account for the relatively high rate of infection in the present study. On the other hand, we had no case of symptomatic implant requiring re-operation to remove it, as is commonly reported for other novel distal pole fixation techniques, such as the basket plate 7 . The rates of reoperation are generally lower for K-wire and AS compared to traditional techniques, most likely because the low profile of the construct does not irritate the patellar tendon 22 . Implant failure occurred in three patients in the K-wire group shortly after surgery. In these cases, the anchor was pulled out of the main patellar fragment, and revision surgery and partial patellectomy were required. This complication has not been reported for anchor suturing in patellar fractures to date.
Limitations
This study has several limitations, which are related to its being a retrospective evaluation of a novel technique; such studies are often prone to author bias. Another weak point is the difference in follow-up time between the intervention and control groups. We also used subjective outcome measure tools and could not provide long-term physical examination findings of actual knee strength, ranges of knee motion or patellofemoral signs. The primary reason for the latter drawback was to increase compliance by not requesting the patient to return for evaluation. We note that the percentage of patients available for similar follow-up purposes in other studies was as low as 50%. Additionally, we feel that the relatively high patient satisfaction reflected by the subjective questionnaire is well correlated with good range of knee motion at the latest follow up.
Conclusion
The application of AS for patellar inferior pole fracture fixation is a recently introduced, novel surgical technique. We report the results of this technique in the largest cohort to date and compare the findings to those obtained using the traditional surgical treatment involving K-wire. We conclude that AS is an easily executed surgical procedure that can significantly reduce incision length and achieve better surgical outcomes than the traditional technique with regard to postoperative complications and knee function, without the need for a second operation. The potential disadvantages of this technique are the relatively high rate of postoperative infection (10%) and potential early hardware failure in the form of anchor pull-out from the main patellar fragment. AS also entails higher costs than traditional techniques, an issue that is beyond the scope of this investigation. Further clinical trials that are free of the drawbacks of the present investigation are warranted to guide therapeutic decisions and confirm our belief that AS is a viable option for distal pole patellar fracture fixation.
Is Carbon Capture and Storage (CCS) Really So Expensive? An Analysis of Cascading Costs and CO2 Emissions Reduction of Industrial CCS Implementation on the Construction of a Bridge
Carbon capture and storage (CCS) is an essential technology to mitigate global CO2 emissions from power and industry sectors. Despite the increasing recognition of its importance to achieve the net-zero target, current CCS deployment is far behind targeted ambitions. A key reason is that CCS is often perceived as too expensive. The costs of CCS have however traditionally been looked at from the industrial plant perspective, which does not necessarily reflect the end user’s one. This paper addresses the incomplete view by investigating the impact of implementing CCS in industrial facilities on the overall costs and CO2 emissions of end-user products and services. As an example, we examine the extent to which an increase in costs of raw materials (cement and steel) due to CCS impacts the costs of building a bridge. Results show that although CCS significantly increases cement and steel costs, the subsequent increment in the overall bridge construction cost remains marginal (∼1%). This 1% cost increase, however, enables a deep reduction in CO2 emissions (∼51%) associated with the bridge construction. Although more research is needed in this area, this work is the first step to a better understanding of the real cost and benefits of CCS.
The cradle-to-gate CO2 emissions associated with bridge construction (t CO2) are obtained from the amounts of cement (t cement) and steel (t steel) used together with, for each material, the upstream CO2 emissions related to raw-material extraction and transport to the production facility, the CO2 emissions of production (t CO2/t), and the transport emissions from the production plant to the bridge construction site (t CO2/t).

It is worth noting that HRC is converted to several products and forms of steel (e.g., wire, rod, and structural steel) through additional tasks that emit CO2 [1]. These conversion emissions were added to the CO2 emitted by the steel production plant, so that the CO2 emissions of the steel product (t CO2/t steel) are the sum of the emissions of the HRC-steel plant (t CO2/t steel) and the conversion emissions, scaled by the amount of HRC required per tonne of steel product (t HRC/t steel). It is assumed that one tonne of HRC is converted into one tonne of any steel product, i.e., this conversion factor equals 1.

The upstream emissions were aggregated by taking into account all the emissions related to raw-material extraction in the upstream supply chain together with the transport emissions related to those raw materials.
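To make the aggregation just described concrete, here is a minimal illustrative sketch in Python. All emission factors and material quantities below are hypothetical placeholders (the paper's actual values come from its supplementary tables); only the 0.30 t CO2/t HRC-to-steel conversion figure echoes the 300 kg CO2/t value quoted below.

```python
# Illustrative sketch with hypothetical emission factors (t CO2 per tonne of material);
# the paper's actual values come from its supplementary tables (S1-S5).

def steel_product_emissions(hrc_plant, conversion, hrc_per_tonne_product=1.0):
    """CO2 per tonne of finished steel product: HRC-plant emissions plus
    conversion emissions, scaled by the HRC needed per tonne of product."""
    return (hrc_plant + conversion) * hrc_per_tonne_product

def cradle_to_gate_co2(m_cement, m_steel, cement, steel):
    """Sum upstream, production and transport emissions for both materials (t CO2)."""
    total = 0.0
    for mass, f in ((m_cement, cement), (m_steel, steel)):
        total += mass * (f["upstream"] + f["production"] + f["transport"])
    return total

# Hypothetical per-tonne factors; 0.30 t CO2/t corresponds to the 300 kg CO2/t
# HRC-to-steel conversion figure.
cement_factors = {"upstream": 0.05, "production": 0.80, "transport": 0.01}
steel_factors = {"upstream": 0.30,
                 "production": steel_product_emissions(1.80, 0.30),
                 "transport": 0.02}

# Hypothetical bill of materials for the bridge (tonnes).
print(cradle_to_gate_co2(m_cement=10_000, m_steel=2_000,
                         cement=cement_factors, steel=steel_factors), "t CO2")
```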
[Table fragment: CO2 avoided with CCS, 47% [7]; conversion of HRC into steel, 300 kg CO2/t steel both without and with CCS [1].]

The concrete cost was calculated directly from its cost components: the cement cost and the delivery cost, the latter obtained from the transport cost model. The fixed and plant costs were obtained from their remaining percentage share of the concrete cost. When estimating the cement cost, the transport costs from the cement plant to the concrete facility were included in the cost of cement. Except for the cement cost, all other cost components remain unchanged without and with CCS implementation.
The steel cost was obtained by summing the production cost of steel and the delivery cost:

steel cost (€/t steel) = relative cost factor × production cost of HRC (€/t HRC) + delivery cost (€/t steel),

where the production cost of HRC is expressed in €/t HRC, the relative cost factor is the ratio of the steel product price (€/t steel) to the HRC price (€/t HRC), and the delivery cost is the steel delivery cost from the steel plant to the construction site (€/t steel).

The HRC produced in the steel mill is converted into several steel products (e.g., wire, rod, and structural steel) through additional tasks. A relative cost factor is used to represent the differences in the cost of each steel product, based on production costs without CCS [1]. Factors of 1 and 1.23 were used for converting HRC into wire/rod forms of steel and structural steel, respectively [1]. The production cost of HRC with and without CCS was obtained from the literature [7].

The cost data for cement and steel plants without and with CCS implementation were retrieved from the literature [3,7] and are provided in Tables S7 and S8. The total production costs were obtained from the annualised CAPEX and operating costs as follows:

Production cost (€/t product) = annualised CAPEX (€/t product) + fixed OPEX (€/t product) + variable OPEX (€/t product).

The annualised CAPEX and fixed OPEX costs from previous studies were directly updated to €2018 using the Chemical Engineering Plant Cost Index (CEPCI). The variable operating costs include raw material costs, energy costs, and other miscellaneous costs. In the cement plant, the variable operating costs are incurred due to the consumption of raw meal, coal, electricity, ammonia, and other miscellaneous expenses. The variable operating costs in the steel plant are due to the consumption of iron ore, coal, natural gas, scrap and ferroalloys, fluxes, and other consumables. While some of these cost components were directly updated to €2018 based on CEPCI, other components such as iron ore, coal, natural gas, and electricity typically show a wide range of price fluctuations over the years. To provide a more accurate estimate, the cost contributions from coal and electricity consumption in the cement plant were calculated based on annual coal and electricity consumption and their prices in 2018 (provided in Table S9). Similarly, iron ore, coal, and natural gas costs in the steel plant were estimated based on their annual consumption and unit prices in 2018. The annual consumption of raw materials is provided in Tables S1 and S2. For CCS scenarios, CO2 transport and storage costs (e.g., 10 €2018/tCO2) are also included in the variable operating costs.
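A small numerical sketch of the cascading-cost logic described above, again with hypothetical unit costs rather than the values of Tables S7-S9: the production cost is assembled from annualised CAPEX and OPEX, the steel product cost inherits the CCS increment through the relative cost factor, and the bridge construction cost changes only through its material share.

```python
# Illustrative sketch with hypothetical costs in EUR; not the values of Tables S7-S9.

def production_cost(annualised_capex, fixed_opex, variable_opex):
    # Production cost (EUR/t product) = annualised CAPEX + fixed OPEX + variable OPEX
    return annualised_capex + fixed_opex + variable_opex

def steel_cost(hrc_production_cost, relative_cost_factor, delivery_cost):
    # Steel product cost = relative cost factor x HRC production cost + delivery cost
    return relative_cost_factor * hrc_production_cost + delivery_cost

# Hypothetical HRC production costs without / with CCS (EUR/t HRC); the CCS case
# would also carry the CO2 transport and storage cost in its variable OPEX.
hrc_no_ccs = production_cost(90.0, 60.0, 350.0)
hrc_with_ccs = production_cost(120.0, 80.0, 400.0)

structural_no_ccs = steel_cost(hrc_no_ccs, 1.23, 30.0)
structural_with_ccs = steel_cost(hrc_with_ccs, 1.23, 30.0)

# Cascading to the bridge: only the material share of the total cost changes.
bridge_cost_no_ccs = 50e6              # hypothetical total construction cost (EUR)
steel_tonnes, cement_tonnes = 2_000, 10_000
cement_increment = 40.0                # hypothetical EUR/t cement added by CCS
delta = (steel_tonnes * (structural_with_ccs - structural_no_ccs)
         + cement_tonnes * cement_increment)
print(f"Bridge cost increase with CCS: {100 * delta / bridge_cost_no_ccs:.2f} %")
```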
S2.1. Cement and subsequent concrete production

The CO2 emissions and cost estimations presented in Tables S11 and S12 are expressed per tonne of cement and per m³ of concrete, respectively. Moreover, the calculations are based on 340 kg of cement being required to produce 1 m³ of concrete [6].

S2.2. Steel and subsequent steel products production

The CO2 emissions and cost estimation presented in Table S13 are expressed per tonne of HRC or steel.
Non-canonical Opioid Signaling Inhibits Itch Transmission in the Spinal Cord of Mice
SUMMARY Chronic itch or pruritus is a debilitating disorder that is refractory to conventional anti-histamine treatment. Kappa opioid receptor (KOR) agonists have been used to treat chronic itch, but the underlying mechanism remains elusive. Here, we find that KOR and gastrin-releasing peptide receptor (GRPR) overlap in the spinal cord, and KOR activation attenuated GRPR-mediated histamine-independent acute and chronic itch in mice. Notably, canonical KOR-mediated Gαi signaling is not required for desensitizing GRPR function. In vivo and in vitro studies suggest that KOR activation results in the translocation of Ca2+-independent protein kinase C (PKC)δ from the cytosol to the plasma membrane, which in turn phosphorylates and inhibits GRPR activity. A blockade of phospholipase C (PLC) in HEK293 cells prevented KOR-agonist-induced PKCδ translocation and GRPR phosphorylation, suggesting a role of PLC signaling in KOR-mediated GRPR desensitization. These data suggest that a KOR-PLC-PKCδ-GRPR signaling pathway in the spinal cord may underlie KOR-agonists-induced anti-pruritus therapies.
Correspondence chenz@wustl.edu
In Brief

Munanairi et al. show that kappa opioid receptor (KOR) agonists inhibit nonhistaminergic itch transmission by attenuating the function of the gastrin-releasing peptide receptor (GRPR), an itch receptor in the spinal cord. KOR activation causes the translocation of PKCδ from the cytosol to the plasma membrane, where it phosphorylates GRPR to dampen itch transmission.
INTRODUCTION
Chronic itch or pruritus may arise from dysfunction of skin, immune, nervous system, or internal organ metabolism, such as liver and kidney diseases (Ikoma et al., 2006;Paus et al., 2006). Despite recent progress in identifying signaling molecules as potential targets for anti-pruritus therapies (Bautista et al., 2014;Liu and Ji, 2013), much less is known about the central targets for itch (Barry et al., 2018;Bautista et al., 2014). The mu and kappa opioid receptor systems appear to have opposing roles in a wide range of physiological processes (Pan, 1998), including itch transmission (Ballantyne et al., 1988). Most opioids are pruritogens, and morphine-induced pruritus could be a serious unwanted effect of epidural analgesia (Ballantyne et al., 1988;Reich and Szepietowski, 2010). On the other hand, the inhibitory effect of kappa opioid receptor (KOR) agonists, e.g., butorphanol or nalfurafine (TRK-820), on a wide range of itch behaviors has made them attractive drug candidates for treating patients with uremic, cholestatic, and opioid-induced pruritus (Cowan et al., 2015;Kumagai et al., 2010;Lawhorn et al., 1991;Phan et al., 2012;Togashi et al., 2002;Wikströ m et al., 2005). KORagonist-based anti-pruritic therapies, however, may have unwanted side effects, such as insomnia, somnolence, and constipation (Land et al., 2008;Phan et al., 2012). Despite a potential for KOR agonists in anti-itch application, the underlying mechanisms remain poorly understood.
In this study, we investigated whether spinal KOR activation attenuates itch transmission by blocking GRPR signaling in mice. Using several complementary approaches, we have demonstrated that KOR activation inhibits GRPR signaling via a Ca 2+ -independent phospholipase C (PLC)-protein kinase C (PKC)d pathway. Our studies may help design spinal KOR-GRPR cross-signaling-based therapeutic strategies to alleviate chronic itch.
Spinal KOR Activation Inhibits Nonhistaminergic Itch
To determine the effect of spinal activation of KOR on itch, scratching behavior was quantified in C57BL/6J mice after intrathecal (i.t.) injection of U-50,488, a selective KOR agonist (Simonin et al., 1998). Consistent with a previous study (Inan and Cowan, 2004), U-50,488 significantly attenuated scratching behavior induced by chloroquine (CQ), an anti-malaria drug that causes generalized pruritus (Ajayi et al., 1989). By contrast, U-50,488 had no effect on histamine-induced scratching (Figure 1A). Consistently, U-50,488 markedly reduced i.t. GRP-induced scratching (GIS), whereas scratching behavior elicited by neuromedin B (NMB), a bombesin-related peptide involved in histaminergic itch (Wan et al., 2017; Zhao et al., 2014b), was not affected (Figure 1A). The attenuating effect of U-50,488 on GIS and CQ scratching was absent in Oprk1−/− mice (Figure S1A), indicating that U-50,488 reduced scratching in a KOR-specific manner. There was no difference in scratching behavior induced by GRP or CQ in male versus female mice (data not shown). Furthermore, we observed a significant decrease in CQ-induced scratching after i.t. injection of U-50,488 in female mice (Figure S1B), suggesting a lack of sexually dimorphic properties in itch transmission (Chakrabarti et al., 2010). Finally, we examined the effect of spinal KOR activation on GRPR-dependent chronic itch models. Intrathecal injection of U-50,488 significantly reduced spontaneous scratching in BRAF Nav1.8 mice, a genetic model for chronic itch (Figure 1B). U-50,488 also attenuated chronic itch induced by 2,4-dinitrofluorobenzene (DNFB), a model for allergic contact dermatitis (ACD), and in mice with dry skin itch induced by an acetone-ether-water (AEW) treatment (Figures 1C and 1D). These observations demonstrate that spinal KOR activation inhibits GRPR-dependent acute and chronic itch.
KOR Inhibits GRPR in a Cell-Autonomous Manner
The finding that KOR activation inhibited GIS prompted us to examine whether KOR inhibits GIS indirectly through inhibitory neural circuits or directly in GRPR neurons, which are primarily excitatory interneurons (Wang et al., 2013). To differentiate between these two possibilities, we first examined whether KOR and GRPR are co-expressed in the spinal cord using dual-labeled RNAscope in situ hybridization (ISH) (Wang et al., 2012). Oprk1 mRNA was detected in ~50% (104/205) of Grpr neurons in the superficial dorsal horn (Figures 1E and 1F).
The co-expression of KOR and GRPR raised the possibility that KOR activation may cross-inhibit GRPR in a cell-autonomous manner rather than through activation of inhibitory neural circuits. GRPR transduces itch via the PLCβ/IP3/Ca2+ signaling pathway (Liu et al., 2011; Zhao et al., 2014a). To examine this, the dorsal horn of the spinal cord was dissected and dissociated, and neurons were cultured for calcium imaging (Figure 2A; Video S1). To determine whether U-50,488 inhibits GRP-induced, GRPR-mediated intracellular Ca2+ mobilization, a two-step protocol was employed, whereby dissociated dorsal horn GRPR+ neurons can be identified by an application of GRP (20 nM) and then re-sensitized after a 30-min wash-out period (Figure 2C; Table S1; Zhao et al., 2014a). The ratio of the second GRP-induced response to the first response was used for quantitation, thereby avoiding inconsistencies that may result from GRPR+ neuronal heterogeneity. U-50,488 (up to 20 μM) alone did not induce Ca2+ responses in GRPR+ neurons (data not shown). However, incubation with U-50,488 (10 μM) attenuated Ca2+ responses of GRPR+ neurons to GRP (Figures 2B, 2D, and 2F; Video S2), and this inhibitory effect was reversed by norbinaltorphimine (norBNI), a selective KOR antagonist (Portoghese et al., 1987; Figures 2E and 2F). About 42% of GRPR+ neurons showed complete inhibition (52/124) by KOR activation, 26% showed partial inhibition (32/124), and 32% were resistant to U-50,488 application (40/124; Table S2). The finding that the percentage of non-responders was slightly lower than that of Grpr+/Oprk1− neurons, as observed by RNAscope ISH (Figures 1E and 1F), could be due to several factors, such as the younger age of the mice used for the calcium imaging study and/or different sensitivities associated with each approach. CQ itch is significantly reduced, but not abolished, in Grpr KO mice (Sun and Chen, 2007), suggesting the involvement of a GRPR-independent pathway. To test whether U-50,488 may inhibit CQ itch via a GRPR-independent pathway, we examined its effect on CQ itch using Grpr KO mice and found that i.t. U-50,488 did not further reduce CQ-induced scratching (Figure S2A). This result suggests that spinal KOR activation inhibits CQ itch predominantly via a GRPR cell-autonomous mechanism.
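As a rough illustration of the two-application quantification described above (the ratio of the second GRP-evoked response to the first), the following is a minimal Python sketch using synthetic fluorescence traces; it is not the authors' analysis pipeline, and all numbers are made up.

```python
# Illustrative sketch with synthetic fluorescence traces; not the study's analysis code.
import numpy as np

def peak_response(trace, baseline_frames=50):
    """Peak delta-F/F relative to the pre-stimulus baseline."""
    baseline = trace[:baseline_frames].mean()
    return (trace.max() - baseline) / baseline

rng = np.random.default_rng(0)
t = np.arange(600)

def synthetic_trace(amplitude, onset=100):
    """A decaying Ca2+ transient starting at 'onset', on a noisy baseline of 1.0."""
    response = amplitude * np.exp(-(t - onset) / 80.0) * (t >= onset)
    return 1.0 + response + 0.01 * rng.standard_normal(t.size)

first_grp = synthetic_trace(0.8)    # first GRP application
second_grp = synthetic_trace(0.3)   # second application, e.g. after U-50,488

ratio = peak_response(second_grp) / peak_response(first_grp)
print(f"second/first GRP response ratio: {ratio:.2f}")   # < 1 indicates inhibition
```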
Next, we investigated whether KOR activation inhibits GRPR signaling via the canonical opioid-mediated Gαi signaling pathway (Al-Hasani and Bruchas, 2011). Unexpectedly, pertussis toxin (PTX) (200 ng/mL), a Gαi inhibitor, did not block the inhibitory effect of U-50,488 (Figure 2G), suggesting that a Gαi-independent pathway is involved in spinal KOR-activation-mediated itch inhibition. Of the 23 GRPR neurons treated with PTX, GRP-induced calcium responses in 16/23 (70%) were completely inhibited by U-50,488, 2/23 (9%) showed partial inhibition, whereas 5/23 (21%) were resistant to U-50,488 treatment. As expected, PTX reversed U-50,488-mediated inhibition of cyclic AMP (cAMP) synthesis in HEK293 cells (Figure S3A). To test whether KOR may differentially couple to Gαs in GRPR-KOR cells, we measured cAMP accumulation and found that neither U-50,488, GRP, nor their co-application induced cAMP accumulation (Figure S3B), implying that it is unlikely that a Gαs-dependent pathway is involved in KOR-GRPR cross-signaling.
According to the canonical pathway, activation of KOR recruits G-protein-coupled receptor kinases (GRKs). Arrestin then binds to the phosphorylated KOR, resulting in acute desensitization (Bruchas and Chavkin, 2010). In contrast, U-50,488-mediated desensitization lasts for at least two days in mice (Figure S1C). Furthermore, U-50,488 attenuated GIS in Arrb2−/− mice (Figure S3C), consistent with previous studies (Bohn et al., 2000; Morgenweck et al., 2015). Although we cannot completely exclude the involvement of arrestin signaling, owing to possible genetic compensation in Arrb2−/− mice, the long-lasting effect of KOR-mediated inhibition of itch transmission supports the notion that KOR activation attenuates itch through a β-arrestin2-signaling-independent pathway.

[Figure 2 legend, panels E-G: (E) 10-min pretreatment with 10 μM norBNI blocked the U-50,488 inhibitory effect on GRP-induced Ca2+ responses. (F) Quantified data comparing peak intracellular Ca2+ concentration evoked by GRP after pretreatment with U-50,488 (red) or norBNI + U-50,488 (purple; ***p < 0.001; one-way ANOVA followed by Tukey's multiple comparison test; n = 11-27). (G) PTX (200 ng/mL) had no effect on the U-50,488 inhibition of GRP-induced Ca2+ responses (**p < 0.01; NS, not significant; one-way ANOVA followed by Tukey's multiple comparison test; n = 26-49). Data are represented as mean ± SEM. See also Figures S2 and S3.]
Spinal KOR Activation Inhibits GRPR Function via a PKC-Dependent Mechanism
Previous in vitro studies show that GRPR is a substrate of PKC that phosphorylates and desensitizes GRPR (Ally et al., 2003). To explore the possibility that PKC activation inhibits itch, scratching behavior was examined in mice pre-injected with i.t. phorbol myristate acetate (PMA), a PKC activator (Way et al., 2000), which markedly attenuated CQ-induced scratching and GIS, mimicking the U-50,488 effect ( Figure 3A). Interestingly, bisindolymaleimide (BIM), a selective inhibitor for PKCa, b1, b2, g, d, and ε isoforms (Toullec et al., 1991), blocked the effect of U-50,488 on GIS ( Figure 3B). Furthermore, PMA completely blocked spontaneous scratching behavior of BRAF Nav1.8 mice, ACD, and dry skin chronic itch mouse models ( Figure 3C). These findings raised a possibility that KOR activation suppresses itch via PKC-mediated inhibition of GRPR function.
Activation of KOR Induces GRPR Phosphorylation via PKC
Whole-cell phosphorylation assays were performed to further elucidate the role of PKC in KOR-activation-induced inhibition of GRPR signaling. In HEK293 cells expressing FLAG-KOR and Myc-GRPR, GRPR phosphorylation increased 13-fold after a 2-min incubation in U-50,488 (10 mM; Figures 4A, 4B, and S7A). Consistent with behavior and calcium-imaging results, PKC inhibition by BIM (5 mM) blocked KOR-activation-induced GRPR phosphorylation ( Figures 4C and S7B). Rapid GRPR phosphorylation was also observed within 2 min after treatment with PMA (1 mM) and decreased after 15 min (Figures 4D and S7C), in accordance with previous findings (Ally et al., 2003). Phosphorylation assays showed that U-50,488-mediated KOR activation induces rapid and robust, GRP-independent phosphorylation of GRPR, which may cause desensitization of GRPR activity.
KOR Activation Attenuates Itch via PKCd
The PKC family consists of a variety of isoforms that can be classified into three sub-families: conventional (α, β1, β2, and γ; Ca2+ and diacylglycerol [DAG] dependent); novel (δ, ε, η, and θ; DAG dependent); and atypical (ζ and ι/λ; Nishizuka, 1995; Steinberg, 2008). Given that U-50,488 failed to induce Ca2+ responses in GRPR neurons, we postulated that Ca2+-independent PKC isoforms (PKCδ or ε) may be involved in mediating KOR-dependent PKC activation. To identify the PKC isoform involved, we performed spinal PKC-isoform-specific small interfering RNA (siRNA) knockdown studies (Liu et al., 2011). Remarkably, siRNA knockdown of Prkcd not only blocked U-50,488 inhibition of GIS (Figure 5A) but also enhanced CQ-induced itch, even in the presence of U-50,488 (Figure 5B). Treatment with control siRNA did not affect the U-50,488 inhibitory effect on GIS or CQ itch (Figures S4A and S4B). qRT-PCR of the lumbar spinal cord confirmed specific knockdown of Prkcd, but not Prkca (Figure 5C). Consistently, U-50,488 lost its effect on GRP-induced calcium responses of dorsal horn neurons isolated from Prkcd−/− mice (Figure S5B). We also performed siRNA knockdown of Prkca. However, U-50,488 attenuated CQ-induced itch after siRNA knockdown of Prkca, suggesting that PKCα does not mediate KOR-activation inhibition of itch (Figures S4C and S4D).
Next, we examined the role of PKCδ in KOR-activation-mediated itch inhibition using Prkcd−/− mice and their wild-type (WT) littermates (Leitges et al., 2001) and did not find differences in GIS between Prkcd−/− and WT littermates. As predicted, the inhibitory effect of U-50,488 on GIS was lost in Prkcd−/− mice relative to their WT littermates (Figure 5D). To examine whether PKCδ is co-expressed with GRPR in the superficial dorsal horn, we generated a Grpr iCre/Ai9 reporter mouse line by crossing Grpr iCre mice with a tdTomato Ai9 line. Double immunohistochemistry (IHC) studies were conducted, and PKCδ was detected in Grpr iCre-tdTomato+ neurons (arrows indicate overlap in expression; Figure 5E). Further, we labeled spinal GRPR neurons with enhanced yellow fluorescent protein (eYFP) by injection of AAV5-Ef1a-DIO-eYFP virus into the dorsal spinal cord of Grpr iCre mice. Consistently, we detected PKCδ in numerous Grpr iCre; AAV-DIO-eYFP neurons (Figures 5F and S5A).
KOR Activation Induces PKCd Translocation to the Plasma Membrane
To further evaluate the role of PKCd in KOR-activation-induced inhibition of GRPR signaling, we examined PKC translocation from the cytosol to the plasma membrane, a hallmark of PKC activation (Mochly-Rosen et al., 1990). Using GFP-tagged PKC, the dynamics of PKC translocation in response to different stimuli can be monitored in live cells and in real time (Oancea et al., 1998;Wang et al., 1999). HEK293 cells expressing KOR and GRPR were transfected with PKCd-EGFP or PKCa-EGFP. Confocal live-cell imaging was then performed to characterize spatiotemporal properties of PKCd-EGFP or PKCa-EGFP after application of U-50,488 or PMA. PKCd and PKCa were present in the cytosol without stimulation ( Figure 6A; 0 min). Application of U-50,488 (10 mM) prompted translocation of PKCd, but not PKCa, to the cell membrane. PKCd-EGFP translocation from the cytosol to the plasma membrane was apparent as early as 5 min and reached a maximum after 30 min of incubation in U-50,488 ( Figure 6A; Video S3). After U-50,488 treatment, translocation to the plasma membrane increased significantly (from 15% ± 4% to 62% ± 8%; Figure 6B). As expected, direct activation of PKC with PMA (100 nM) induced translocation of both PKCd-EGFP and PKCa-EGFP from the cytosol to the membrane ( Figures 6C and 6D; Video S4).
To evaluate whether the U-50,488-induced PKCδ translocation observed in HEK293 cells mimics events in vivo, the fraction of PKCδ-positive dorsal horn neurons was quantified 30 min after i.t. injection of U-50,488 in mice. We found that the fraction of dorsal horn neurons with plasma-membrane-bound PKCδ nearly doubled after U-50,488 injection (from 40% ± 1% to 73% ± 2%; Figures 6E-6G). This confirmed that KOR activation stimulates PKCδ activity, manifested by its translocation to the plasma membrane, where it subsequently phosphorylates and desensitizes GRPR.
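A minimal sketch of how a membrane-translocation fraction of the kind quoted above could be tabulated, using synthetic membrane and cytosol intensities and an arbitrary ratio threshold; this is illustrative only and not the study's image-analysis procedure.

```python
# Illustrative sketch: classify cells as "membrane-translocated" when the
# membrane/cytosol fluorescence ratio exceeds a threshold.  All numbers are
# synthetic; this is not the study's image-analysis procedure.
import numpy as np

rng = np.random.default_rng(1)
membrane = rng.normal(1.6, 0.4, size=100)   # hypothetical mean membrane intensities
cytosol = rng.normal(1.0, 0.2, size=100)    # hypothetical mean cytosolic intensities

ratio = membrane / cytosol
translocated = ratio > 1.5                  # arbitrary threshold for this sketch
print(f"fraction scored as membrane-bound: {translocated.mean():.0%}")
```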
KOR Activation Stimulates PKCd via PLC
To elucidate the mechanism by which KOR activation stimulates PKCd, we tested a myriad of inhibitors on U-50,488-induced PKCd translocation in HEK293 cells expressing KOR and GRPR. Pre-incubation of U73122 (10 mM), a PLC inhibitor, for 10 min blocked U-50,488-induced PKCd-EGFP translocation to the plasma membrane. In contrast, U-50,488 treatment increased the membrane translocation of PKCd-EGFP from 12% ± 4% to 64% ± 5% in the presence of U73343 (10 mM), an inactive analog of U73122 (Figures 7A and 7B). Furthermore, U-50,488 treatment increased the translocation of PKCd-EGFP from 11% ± 2% to 47% ± 5% and 15% ± 3% to 70% ± 9% after a prior pre-incubation in gallein (100 mM), a G bg inhibitor, and PTX (200 ng/mL), respectively ( Figures 7C and 7D), suggesting that PLC mediates PKCd activation by KOR in a G bg -and G ai -independent process. Whole-cell phosphorylation assays were also used to show that KOR activation induces GRPR phosphorylation via PLC. U73122, but not U73343, blocked GRPR phosphorylation ( Figures 7E, 7F, and S7D). Together, these results suggest that KOR activation stimulates PLC, resulting in the translocation of PKCd from the cytosol to the plasma membrane, where it phosphorylates GRPR ( Figure 7G).
To investigate whether this pathway is similarly engaged by other KOR agonists, we tested butorphanol, a mixed KOR agonist/MOR antagonist (Abeliovich et al., 1993), which has been used to treat various types of intractable pruritus in human studies (Dawn and Yosipovitch, 2006; Dunteman et al., 1996). We found that i.t. injection of butorphanol (2 nmol) significantly reduced CQ-induced scratching behaviors in WT, but not in Oprk1−/−, mice (Figure S2B), suggesting that butorphanol inhibits CQ itch via a KOR-dependent mechanism. Furthermore, i.t. injection of butorphanol (2 nmol) did not further reduce CQ-induced scratching in Grpr KO mice (Figure S2C). Consistently, butorphanol also lost its effect in Prkcd−/− mice (Figure S2D). These studies provide clinically relevant evidence that spinal KOR-agonist-mediated itch inhibition depends on KOR-GRPR cross-talk rather than on other mechanisms. In HEK293 cells expressing KOR and GRPR, butorphanol induced PKCδ-EGFP, but not PKCα-EGFP, translocation from the cytosol to the plasma membrane, mimicking the U-50,488 effect (Figure S6). Consistent with the U-50,488 and butorphanol results, dynorphin, an endogenous ligand for KOR (Chavkin et al., 1982), also attenuated CQ-induced itch (Figure S2E). However, mice lacking dynorphin (Pdyn−/−) exhibited normal acute and chronic itch (Figures S2F and S2G). These data demonstrate that spinal KOR activation by different agonists suppresses itch transmission via the PLC-PKCδ pathway and that endogenous dynorphin is not required for itch modulation under either normal physiological or chronic itch conditions.
DISCUSSION
Using a multidisciplinary and spinal-cord-specific approach, we show that a Ca 2+ -independent KOR-PLC-PKCd-GRPR pathway is activated in response to KOR agonists, resulting in an attenuation of GRPR function, which is required for development of chronic itch in mice (Sun and Chen, 2007;Zhao et al., 2013). Consistent with previous findings showing that GRPR is minimally required for histaminergic itch (Akiyama et al., 2014;Sun et al., 2009;Zhao et al., 2013), we show that spinal KOR activation does not impact histaminergic itch, including NMBmediated scratching behavior (Wan et al., 2017;Zhao et al., 2014b). Our data suggest that the anti-histaminergic itch effect elicited by systemic KOR agonists is likely attributable to peripheral KOR (Bigliardi-Qi et al., 2007;Chuang et al., 1995;Suzuki et al., 2001;Togashi et al., 2002). Taken together, these findings illustrate a spinal mechanism by which KOR agonists attenuate itch transmission.
Spinal KOR activation reduces itch transmission by inhibiting the function of GRPR in a cell-autonomous manner in KOR-GRPR neurons. In addition to GRPR excitatory neurons, KOR is also expressed in non-GRPR GABAergic neurons in the spinal cord (Xu et al., 2004). Activation of KOR in non-GRPR neurons by KOR agonists could induce thermal analgesia (Nakazawa et al., 1990;Porreca et al., 1984;Xu et al., 2004), likely via the G ai signaling pathway (Al-Hasani and Bruchas, 2011;Grudt and Williams, 1993;Randi c et al., 1995). Because of itch and pain antagonism and their distinct neuronal outputs, it is unlikely that inhibition of non-GRPR KOR neurons in the spinal cord would contribute to itch inhibition. Importantly, KOR-agonist-induced anti-itch effect is lost in Grpr KO mice, suggesting that GRPR is required for mediating anti-itch effects. The remaining KOR-agonist-resistant scratching effect induced by CQ in Grpr KO mice is likely mediated by glutamatergic transmission in Grpr À/À neurons that are also required for histaminergic transmission (Akiyama et al., 2014;Wan et al., 2017). Taken together, depending on GRPR and neurotrans-mitter expression, activation of KOR neurons in the spinal cord gives rise to two distinct behavioral outputs: GABAergic KOR neurons compute anti-nociceptive output, whereas excitatory KOR-GRPR neurons convey anti-pruriceptive information ( Figure 7H). The dual role of KOR is reminiscent of MOR1 and 5HT1A, which is predominantly implicated in anti-nociceptive signaling, with only a small percentage communicating with GRPR, in contrast to KOR-GRPR, to induce or facilitate itch (Liu et al., 2011;Zhao et al., 2014a). Thus, depending on agonists and type of GPCRs, GRPR neurons could serve as a gate to control the output of itch information.
The observation that PTX fails to reverse the effect of U-50,488 demonstrates that KOR signals via a non-canonical, Gαi-independent pathway that is specific to KOR-GRPR excitatory neurons. The finding that KOR-activation-mediated signal transduction is independent of Gβγ as well as Gαs suggests that KOR may activate PLC independently of G protein. How could KOR activate PKCδ, but not PLCβ, which acts downstream of GRPR (Liu et al., 2011), in GRPR neurons? It is possible that KOR activation selectively recruits PKCδ to phosphorylate GRPR, thereby dampening PLCβ signaling in a competitive manner (Figure 7G). It is also likely that the presence of a GRPR-KOR heteromeric complex switches the downstream kinase cascade of the canonical KOR pathway to occlude PLCβ activation.
One remarkable finding is that Ca 2+ -independent PKCd activation via membrane translocation provides a major mechanism by which GRPR is phosphorylated and desensitized. Importantly, behavioral studies indicate that PKCd-mediated desensitization of GRPR is long lasting. Previous in vitro studies suggested that the classic GRPR-PLC-IP 3 Ca 2+ signaling activates kinases other than PKC (Kroog et al., 1995), and GRPR agonist-induced Ca 2+ -dependent desensitization is transient and lasts less than 2 min (Zhao et al., 2014a). It is possible that distinct phosphorylation sites at the C-terminal domain of GRPR may contribute to the duration of desensitization (Ally et al., 2003). Whether PKCd desensitizes agonist-unoccupied GRPR via direct phosphorylation or indirectly via other kinases awaits further studies (Kelly et al., 2008). Direct examination of spinal GRPR phosphorylation levels after KOR activation requires an antibody that can specifically detect phosphorylated GRPR in vivo, which contains multiple potential phosphorylation sites on its C terminus (Ally et al., 2003). Several lines of evidence fail to support the view that the endogenous dynorphin is an important modulator of itch transmission (Kardon et al., 2014): first, consistent with previous studies using either Pdyn KO mice or ablation of spinal Dyn + neurons (Duan et al., 2014;Kardon et al., 2014), Pdyn KO mice showed normal acute and chronic itch behaviors. Second, KOR agonists attenuate GRPR function via a long-lasting phosphorylation rather than acute desensitization process, the former of which rarely occurs under normal physiological condition. These observations are in support of the notion that KOR-GRPR cross-signaling induced by exogenous KOR agonists reflects an artificial process. While there is no evidence that the endogenous dynorphin modulates itch in normal physiological context, the possibility that a dramatic down-regulation of Pdyn in chronic itch condition (data not shown) may suggest an attenuation of inhibitory circuit cannot be excluded.
In summary, we demonstrate a non-canonical opioid signaling mechanism by which GRPR activity is attenuated by KOR-mediated cross-signaling in the spinal cord of mice. The finding of the inhibitory effect of KOR activation and its downstream signaling components on distinct types of chronic itch (BRAF Nav1.8 , ACD, and AEW) suggests a possibility for a broader application of KOR-GRPR-based anti-itch strategy to the treatment of chronic itch with various etiologies.
Itch Behavior
Mice were individually put into observation boxes and videotaped. The videos were played back on a computer and quantified by an observer who was blinded to the treatment or mice genotype. A scratch is defined as a bout of scratching that occurs after the mouse lifts its hind paw to the moment the hind paw is returned to the ground or mouth (Sun and Chen, 2007). For the dry-skin model, mice were painted twice daily with a mixture of acetone and diethyl ether (1:1) followed by water. Scratching behavior directed at the neck was counted in the morning before treatment (Miyamoto et al., 2002;Zhao et al., 2013). For ACD model, mice were sensitized by applying 100 mL of 0.15% DNFB on abdominal skin. One week later, mice were challenged by nape application of 50 mL of 0.15% DNFB every 2 or 3 days. Scratching responses were measured 24 hr after applying DNFB .
RNAscope In Situ Hybridization and Immunohistochemistry
RNAscope ISH and IHC staining were performed as described (Wang et al., 2012;Zhao et al., 2014a). Spinal sections were processed according to the manufacturer's instructions in the RNAscope Fluorescent Multiplex Assay v2 manual for fixed frozen tissue (Advanced Cell Diagnostics).
siRNA Studies
Prkcd siRNA (Sigma) was delivered to the lumbar region of the spinal cord via i.t. injection as described (Liu et al., 2011;Zhao et al., 2014a). Mice were injected twice daily for 3 consecutive days, and behavior was performed 24 hr after the last injection.
Dissociation of Dorsal Horn Neurons and Calcium Imaging
Primary culture of spinal dorsal horn neurons was prepared from 5-to 7-dayold C57BL/6J mice and seeded onto 12-mm coverslips coated with poly-Dlysine. Calcium imaging was performed 3-5 days after seeding as described previously (Zhao et al., 2014a).
Whole-Cell Phosphorylation Assay
HEK293 cells expressing FLAG-KOR and Myc-GRPR were incubated in 10 mM U-50,488 or 1 mM PMA at 37 C and lysed as described (Liu et al., 2011). Proteins were incubated with mouse anti-Myc antibody (Sigma) overnight. The complex was precipitated, resolved on polyacrylamide gels, and transferred to polyvinylidene fluoride (PVDF) membranes (Millipore). Proteins were detected by immunoblotting with mouse anti-phosphoserine antibody (1:2,500; Sigma) overnight, and the blot was developed by enhanced chemiluminescence (Thermo Scientific).
PKC Translocation Assay
KOR-GRPR HEK293 cells transiently expressing PKCδ-EGFP or PKCα-EGFP (kindly provided by Dr. Peter M. Blumberg) were seeded in 29-mm glass-bottom dishes (In Vitro Scientific). After 24 hr, the subcellular distribution of the EGFP-fused protein was analyzed on a Leica TCS SPE confocal microscope.
Kazantsev dynamo in turbulent compressible flows
We consider the kinematic fluctuation dynamo problem in a flow that is random, white-in-time, with both solenoidal and potential components. This model is a generalization of the well-studied Kazantsev model. If both the solenoidal and potential parts have the same scaling exponent, then, as the compressibility of the flow increases, the growth rate decreases but remains positive. If the scaling exponents for the solenoidal and potential parts differ, in particular if they correspond to typical Kolmogorov and Burgers values, we again find that an increase in compressibility slows down the growth rate but does not turn it off. The slow down is, however, weaker and the critical magnetic Reynolds number is lower than when both the solenoidal and potential components display the Kolmogorov scaling. Intriguingly, we find that there exist cases, when the potential part is smoother than the solenoidal part, for which an increase in compressibility increases the growth rate. We also find that the critical value of the scaling exponent above which a dynamo is seen is unity irrespective of the compressibility. Finally, we realize that the dimension $d = 3$ is special, since for all other values of $d$ the critical exponent is higher and depends on the compressibility.
I. INTRODUCTION
Astrophysical objects typically have magnetic fields over many ranges of scales. The growth and saturation of these magnetic fields are the subject of dynamo theory. Most astrophysical flows are turbulent, hence the astrophysically relevant magnetic fields are generated by turbulent dynamos. Broadly speaking, we understand two different kinds of dynamo mechanism in turbulent fluids: (a) one that generates magnetic fields whose characteristic length scales are significantly larger than the energy-containing length scales of the flow -large-scale dynamos; and (b) one that generates small-scale magnetic fields -the fluctuation dynamo. The small-scale tangled magnetic fields in the interstellar medium (ISM) or the Sun are supposed to be generated by the fluctuation dynamo. If we are interested in only the initial growth of the magnetic field but not its saturation, then we can ignore the Lorentz force by which the magnetic field acts back on the flow and reduce a non-linear problem to a linear one -the kinematic dynamo problem. A pioneering work on the kinematic fluctuation dynamo was by Kazantsev (1968), who approximated the velocity field by a random function of space and time. In particular, the velocity field is assumed to be statistically stationary, homogeneous and isotropic with zero divergence (under the latter assumption the velocity is called solenoidal or incompressible). It is also assumed to be short-correlated in time, more specifically white-in-time, and its correlation function in space is assumed to have a power-law behavior with an exponent 0 ≤ ξ ≤ 2. By virtue of the white-in-time nature of the velocity field, it is possible to write down a closed equation for the equal-time two-point correlation function of the magnetic field. Consequently, it is possible to show that a dynamo exists -the magnetic energy grows exponentially in time -iff the exponent ξ > 1 (Vergassola, 1996). This result is an example of an anti-dynamo theorem. The Kazantsev model has played an important role in our understanding of the kinematic fluctuation dynamo excited by turbulent flows à la Kolmogorov, see e.g. the review by Brandenburg & Subramanian (2005) and references therein. The same model for the velocity field has been extensively used to study the scaling behavior and intermittency of advected passive scalar fields -the Kraichnan model (Kraichnan, 1968) of passive-scalar turbulence, see e.g. Falkovich et al. (2001) for a review.
In this paper we are interested in a generalization of the Kazantsev model to compressible turbulence, which is relevant for the fluctuation dynamo in the ISM. To the best of our knowledge, the earliest attempt to generalize the Kazantsev model to compressible flows was by Kazantsev, Ruzmaikin & Sokolov (1985), who modelled the flow not by a white-in-time process but by random weakly-damped sound waves, with energy concentrated on a single wave number, to find that the growth rate of the dynamo is proportional to the fourth power of the Mach number. More recently, Rogachevskii & Kleeorin (1997) generalized the Kazantsev model to compressible flows, preserving its white-in-time nature, but adding a potential (irrotational) component to the velocity field. Both the solenoidal and the potential components are assumed to have the same scaling exponent, 0 ≤ ξ ≤ 2. In this model, Rogachevskii & Kleeorin (1997) found that the inclusion of compressible modes always makes it more difficult to excite the dynamo. Later Schekochihin & Kulsrud (2001) and Schekochihin et al. (2002) studied the same problem for the case of smooth velocity fields, ξ = 2, which corresponds to the limit of large magnetic Prandtl numbers. Schober et al. (2012) also tried to extend the Kazantsev model with the purpose of understanding the magnetic-field growth in the ISM. However, for a tensor function to be the correlation of an isotropic vector field, its Fourier transform must be positive semi-definite (Monin & Yaglom, 1975), and the correlation tensor that Schober et al. (2012) prescribed for the velocity field violates this condition.
Recent direct numerical simulations of compressible fluid turbulence (Wang et al., 2013, 2018) have shown that the spectra of the flow can be separated into a solenoidal and a potential part, where the solenoidal part scales with an exponent quite close to the classical Kolmogorov result for incompressible turbulence -the energy spectrum of the solenoidal part scales with exponent −5/3 in the inertial range -and the scaling exponent for the potential part is close to that of the turbulent Burgers equation -the energy spectrum of the potential part scales with an exponent of −2 in the inertial range. How will the Kazantsev dynamo problem change if we use two independent scaling exponents for the solenoidal and the potential parts of the energy spectrum of the velocity field? This is the main task we set for ourselves in this paper. We end this introduction with a warning to our reader: because of the popularity of the Kazantsev model as an analytically tractable model for the turbulent dynamo, the literature on it is large and diverse. We have so far cited only those papers that are directly relevant to our work. For a detailed introduction we suggest several reviews (Brandenburg & Subramanian, 2005; Falkovich et al., 2001; Tobias, Cattaneo & Boldyrev, 2012; Zeldovich, Ruzmaȋkin & Sokolov, 1990) and references therein.
II. MODEL
We model the plasma as a conducting fluid at scales where the magnetohydrodynamics (MHD) equations are valid. Consequently the magnetic field, B(x, t), evolves according to the induction equation (Chandrasekhar, 1961)

∂B/∂t = ∇ × (u × B) − κ ∇ × J,    (1)

where J = ∇ × B is the current, κ the magnetic diffusivity, and u(x, t) the velocity. Furthermore, Maxwell's equations imply ∇ · B = 0. Two dimensionless numbers characterize the MHD equations: the Reynolds number Re ≡ V L/ν and the magnetic Reynolds number Re_m ≡ V L/κ, where V is a typical large-scale velocity, L is the correlation length of the velocity field, and ν the kinematic viscosity of the fluid. The ratio of Re_m and Re is the magnetic Prandtl number Pr ≡ ν/κ. Instead of solving the momentum equation for the velocity, as is usual in MHD, the Kazantsev model assumes that the velocity u is a statistically stationary, homogeneous, isotropic and parity-invariant, Gaussian random field with zero mean and correlation

⟨u_i(x + r, t) u_j(x, t′)⟩ = D_ij(r) δ(t − t′),    (2)

where D_ij(r) is the spatial correlation tensor and d is the spatial dimension of the flow. In this paper, we use d = 3 except in Section IV, where results for general d are presented. The velocity field is also assumed to be Galilean invariant, hence it is useful to introduce the second-order structure function S_ij(r) = D_ij(0) − D_ij(r), which is a Galilean-invariant quantity that describes the statistics of the velocity increments. In view of statistical isotropy and parity invariance, S_ij(r) takes the form (Monin & Yaglom, 1975; Robertson, 1940)

S_ij(r) = S_N(r) (δ_ij − r̂_i r̂_j) + S_L(r) r̂_i r̂_j,

where r̂_i = r_i/r, and S_N(r) and S_L(r) are called the normal (also known as transverse or lateral) and longitudinal second-order structure functions, respectively.
A. Solenoidal random flows

Kazantsev (1968) considered the three-dimensional solenoidal case (∇ · u = 0) and the limits of vanishing Pr and infinite Re and Re_m, in which the velocity structure functions are scale invariant:

S_L(r) = D r^ξ and, by incompressibility, S_N(r) = [(d − 1 + ξ)/(d − 1)] S_L(r), which for d = 3 reduces to S_N(r) = [(2 + ξ)/2] D r^ξ.

The positive constant D determines the magnitude of the fluctuations of the velocity increments, while the scaling exponent ξ varies between 0 and 2, the latter value describing a spatially smooth velocity field. The above choice for the structure functions corresponds to a kinetic-energy spectrum of the form E(k) ∝ k^(−1−ξ). We remind the reader that, owing to the δ-correlation in time, some care should be taken in comparing the flow u defined in equation (2) with a time-correlated flow. Consider indeed a flow with the same statistical properties as u but with nonzero correlation time and structure functions that scale as r^α. The two flows yield the same laws of Lagrangian dispersion provided ξ = 1 + α/2 (Falkovich et al., 2001). In particular, the Kolmogorov phenomenology of fully developed turbulence is reproduced by the δ-correlated flow if ξ = 4/3.
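The relation between S_N and S_L quoted above follows from incompressibility, and it can be checked symbolically. The following SymPy sketch assumes S_L = D r^ξ and the solenoidal relation S_N = S_L + r ∂_r S_L/(d − 1):

```python
# Symbolic check (SymPy): with S_L = D*r**xi, the solenoidal relation
# S_N = S_L + r*dS_L/dr/(d - 1) gives S_N/S_L = (d - 1 + xi)/(d - 1).
import sympy as sp

r, D, xi, d = sp.symbols("r D xi d", positive=True)
S_L = D * r**xi
S_N = S_L + r * sp.diff(S_L, r) / (d - 1)
print(sp.simplify(S_N / S_L))   # xi/(d - 1) + 1, i.e. (d - 1 + xi)/(d - 1)
```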
B. Compressible random flows
One way to include the effect of compressibility into the Kazantsev model is to modify, in d dimensions, S_L(r) and S_N(r) by introducing a new parameter ℘, the degree of compressibility of the velocity field, which varies between 0 (solenoidal, or incompressible, flow) and 1 (potential, or irrotational, flow) (Celani, Lanotte & Mazzino, 1999; Elperin, Kleeorin & Rogachevskii, 1995; Gawȩdzki & Vergassola, 2000). In this model both the solenoidal and the potential parts of the flow have the same scaling exponent ξ.
A. Closed equation for the correlation function of the magnetic field
Given the velocity field, the task is to calculate the equations obeyed by the correlation functions of the magnetic field, C_ij(r, t) ≡ ⟨B_i(x + r, t) B_j(x, t)⟩, where the averages are calculated over the statistics of the velocity field. The magnetic field is assumed to have the same spatial statistical symmetries as the velocity field. Its correlations are thus written in the isotropic form C_ij(r, t) = C_N(r, t)(δ_ij − r̂_i r̂_j) + C_L(r, t) r̂_i r̂_j, where a single scalar function C_L(r, t) is sufficient to define C_ij(r, t) thanks to the solenoidality of the magnetic field, which fixes C_N(r, t) in terms of C_L(r, t) (Monin & Yaglom, 1975; Robertson, 1940). By virtue of the Gaussian and white-in-time nature of the velocity field, it is possible to write a closed equation for the evolution of C_ij(r, t) in a straightforward manner, by using the induction equation (1) and then averaging over the statistics of the velocity field. To obtain a closed equation, however, we need to calculate averages of a triple product: that of the velocity field, of the magnetic field and of its spatial derivative. This has been obtained by many different methods. A well-established approach employs a result known as "Gaussian integration by parts": for a Gaussian, not-necessarily white-in-time, vector-valued noise z(t), with components z_j(t), and an arbitrary functional F(z) of it, ⟨z_j(t) F(z)⟩ = ∫ ds ⟨z_j(t) z_k(s)⟩ ⟨δF(z)/δz_k(s)⟩; see e.g. Zinn-Justin (1996, Section 4.2) for a proof. At the next step we need to integrate equation (1) formally to obtain the magnetic field as a function of the velocity field and then to calculate the necessary functional derivatives. Then we have to take the limit s → t. As the noise correlation becomes singular in this limit, we need to replace the Dirac delta function by a regularised even function and then take the limits. This regularisation is equivalent to using the Stratonovich prescription for the noise. This method has been used extensively in dynamo theory (see, e.g., Mitra & Brandenburg, 2012; Schekochihin et al., 2002; Schekochihin & Kulsrud, 2001; Seshasayanan & Alexakis, 2016; Vergassola, 1996) to calculate similar or higher-order correlation functions. Hence we skip the details of the derivation and directly write down the result, the closed equation (8) for C_L(r, t) (see also Schekochihin et al., 2002, and references therein).
B. Schrödinger formulation
One of Kazantsev's main results is the formulation of the turbulent dynamo effect as the trapping of a quantum particle in a one-dimensional potential whose shape is determined by the functional form of the velocity structure functions. This formulation leads to an appealing interpretation of the dynamo effect.
Equation (8) is a partial differential equation with one space and one time dimension. By a change of variables relating ψ(r, t) to C_L(r, t) through an r-dependent factor, equation (9), equation (8) is turned into an imaginary-time Schrödinger equation with space-dependent mass for the function ψ(r, t) (Schekochihin et al., 2002), equation (10), with mass m(r) and potential U(r) given by equations (11) and (12). The function ψ(r, t) has the same time dependence as C_L(r, t).
We therefore seek solutions of the form ψ(r, t) = ψ_γ(r) e^(−γt), where ψ_γ(r) are the eigenstates of the Schrödinger operator in equation (10) and γ the associated energies. The magnetic correlation grows in time if there exist negative-energy (γ < 0) eigenstates. If so, the energy of the ground state, γ_0, yields the asymptotic growth rate |γ_0|. From equation (10), γ can be written in terms of ψ_γ, m(r) and U(r). Since m(r) > 0 for all r (Schekochihin et al., 2002), γ is negative if and only if the effective potential U_eff(r) ≡ m(r)U(r) admits negative energies. Thus the problem of the growth of the magnetic correlations is mapped into that of the existence of negative-energy states for the one-dimensional potential U_eff(r): the energy of the ground state of the potential U_eff(r) is equal, in absolute value, to the growth rate of the fastest-growing mode in the kinematic dynamo problem.
C. Solenoidal random flows
In the original Kazantsev model, the flow is assumed to be solenoidal and the longitudinal and normal structure functions have the same scaling exponent, as given in equation (4). The effective potential is obtained by substituting equation (4) into equation (8) and simplifying (e.g., Vergassola, 1996); in the resulting expression it is natural to introduce the diffusive scale, r_κ ≡ (κ/D)^(1/ξ), at which the diffusive and the advective terms in the induction equation (1) balance each other. For r ≪ r_κ, U_eff(r) ∼ 2/r² and is therefore repulsive. For r ≫ r_κ, U_eff(r) ∼ −(3ξ²/4 + 3ξ/2 − 2)/r²; hence the effective potential is negative at large r. By examining the shape of U_eff(r), it is possible to conclude that the critical value of ξ for the kinematic dynamo effect is ξ_crit = 1 (Kazantsev, 1968; see also Vergassola, 1996). For ξ < 1, U_eff(r) is indeed everywhere greater than a potential of the form −c/r² with c < 1/4, which does not have any negative-energy eigenstates (Landau & Lifshitz, 1958). Hence the same holds for U_eff(r), and consequently no dynamo exists. For ξ > 1, the effective potential behaves at large distances as U_eff(r) ∼ −c/r² with c > 1/4. A potential with such large-r behavior has a discrete spectrum containing an infinite number of negative-energy levels (Landau & Lifshitz, 1958), so the magnetic correlation can grow exponentially.
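A quick worked check of the threshold quoted above (added here; it is implicit in the argument but not written out in the text): at the critical exponent, the large-r coefficient of U_eff(r) equals exactly the 1/4 threshold for bound states in a −c/r² potential,

```latex
\left.\frac{3\xi^{2}}{4}+\frac{3\xi}{2}-2\right|_{\xi=1}
=\frac{3}{4}+\frac{3}{2}-2=\frac{1}{4},
\qquad\text{so }\xi_{\mathrm{crit}}=1\text{ is exactly the marginal case }c=\tfrac{1}{4}.
```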
The incompressible case was generalized to d dimensions by Gruzinov et al. (1996) and Vergassola (1996). In particular, for d = 2, U_eff(r) is repulsive everywhere and hence there cannot be a dynamo effect, in accordance with Zel'dovich's anti-dynamo theorem (Zel'dovich, 1957).
D. Compressible random flows
Next we consider the model of compressible flow given by equation (5), where both the solenoidal and the potential components of the structure functions scale with the same exponent. Upon substitution of equation (5) into equations (11) and (12), the mass and potential functions are found to take the forms of equations (15) and (16); in particular, U(r) = [a_0 + a_1 r^ξ + a_2 r^(2ξ)] / {4r² [2κ + D r^ξ (d − 1)(℘ξ + 1)]}, where the coefficients a_0, a_1, a_2 are given in appendix A. Let us examine the d = 3 case. At small distances (r ≪ r_κ), the asymptotic behavior of U_eff(r) is unchanged compared to the incompressible case. For r ≫ r_κ, the effective potential again decays as −c/r², with a coefficient c that now depends on ξ and ℘. By studying the form of U_eff(r) as in Section III.B, it can be shown that the critical value of ξ for the dynamo effect is not affected by the degree of compressibility, i.e. ξ_crit = 1 for all values of ℘ (Rogachevskii & Kleeorin, 1997). Indeed, for ξ < 1, U_eff(r) behaves asymptotically as −c/r² with c < 1/4, and both a_0 and a_1 are positive for all values of ℘. Hence U_eff(r) > −c/r² with c < 1/4 for all r, and it does not admit negative-energy eigenstates. For ξ > 1, U_eff(r) ∼ −c/r² with c > 1/4, and an infinite number of states with negative energy can exist. Even though ξ_crit does not vary with ℘, for ξ > 1 the magnetic growth is affected by the degree of compressibility of the flow. For ξ = 2, the growth rate is |γ_0| = (15/2 − 5℘)D, while at large separations and long times the correlation C_L(r, t) behaves, up to logarithmic corrections, as r^(−5/2) exp(|γ_0|t) (Schekochihin et al., 2002). For 1 < ξ < 2, to our knowledge no analytical expression for the growth rate |γ_0| is available. We calculate it numerically by applying to equation (10) a variation-iteration method that is similar to techniques used to find the largest eigenvalue of very large matrices. This method is described in detail in appendix B. The result is shown in Figure 1, where we plot |γ_0|t_κ versus ξ for several values of ℘; here t_κ ≡ r_κ²/κ is the time scale associated with magnetic diffusion. We find that, irrespective of ℘, the growth rate is positive for all values of ξ > 1 and increases with ξ, i.e., as the spatial regularity of the flow improves. For a fixed ξ, the effect of compressibility is always to decrease the growth rate, i.e., to make the dynamo weaker. Rogachevskii & Kleeorin (1997) estimated the growth rate by asymptotic matching; they found the same qualitative effect of compressibility.
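To make the numerical step above concrete, the following is a minimal illustrative sketch (added here; it is not the authors' variation-iteration code of appendix B). It discretises a one-dimensional Schrödinger operator with a toy effective potential of the qualitative shape discussed above, a repulsive 2/r² core below the diffusive scale and a −c/r² tail beyond it, and reads off the lowest eigenvalue; a negative value corresponds to dynamo-like growth. The actual calculation in the paper uses the space-dependent mass m(r) and potential U(r) of equations (11) and (12) and an iterative scheme suited to much larger grids.

```python
import numpy as np

# Toy sketch only (assumed potential, not the paper's m(r), U(r)):
# ground state of  H psi = -psi'' + U_eff(r) psi  on a finite grid.
N = 2000
r = np.linspace(0.01, 60.0, N)     # radial grid, with the diffusive scale set to 1
h = r[1] - r[0]

# c > 1/4 is the bound-state threshold for a -c/r^2 tail; it is taken well above
# threshold here so the ground state is deep enough to resolve on a modest grid.
c = 10.0
U_eff = np.where(r < 1.0, 2.0 / r**2, -c / r**2)

# tridiagonal finite-difference Hamiltonian with Dirichlet boundary conditions
H = (np.diag(2.0 / h**2 + U_eff)
     + np.diag(-np.ones(N - 1) / h**2, 1)
     + np.diag(-np.ones(N - 1) / h**2, -1))

E0 = np.linalg.eigvalsh(H)[0]
print("lowest eigenvalue:", E0)    # negative => growing magnetic correlations in the analogy
```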
We find the behavior of C_L(r, t) for large r by replacing m(r) and U(r) in equation (10) with their asymptotic expressions. The resulting equation for ψ_γ(r) is a Bessel differential equation, whose non-diverging solution involves K_b, the modified Bessel function of the second kind of order b = 1/(2 − ξ), and a constant β = b√(2/(℘ξ + 1)). At large r this solution behaves as exp[−β √(|γ|t_κ) (r/r_κ)^(1−ξ/2)]. Furthermore, the long-time behavior of the magnetic correlation is determined by the ground state ψ_γ0(r). From equation (9) we therefore conclude that, for r ≫ r_κ and t ≫ t_κ,

C_L(r, t) ∝ r^(−2−ξ/2) exp[−β √(|γ_0|t_κ) (r/r_κ)^(1−ξ/2)] e^(|γ_0|t).    (19)
As far as the spatial dependence of the magnetic correlation is concerned, we thus recover the stretched-exponential behavior of the incompressible case; the degree of compressibility of the flow only affects |γ_0|, and hence the rate at which the stretched exponential function decays in space, but not the stretching exponent.
E. Solenoidal-potential decomposition
The results of the previous section indicate that the spatial regularity of the flow favours the magnetic growth, whereas compressibility hinders it. It has now been realized that for turbulence at moderate Mach numbers the solenoidal and potential parts of the flow have different scaling exponents; more precisely, the solenoidal part of the kinetic-energy spectrum displays the Kolmogorov scaling k^(−5/3), while the potential one displays the Burgers scaling k^(−2) (Wang et al., 2013, 2018). It is therefore interesting to investigate the interplay between spatial regularity and compressibility in the case where the potential part of the flow is more regular than the solenoidal one. We thus consider velocity structure functions, equations (20) and (21), in which the solenoidal and potential contributions scale as D_S r^(ξ_S) and D_P r^(ξ_P), respectively (see Monin & Yaglom (1975, p. 57) for the solenoidal-potential decomposition of an isotropic random field); here ξ_S and ξ_P are the scaling exponents of the solenoidal (℘ = 0) and potential (℘ = 1) components, respectively, and D_P and D_S are positive coefficients. Varying the degree of compressibility thus interpolates linearly between a solenoidal flow with scaling exponent ξ_S and a potential flow with exponent ξ_P. The mass and the potential functions can be calculated by inserting S_L(r) and S_N(r) from equations (20) and (21) into equations (11) and (12); the product of m(r) and U(r) yields the effective potential.
In the diffusive range, r ≪ r_κ, U_eff(r) ∼ 2/r² independently of the values of ξ_S and ξ_P; the dynamics is indeed dominated by magnetic diffusion in this range. If ξ_S > ξ_P (ξ_S < ξ_P), the asymptotic behavior of U_eff(r) is the same as that of a solenoidal flow with exponent ξ_S (a potential flow with exponent ξ_P). As noted in Section III.D, the critical value of the scaling exponent equals unity, ξ_crit = 1, in both cases. Therefore, the critical scaling exponent for the dynamo effect does not depend on whether or not the solenoidal and potential components scale differently, and if at least one of the two exponents is greater than unity, a dynamo is obtained.
The magnetic growth rate, by contrast, is expected to vary with the spatial regularity of the potential component. Since for a generic ℘ the solenoidal and potential components of the velocity field now scale differently, two scales, r_κ^(S) ≡ (κ/D_S)^(1/ξ_S) and r_κ^(P) ≡ (κ/D_P)^(1/ξ_P), can be formed by balancing the diffusion term in equation (1) with either the solenoidal or the potential contribution to the advection term. In the calculation of the magnetic growth rate, we set r_κ^(S) = r_κ^(P) ≡ r_κ. The relative weight of the solenoidal and the potential components is thus determined solely by ℘. Furthermore, such a choice allows us to define a single diffusive time scale t_κ, which is used to rescale the growth rate. We do not know any analytical method allowing us to calculate the growth rate exactly as a function of ξ and ℘. Hence we use the variation-iteration method to calculate it numerically. In Fig. 2 we plot the non-dimensionalized growth rate as a function of ℘ for ξ_S = 4/3 and for several different values of ξ_P. According to the discussion at the end of Section II.A, the scaling behavior of the kinetic-energy spectrum observed in Wang et al. (2013, 2018), i.e., k^(−5/3) for the solenoidal component and k^(−2) for the potential one, corresponds to a white-in-time flow with ξ_S = 4/3 and ξ_P = 3/2 (green square symbols in Fig. 2). The case ξ_S = ξ_P = 4/3 corresponds to both the solenoidal and the potential components scaling with the Kolmogorov exponent (red * symbols in Fig. 2). In both of these cases the growth rate monotonically decreases as a function of ℘, but the growth rate for the case ξ_S = 4/3 and ξ_P = 3/2 is higher for all values of ℘. Remarkably, for greater values of ξ_P, we find that |γ_0| varies non-monotonically or even increases with ℘ (inset of Fig. 2). Hence compressibility can increase the magnetic growth if the potential component of the flow is sufficiently regular. This is a new qualitative result.
F. Critical magnetic Reynolds number
The magnetic Reynolds number Re_m measures the relative intensity of the stretching of magnetic lines by the flow and of Ohmic dissipation. Therefore, for the dynamo effect to take place, Re_m must be sufficiently high and exceed a critical value Re_m^crit. We consider first the case in which both the solenoidal and the potential components of the velocity field display the same scaling exponent (ξ_S = ξ_P = ξ). In the original version of the Kazantsev model, Re_m is infinite, since the structure functions of the velocity field are assumed to increase indefinitely with the scale separation. It is possible, however, to modify the form of S_L(r) and S_N(r) in such a way as to introduce a finite correlation length L beyond which the structure functions saturate to a constant value (Artamonova & Sokolov, 1986; Novikov, Ruzmaȋkin & Sokolov, 1983; Ruzmaȋkin & Sokolov, 1981; Ruzmaikin, Sokoloff & Shurukov, 1989; Schekochihin et al., 2002). This allows the definition of the magnetic Reynolds number as Re_m ≡ L/r_κ = L(D/κ)^(1/ξ). Here we follow Monin & Yaglom (1975, p. 55) (see also Arponen & Horvai, 2007; Falkovich et al., 2001; Vincenzi, 2002) and introduce L in the model by modifying the kinetic-energy spectrum to the regularised form of equation (25) (the reader is referred to appendix C for more details). The specific form of E(k) is immaterial and only affects numerical details. What is important is that the power-law behavior E(k) ∼ k^(−1−ξ) is recovered for k ≫ L^(−1), while E(k) is regular for k ≪ L^(−1). The structure functions for finite Re_m can then be calculated from E(k) (see appendix C) and can be inserted into equations (11) and (12) to obtain the effective potential U_eff(r), and hence the magnetic growth rate via the variation-iteration method.
Within the Schrödinger formulation of the Kazantsev model, the existence of a critical magnetic Reynolds number can be interpreted as follows: a finite value of L generates a potential barrier at spatial scales of the order of L and greater (Novikov, Ruzmaȋkin & Sokolov, 1983; Schekochihin et al., 2002; Vincenzi, 2002). As L (and hence Re_m) is decreased, the potential well rises, until negative-energy states cease to exist and the dynamo disappears. Figure 3 shows the rescaled magnetic growth rate as a function of Re_m (a) for fixed ξ = 4/3 and different values of ℘ and (b) for fixed ℘ = 0 and different values of ξ. We find yet another instance in which the degree of compressibility and the spatial regularity of the flow have opposite effects on the dynamo: compressibility increases Re_m^crit, whereas the smoothness of the flow makes it decrease.
It is thus interesting to examine the interplay between these two effects by considering, as in Sect. III.E, a kinetic-energy spectrum such that the solenoidal component of the velocity field displays the Kolmogorov scaling, while the potential one displays the Burgers scaling; the two components enter with positive coefficients A_S and A_P. The structure functions corresponding to this spectrum can be found in appendix C.
Once more we calculate the magnetic growth rate numerically by using the variation-iteration method. As in Sect. III.E, a single diffusive scale r_κ is used in the definitions of Re_m and t_κ. The magnetic growth rate as a function of Re_m is shown in Fig. 4 for the case ξ_S = 4/3, ξ_P = 3/2 and for representative values of the degree of compressibility. Compared to the case ξ_S = ξ_P = 4/3, the critical magnetic Reynolds number is always lower when the potential component of the flow displays the Burgers scaling. In this latter case, Re_m^crit depends weakly on ℘, which suggests that the higher spatial regularity compensates for the effect of the increase in compressibility.
IV. KINEMATIC DYNAMO IN d DIMENSIONS
In this section, we examine the dependence of the Kazantsev model (with a single scaling exponent) on the space dimension. In equation (2), the dimension d may indeed be regarded as a parameter that can also take values different from 2 or 3. So far in this paper we have limited ourselves to the case d = 3. Gruzinov et al. (1996) showed that if the flow is smooth (ξ = 2) and incompressible (℘ = 0), there is a dynamo effect only when the spatial dimension is in the interval 2.103 ≤ d ≤ 8.765. Arponen & Horvai (2007) extended this result to a rough flow (0 < ξ < 2) and showed that the curve ξ_crit vs d is convex and has its minimum at d = 3. Therefore, the range of spatial dimensions over which the dynamo effect can take place shrinks as the flow becomes rougher or, in other words, a higher degree of spatial regularity of the flow is necessary if d ≠ 3.
To study the effect of compressibility on ξ_crit for a general dimension d, we once again examine the large-r form of U_eff(r). By repeating the argument used in Sections III.B and III.D, we obtain that ξ_crit is the root, lying in the interval 0 ≤ ξ_crit ≤ 2, of an algebraic equation involving the coefficient a_2 defined in equations (16) and (A3). Figure 5 shows ξ_crit as a function of d for different values of ℘. Clearly, the three-dimensional case is peculiar inasmuch as it is the only one for which ξ_crit does not depend upon ℘, and the critical exponent is lowest for d = 3. For all other values of d, increasing ℘ broadens the range of spatial dimensions over which the magnetic correlation grows, and lowers the critical exponent for the appearance of the dynamo effect. No exponential growth of the magnetic field is found for d = 2, irrespective of the value of ℘.
V. CONCLUSIONS
In this paper we have used a model that is a generalization of the Kazantsev model to flows that have both solenoidal and potential components. We find, in agreement with earlier results, that the critical value of the exponent above which a dynamo is seen is unity in all cases. If both the solenoidal and the potential parts have the same scaling exponent, then, as the compressibility of the flow increases, the growth rate of the dynamo decreases but does not go to zero. In other words, even when the flow has only a potential component, the dynamo is not turned off but merely slowed down. This qualitative result was already known from the approximate analytical work of Rogachevskii & Kleeorin (1997); we have now provided numerical results that support the same conclusion. We also show that the flow compressibility increases the critical magnetic Reynolds number for the dynamo effect, whereas spatial regularity lowers it. More importantly, we have considered the more realistic case where the solenoidal and the potential parts have different scaling exponents. If we consider the case where these scaling exponents for the solenoidal and potential parts correspond to typical Kolmogorov and Burgers values, then we again reach the same qualitative conclusion: an increase in compressibility slows down the growth rate of the dynamo but does not turn it off. The slowdown due to compressibility is, however, weaker than in the case in which both the solenoidal and the potential components of the flow display a Kolmogorov scaling. Intriguingly, we find that there exist cases, when the potential part is smoother than the solenoidal part, in which an increase in compressibility can actually increase the dynamo growth rate. Unfortunately, this result remains a curiosity because we are not aware of any realistic flow that corresponds to such a case. If the potential component of the flow displays the Burgers scaling, we also show that the critical Reynolds number is lower than when both the solenoidal and the potential components display the Kolmogorov scaling. Finally, by considering the same model in general d dimensions, we realize that the dimension d = 3 is special; for all other values of d the critical value of the exponent depends on the degree of compressibility of the flow; in particular, increasing the compressibility lowers the critical value of the exponent.
The Kazantsev model is a rare example of flow allowing an analytical study of the turbulent dynamo effect. For this reason, it has attracted a lot of attention in the literature, and several extensions of the model have been proposed to include certain properties of a turbulent velocity field that were not present in the original version. One should of course be careful in translating lessons learned from Kazantsev-type models to real astrophysical flows or to direct numerical simulations. These models are indeed limited by their white-in-time nature and their Gaussian statistics. Direct quantitative comparison between such models and simulations may therefore not be straightforward. Our results are furthermore limited by the fact that we consider a vanishing magnetic Prandtl number. Care should also be taken in extending the results for the temporal growth of the second moment of the magnetic field to moments of a different order (Seshasayanan & Pétrélis, 2018). Nevertheless, the large literature on Kazantsev-type models shows that the qualitative insight gained from the study of such models is robust.
The above procedure involves the parameters N and µ and a convergence threshold. N and µ must be chosen in such a way as to accurately resolve U_eff(ρ), namely the repulsive barrier at ρ = 1, the potential well near the diffusive scale, and the decay of the potential to zero at ρ = 0. In particular, the choice of the values of N and µ requires some care for ξ near 2, since the effective potential decays to zero slowly in that case. In our simulations we varied N between 5 × 10^4 and 10^7 and µ between 10^−5 and 5 × 10^−2 according to the values of ξ and ℘. The convergence threshold needs to be sufficiently small, especially for ξ or ℘ close to 1, because the convergence to the theoretical eigenvalue may be rather slow for these values of ξ and ℘. In our simulations it was varied between 10^−14 and 10^−8.
Appendix C: Structure functions for a finite Re_m
Following Monin & Yaglom (1975), we introduce a finite Re_m in the Kazantsev model by assuming that the kinetic-energy spectrum has the form given in equation (25). Such a spectrum can be obtained by a suitable choice of the longitudinal and normal three-dimensional kinetic-energy spectra. The structure functions corresponding to these spectra are given in Monin & Yaglom (1975, p. 108) (we remind the reader that our definition of the structure functions differs from that of Monin & Yaglom (1975) by a factor of 2).
For r ≪ L, S_L(r) and S_N(r) reduce to the functions given in equations (20) and (21), with coefficients D_S and D_P fixed by A_S and A_P. The structure functions for the case in which the solenoidal and potential parts of the velocity field have the same scaling exponent can be derived from the above expressions by setting ξ_S = ξ_P = ξ.
Residual attractive force between superparamagnetic nanoparticles
A superparamagnetic nanoparticle (SPN) is a nanometre-sized piece of a material that would, in bulk, be a permanent magnet. In the SPN the individual atomic spins are aligned via Pauli effects into a single giant moment that has easy orientations set by shape or magnetocrystalline anisotropy. Above a size-dependent blocking temperature $T_{b}(V,\tau_{obs})$, thermal fluctuations destroy the average moment by flipping the giant spin between easy orientations at a rate that is rapid on the scale of the observation time $\tau_{obs}$. We show that, despite the vanishing of the average moment, two SPNs experience a net attractive force of magnetic origin, analogous to the van der Waals force between molecules that lack a permanent electric dipole. This could be relevant for ferrofluids, for the clumping of SPNs used for drug delivery, and for ultra-dense magnetic recording media.
Introduction
In many areas of physics, forces are effectively suppressed in the interaction between separated fragments of matter, because of the neutrality of each fragment with respect to the appropriate charge quantity. Nevertheless "residual" forces still occur between these fragments, typically with a decay (as a function of the spatial separation D between the fragments) that is different from that of the "bare" interaction.
For example, ordinary matter consisting of atoms and molecules is typically neutral with respect to electrical charge, but two well-separated charge-neutral fragments always experience at least the van der Waals or dispersion interaction. This is a residual force that arises because the zero-point motions of the electrons on the two fragments are correlated via the Coulomb interaction, leading to a non-zero time-averaged force of Coulombic origin, despite the overall charge neutrality of each fragment. For neutral molecules separated by a distance D, this leads to an interaction energy varying as −D^−6. This is to be compared with the bare Coulomb interaction proportional to Q_1 Q_2 D^−1 that acts between fragments with nonzero electric charges Q_1, Q_2. (D^−6 is replaced by D^−7 when D is large enough that retardation of the electromagnetic interaction needs to be considered [1,2].)
Similarly, the nuclear force between two nucleons has sometimes been regarded as a residual color interaction between color-neutral objects.
Here we propose a similar residual force, of magnetic dipolar origin, acting between two "superparamagnetic nanoparticles (SPNs)". By this we mean that each nanometre-sized particle is composed of a material that is ferromagnetic in its bulk state [3,4]. Typically at the temperatures of interest, the elementary electron spins inside an individual nanoparticle remain locked together by the microscopic exchange interaction, yielding effectively a single giant spin with a magnetic dipole moment d_0. If the directions of the giant moments remain steady over time, two such nanoparticles experience a conventional magnetic dipole-dipole energy proportional to d_0^(1) d_0^(2) f(θ_1, φ_1; θ_2, φ_2)/R^3. Here R is the spatial separation of the nanoparticles, and f is a dimensionless function of the angles between each fixed moment and the vector R joining the spatial locations of the nanoparticles. However each particle has one or more "easy axes" in directions determined by magnetocrystalline or, more typically, shape anisotropy. The latter effect arises from the strong angular dependence of the magnetostatic self-energy of a non-spherical magnetised particle. We will consider the simplest case, in which the particle is sufficiently elongated that it has a single easy axis, i.e. dominant uniaxial shape anisotropy. Then the energy of a single nanoparticle is lowest when its giant spin (dipole moment, d) lies parallel or antiparallel to this easy axis. Because the energy barrier E_0 for rotation of d between easy orientations (not mechanical rotation of the particle) derives from the magnetic self-energy of the nanoparticle, it decreases with decreasing volume of the nanoparticle. For very small particles, therefore, the projection of d on a measurement axis averages to zero over time, because of repeated thermal flipping of the giant spin [3], caused by thermal agitation from the heat bath (e.g. a fluid or solid matrix) that surrounds the nanoparticle. Thus on time average the nanoparticle is "neutral", i.e. it has a zero magnetic moment.
When the thermal agitation of the giant spin is insufficient to flip it between easy orientations within the observation time, τ_obs, the nanoparticle is "blocked", i.e. apparently frozen as to its magnetism. This occurs below the blocking temperature of this nanoparticle, T_b, which depends on E_0 and therefore on the volume V of the nanoparticle. If the relaxation time of d over the barrier E_0 is τ, then T_b is defined by τ(T_b) = τ_obs. Blocking is thus a purely dynamic phenomenon: extending the observation time, or lowering the frequency, lowers T_b, and vice-versa [3]. For the present case of SPNs suspended in a fluid, the observation time τ_obs will be a relevant time for mechanical motion of the SPN through the fluid, e.g. a rotational or translational diffusion time. Note that the direct dipolar magnetic interaction between SPNs could in principle lead them to clump. However, when T > T_b the motion of the SPNs through the fluid will not "see" the bare dipolar magnetic interaction between the SPNs, as it has been averaged away between attractive and repulsive values during the thermal flipping of the spins. It could lead to additional Brownian-type damping and diffusion of course, but we show here that there is also a net attractive force between SPNs even above the blocking temperature.
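For readers who want the quantitative form of this statement, the relations below are the standard Néel–Arrhenius estimate (supplied here for convenience; the text itself does not write them out, so the attempt time τ_0 is an assumed auxiliary parameter):

```latex
\tau(T)\;\simeq\;\tau_{0}\,e^{E_{0}/k_{B}T},
\qquad
\tau(T_{b})=\tau_{\mathrm{obs}}
\;\Longrightarrow\;
T_{b}\;\simeq\;\frac{E_{0}}{k_{B}\,\ln(\tau_{\mathrm{obs}}/\tau_{0})}.
```

Since E_0 scales with the particle volume V, this reproduces the size dependence T_b(V, τ_obs) quoted in the abstract.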
The destruction of the permanent magnetic moments by thermal fluctuations is highly undesirable in the case of a magnetic data recording medium, where very fine magnetic particles in the nanometre size range will be needed in order to pack the magnetically stored data as densely as possible for the next generation of devices. The thermal destruction of the permanent moments means that data cannot be stored over long times.
On the other hand, as will be discussed below, the same thermal flipping occurring for T > T_b is beneficial in the case of nanoparticles deliberately suspended in human blood as carriers for drug or thermal therapies, since now the clumping of the nanoparticles from the magnetic dipole interaction is suppressed because each particle has effectively a zero magnetic moment. The strong clumping that would occur for fully ferromagnetic particles from their R^−3 dipole-dipole interactions could be clinically dangerous, potentially causing blockage of blood vessels, difficulty of elimination, etc. We will show below, however, that despite the vanishing of the average individual moments, there is a residual attractive interaction between two superparamagnetic nanoparticles separated by distance R, that falls off as (const)/R^6. It is the magnetic analog of the van der Waals or dispersion force that arises via the Coulomb interaction between fluctuating electric dipoles on two electrically neutral molecules [5] lacking permanent dipole moments. This residual force could also lead to clumping of the nanoparticles, and so its analysis could be significant in modern magnetic-particle therapies [4].
Simple preliminary model
The model described here is based on an argument frequently used to explain the attractive van der Waals energy proportional to −R^−6 that arises between temporary electric dipoles occurring on a pair of electrically neutral atoms separated by a distance R (see e.g. [5]). It is not a rigorous derivation, but may help to elucidate the more careful and general mathematical treatment to be provided in later Sections. Consider two superparamagnetic nanoparticles SPN1 and SPN2 as defined above. While averaging to zero over time as described above, the magnetic moment d^(1) on SPN1 can exhibit a short-lived thermal (or quantal) fluctuation so that its value d^(1)(t) is nonzero at some particular time t. For simplicity we will assume that only magnetizations of SPN1 and SPN2 along one axis (say ẑ) are possible, so that d^(1)(t) = d^(1)(t) ẑ, and we will consider the case that the spatial separation R between SPN1 and SPN2 is parallel to x̂. Then the spontaneous moment d^(1)(t) produces a dipolar magnetic induction (B-field) at the position of SPN2. Responding to this field, SPN2 produces its own magnetic moment d^(2)(t), proportional to χ̄^(2) times this field, where χ̄^(2) is the dynamic magnetic susceptibility of SPN2, assumed for now to represent an instantaneous response to the field. (Note that here χ̄ represents the response of the total magnetic moment of the SPN to a small applied magnetic induction b. By contrast, the symbol χ is normally used for the response of the magnetic moment per unit volume to a small applied magnetic field h. Thus for a single SPN of volume V, χ̄ is proportional to V χ.) The dipole d^(2) in turn produces a dipolar magnetic induction back at the position of SPN1. The interaction energy of this back-field with the original moment d^(1)(t) has a time or thermal ensemble average which is non-zero because ⟨(d^(1)(t))²⟩ ≠ 0 even though ⟨d^(1)(t)⟩ = 0. This negative energy produces, upon differentiation with respect to R, a net time-averaged attractive force between SPN1 and SPN2 that falls off as R^−7.
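Schematically (a sketch added here that suppresses the tensor structure of the dipole field and all numerical prefactors, exactly as the verbal argument above does; b^(12) denotes the field of SPN1 at SPN2 and b^(21) the back-field of SPN2 at SPN1), the chain of steps reads:

```latex
b^{(12)}\sim\frac{d^{(1)}(t)}{R^{3}},\qquad
d^{(2)}(t)\sim\bar{\chi}^{(2)}\,b^{(12)}\sim\frac{\bar{\chi}^{(2)}d^{(1)}(t)}{R^{3}},\qquad
b^{(21)}\sim\frac{d^{(2)}(t)}{R^{3}}\sim\frac{\bar{\chi}^{(2)}d^{(1)}(t)}{R^{6}},
\\[4pt]
\langle E\rangle\sim-\big\langle d^{(1)}(t)\,b^{(21)}\big\rangle
\sim-\frac{\bar{\chi}^{(2)}\big\langle\big(d^{(1)}(t)\big)^{2}\big\rangle}{R^{6}},
\qquad
F=-\frac{\partial\langle E\rangle}{\partial R}\propto\frac{1}{R^{7}}\;(\text{attractive}).
```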
The above simplified theory produces the basic physics and the R^−7 force, but it glosses over a number of issues, such as the role of entropic effects at finite temperature, the tensor nature of the magnetic dipole-dipole interaction, the quantal aspects of the problem, and the retardation of the electromagnetic field. Also, the response χ̄^(2) has been assumed to be instantaneous, whereas there can be a strong and important frequency dependence (time-delayed aspect) to the linear response of an SPN. All of these considerations are treated in detail in the theory given in the next Section.
Detailed theory
The magnetic dipolar energy (Hamiltonian) between two particles with magnetic dipoles d^(1) and d^(2), separated in space by a nonzero vector R = R R̂, is of the standard dipole-dipole form, proportional to [d^(1)·d^(2) − 3(d^(1)·R̂)(d^(2)·R̂)]/R³. We assume that we are above the blocking temperature, T > T_b, i.e. that the temperature is high enough (compared with the anisotropy energy barrier) that each isolated giant magnetic dipole has zero thermal expectation taken over the time-scale of interest, ⟨d^(1)⟩ = ⟨d^(2)⟩ = 0. The theory to be developed here is meaningful provided that the thermal fluctuations of the moment occur on a time-scale τ that is short compared to the time τ_obs ≡ T_mech for the nanoparticle to change its spatial position (or physical angular orientation) appreciably within its fluid medium. Under these conditions we will derive a residual attractive force between the two superparamagnetic nanoparticles, that could for example be used to study residual clumping effects in fluid suspension at temperatures above the blocking temperature.
The quantum-thermal expectation, denoted ⟨ ⟩, of the interaction energy between the giant spins is E = ⟨H^(12)⟩. However, at finite temperature it is not this energy but the corresponding thermal Helmholtz free energy, A = E − TS, that must be considered, where S is the entropy. We achieve an expression for A via a Feynman-theorem argument in Appendix A for a classical treatment of the fluctuations, and in Appendix B for the fully quantal case. In either case the result is dA/dλ = E_λ/λ. Here the subscript λ means that the quantity is evaluated in the thermal ensemble with modified interaction λH^(12)(R). Since the coupling will be zero (equivalent to λ = 0) at infinite separation R → ∞, we can write Eq (4) as an expression for just the free energy of interaction between the two nanoparticles, A_int(R) = ∫_0^1 (dλ/λ) E_λ(R). The problem now reduces to the calculation of the equal-time cross-correlation function ⟨d_i^(1) d_j^(2)⟩_λ between the moments in a thermal ensemble with λ-reduced interaction.
The equal-time correlation function ⟨d_i^(1) d_j^(2)⟩_λ can be recovered from its time Fourier transform. We can use the finite-temperature fluctuation-dissipation theorem (see e.g. [6]) to relate the fluctuation quantity ⟨d_i^(1) d_j^(2)⟩_ω to the dissipative part of the cross-susceptibility of the combined interacting system, defined in Eq (10) below. This can also be expressed as a Matsubara sum by closing upwards in the complex ω plane, using Cauchy's theorem to obtain a sum of residues at the poles ω_n = iu_n = i 2nπ/(βℏ), but we will not make explicit use of this here.
The interaction energy E and Helmholtz free energy A then become frequency integrals over this fluctuation spectrum. We assume we know the dipole responses χ̄_ij(ω) of each isolated giant dipole, SPN1 and SPN2, to an external B field b. These individual responses must express the known superparamagnetic properties of the individual systems. In general they should also describe any Brownian tumbling aspects of the response, in the case that the time scale of these tumbling motions overlaps that of the magnetic response behaviour of each SPN. For now we assume that the tumbling is slow, so that only the magnetic response of an SPN held in a fixed spatial orientation is required to appear in χ̄. (The interaction energy may of course depend on the details of this orientation, which will be manifested in the particular values of χ̄_ij^(12) in the chosen Cartesian frame.) In Appendix C we discuss a simple model for the χ̄ of a single isolated SPN. However, to calculate the interaction of two SPNs, Eq (8) requires knowledge of the cross-response function (cross-susceptibility) χ̄_λ^(12) for the interacting pair of SPNs. This is defined, in Eq (10), as the linear response of SPN1's moment to an alternating B field that acts upon SPN2 only, where the subscripts i, j label Cartesian components of the vectors. To calculate χ̄^(12) we now consider the slightly more general situation where independently specified small external B fields b^(1) and b^(2) are applied to the individual dipoles, in the presence of the dipolar coupling between the two systems.
In time-dependent mean-field theory (RPA), the equations of motion of the coupled systems, Eqs (11) and (12), hold at arbitrary frequency ω (with the Einstein summation convention for repeated indices). These equations describe the evolution of each giant spin in an effective B field containing a time-dependent contribution due to the polarization of the other giant spin. Using (12) to eliminate d_β^(2) in (11), we obtain the response of SPN1 to the applied fields. Then for b_µ^(1) = 0 (i.e. an external oscillating B field applied only to moment SPN2) we obtain the cross-response. This becomes simpler if we have strictly uniaxial responses of the individual spins along (say) the x axis, i.e. χ̄_ij^(1)(ω) nonzero only for i = j = x,
and similarly for χ̄^(2). Then we can ignore the 2 and 3 components of d and need only solve a scalar equation, giving the cross-susceptibility of Eq (14) and then, from (8) and (9), the corresponding free energy of the residual interaction, Eq (15), together with the corresponding force between SPN1 and SPN2. Eq (15) is valid for the uniaxial case but is readily generalized: there is in general a sum of logarithms of the eigenvalues of the matrix 1 − R^−6 T χ̄^(1) T χ̄^(2). Note that both χ̄^(1) and χ̄^(2) in (14, 15) are frequency-dependent. If the denominator of (14) vanishes for some frequency ω_0j then we have a finite oscillation of the magnetic moments for zero driving field, i.e. a free magnon collective oscillation mode of the coupled giant spins. Indeed, the free energy (15) can be related to a sum of the thermal free energies of these magnons. Actually, for the present model, namely χ̄(ω) = χ̄_0/(1 − iωτ) (see Appendix C), these frequencies will have a large imaginary part (damping), so there are really no magnons in the absence of an applied DC magnetic field. The exception is the case ω ≈ 0, where the damping vanishes. If one of the magnon frequencies vanishes, ω_0j = 0, then we have an instability and the system will try to "freeze in" the magnon. This means that the denominator in (14) vanishes for zero frequency, which, as the coupling is increased, will happen first for λ = 1, i.e. at the full physical coupling.
In time-dependent mean-field theories such as this one, such behaviour is usually taken to indicate a transition to a broken-symmetry state; in this case the moments presumably freeze into a permanent ordering in the antiparallel configuration.
Energy in second order (weak coupling)
Note that if we only want the energy to second order in the interaction then from (8) we only need χ̄^(12) to first order in T_ij, so we can take ε_λ = I in (13), giving χ̄^(12) to leading order, so that (8) becomes a simple frequency integral, since ∫_0^1 λ² (dλ/λ) = 1/2. The R^−6 dependence is apparent, and the expression simplifies further in the case of uniaxial response. From Appendix C, a simple model for a superparamagnetic susceptibility is χ̄(ω) = χ̄_0/(1 − iωτ), and the frequency integral I in (22) can be estimated analytically in two limits depending on the thermal flipping time τ of the giant spins (see Eqs (36) and (35) of Appendix C). This gives a residual free energy, Eq (24), and a residual van-der-Waals-like force, Eq (25).

5 Orders of magnitude
SPNs below the blocking temperature
First consider the energy and force of interaction between two SPNs below their blocking temperature, so that each has a permanent magnetic moment of magnitude d_0 = nµ_B. At separation R the direct dipole-dipole energy is dependent on orientation but is of order µ_0 d_0²/(4π R³). For example, if n = 1000 and R = 1 nm, E_direct ≈ 10^−20 J. At T = 300 K the thermal energy is k_B T_room = 4 × 10^−21 J, so E_direct ≈ 2 k_B T. Thus if the two SPNs are not thermally suppressed at T = 300 K and are able to approach to within a nanometer, they will not be prevented by thermal effects from rotating to the antiparallel configuration and binding (clumping). The corresponding force F_direct between the SPNs is highly orientation-dependent but is of order E_direct/R. For n = 1000 and R = 1 nm this gives a force of order 20 pN, which is small but should be directly detectable via Atomic Force Microscopy (AFM) with single SPNs attached to substrate and tip.
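As a quick numerical cross-check of these magnitudes, the following sketch (added here, not the authors' calculation) evaluates the dipole-dipole estimate with the prefactor taken as µ_0 d_0²/(4π R³); this reproduces the quoted numbers to within order unity.

```python
import math

mu0_over_4pi = 1e-7           # T*m/A (SI)
mu_B = 9.274e-24              # Bohr magneton, J/T
k_B = 1.381e-23               # Boltzmann constant, J/K

n, R, T = 1000, 1e-9, 300.0   # giant moment in units of mu_B, separation in m, temperature in K
d0 = n * mu_B

E_direct = mu0_over_4pi * d0**2 / R**3       # ~ 9e-21 J ~ 1e-20 J ~ 2 k_B T
F_direct = 3 * mu0_over_4pi * d0**2 / R**4   # ~ 2.6e-11 N ~ 20 pN (derivative of R^-3)

print(E_direct, "J,", E_direct / (k_B * T), "k_B T")
print(F_direct * 1e12, "pN")
```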
SPNs above the blocking temperature
Now consider a similar system but with a blocking temperature below room temperature, so that at 300 K there are no permanent moments. Then the vdW-like theory derived above gives the free energy of interaction. For numerical estimates we assume uniaxial susceptibilities and work in the weak-coupling limit. We also assume that the giant spins have a zero-frequency susceptibility χ̄ corresponding to a giant moment of n Bohr magnetons. Then (24) gives the residual free energy. For example, let n = 1000, R = 1 nm, T = 300 K, and ℏτ^−1 ≪ k_B T. Then A_residual ≈ −2 × 10^−7 (1000)^4 (300)^−2 k_B T = −2 k_B T. This means that for the present case the residual energy predicted by the perturbative theory is about the same as the direct energy (26), which is unphysical and simply means that the weak-coupling condition is not met and we need (at least) the full RPA theory here (Eq. (15)). If we are in the limit ℏτ^−1 ≫ k_B T the residual interaction will be even larger. In this case the system of two SPNs, despite the thermal averaging of an individual SPN, is most probably near to a transition to a spin-locked configuration. In the RPA theory the onset of this condition would correspond to a zero denominator in (14), which occurs for separations smaller than a locking distance R_lock. For the present case with n = 1000 and T = 300 K, the crossover occurs at about R_lock = 1 nm, which is consistent with the above finding that the perturbative calculation of the attraction at this separation was unphysical.
To give another example, suppose that n = 100, R = 10 nm, T = 300 K, and ℏτ^−1 ≪ k_B T. Then A_residual ≈ −2 × 10^−7 (100)^4 (300)^−2 (1/10)^6 k_B T = −2 × 10^−10 k_B T, whereas the direct interaction between permanent moments under the same conditions, from (26), is E_direct = 8.1 × 10^−27 × (100)² × (1/10)³ J ≈ 2 × 10^−5 k_B T. So for this example, neither the direct nor the residual interaction would tend to lock the SPNs into an antiferromagnetically aligned pair. The mechanical forces on the SPNs due to the spin-spin interaction, in either the direct or the thermally smeared residual case, would be negligible in the context of normal Brownian motion and thermal processes. Furthermore, under these same conditions the demagnetization field inside a single SPN might be significant, so that the SPN would no longer contain a single domain as assumed so far. At larger separations R the binding energy falls off as R^−3, and so the direct magnetic energy, as R is increased, will soon be less than the thermal energy k_B T. The interaction at these larger separations will not immediately cause binding, but may well determine the kinetics of closer approach between nanoparticles, resulting ultimately in clumping when shorter separations are attained. This process is complicated by the strong orientational dependence of the direct interaction (3). SPNs will tend to rotate mechanically within the fluid, in order to minimize the free energy in the "antiferromagnetic" relative orientation, after which their mutual force is attractive. Thus the kinetics of clumping will be far from straightforward.
If clumping is undesirable, the n² R^−3 dependence of the direct SPN-SPN magnetic binding energy suggests that smaller SPNs (e.g. n = 100) will be desirable because they are less susceptible to clumping, i.e. they can approach to smaller distances (e.g. R = (10^−2)^(1/3) nm ≈ 0.2 nm) before clumping occurs.
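The distance quoted here follows from holding the binding energy n²/R³ fixed while reducing n from 1000 to 100; the following line (added as a check) makes the arithmetic explicit:

```latex
\frac{n_{1}^{2}}{R_{1}^{3}}=\frac{n_{2}^{2}}{R_{2}^{3}}
\;\Rightarrow\;
R_{2}=R_{1}\left(\frac{n_{2}}{n_{1}}\right)^{2/3}
=1\,\mathrm{nm}\times\left(\frac{100}{1000}\right)^{2/3}
=\left(10^{-2}\right)^{1/3}\,\mathrm{nm}\approx 0.22\,\mathrm{nm}.
```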
In fact, at such small separations R, the point dipole approximation used here may break down, softening the interaction and possibly leading to the conclusion that the binding energy even at contact is less than the thermal energy. This would imply minimal clumping.
(ii) Consider SPNs in suspension at T = 300 K, with n = 1000, but now with T_b < 300 K. Here, despite the thermal suppression of the net individual moments, there is a uniformly attractive residual magnetic SPN-SPN free energy A_residual. This varies as n^4 R^−6 within the perturbative approximation (see (29)), and so becomes much weaker than the direct interaction at large separations R. However, at shorter separations, the stronger n and R dependence of the perturbative residual energy expression (29) suggests that E_resid could exceed E_direct. This is of course unphysical: the correlations between the orientations of giant moments that give rise to E_resid cannot be greater than perfect correlation, corresponding to the direct interaction in the antiferromagnetic configuration of the two giant moments. Thus in general E_resid ≤ E_direct,max. In fact the perturbative approximation breaks down in the small-R regime, and the full RPA expression (15) will be needed instead of (29). We do not yet have analytic energy and force expressions in this regime. However it is clear that this approach can yield a residual interaction E_resid of a strength approaching E_direct,max. It seems likely, therefore, that because of the residual interaction there will not always be a discontinuous cessation of clumping as the temperature is raised above the blocking temperature T_b. However the direct interaction can be repulsive whereas the residual interaction is always attractive, so there is scope for some quite rich behaviour.
7 Prospects for experimental verification of the theory

7.1 Direct measurement of the force between two individual SPNs

In Section 5.1 above, the direct interaction between permanently magnetized SPNs with n = 1000 at separation R = 1 nm was estimated to exceed k_B T_room, and the force was estimated as 20 pN. A force of this magnitude is likely to be observable, with some care, via Atomic Force Microscopy. The simplest configuration might involve one SPN attached to a non-magnetic substrate, and another SPN attached to the AFM tip. One could then measure the force as a function of temperature. One might expect a reduction in the measured force as T is increased above T_b. As discussed above, the force could even change from repulsive to attractive, depending on the initial orientation of the giant moments prior to heating and subsequent destruction of the net moments. The need for a measurably large force puts us out of the perturbative regime for the residual interaction, so more straightforward but messy theoretical work will be required in order to predict the way in which F varies with distance and temperature near (R, T) = (1 nm, T_b). It is not clear whether the force will be large enough for AFM detection in the regime of larger separations where the perturbative analysis (29) is valid.
Indirect measurement via observation of structure factors in fluid suspension
Here we propose (e.g.) small-angle X-ray diffraction measurements on SPNs in suspension in a viscous fluid such as glycerine. The metallic SPNs should provide good X-ray contrast. The measured structure factor of the array of SPNs should reveal evidence of positional correlations between the SPNs, which in turn is related to the forces between the SPNs as predicted here. Again, one hopes to see some changes as the temperature is raised through the blocking temperature T_b.
Magnetic resonance experiments
Although the present theory did not predict any lightly damped magnons (combined oscillations of the magnetic moments) for a pair of adjacent SPNs, there might be the possibility of such modes if a strong DC magnetic field is applied. Magnetic resonance experiments might then be able to detect shifts in the single-SPN resonance frequency due to the proximity of a pair of SPNs. Even without the external DC field, an analysis of the linewidth of the zero-frequency "resonance" might reveal information about SPN-SPN coupling.
Summary and future directions
We have predicted a residual force between superparamagnetic nanoparticles that persists above the blocking temperature. The force is the magnetic analogue of the electrically-driven van der Waals interaction between electrically neutral molecules. Our theory also deals with the dynamic spin response of coupled SPNs to small ac external magnetic fields. Our results may be experimentally testable, and may have implications for ferro-fluids, for nanoparticle-based medical therapies, and for magnetic recording technology. The new force is most likely to be significant for nanoparticles that approach one another quite closely, at separations of O(nm). At these separations the point-magnetic-dipole approximation used here will need to be replaced by a theory that attributes a finite spatial size and definite physical shape to the nanoparticles. A good starting model will be an ellipsoidal shape, and fortunately the full electrodynamic theory of Casimir interactions is quite well developed for this geometry. A theory along these lines will be the next step.
9 Appendix A: How to deal with the entropic part (classical angle-distribution approach)

The joint state of two interacting superparamagnets is specified by a classical distribution f^(2)(Ω_1, Ω_2) in the two solid angles Ω_1, Ω_2 defining the spatial directions in which the two giant spins point. The reduced-strength interaction λE between the superparamagnets is given by (6). Then, from general thermodynamic principles, at a given temperature T, coupling strength λ and separation R, the correct distribution f^(2)_λ(T, R) is the one that minimizes the trial free energy, so that the corresponding functional derivative, Eq (30), is zero. Consider an infinitesimal increase in the coupling strength from λ to λ + ∆λ.
As a result, f^(2) changes by an amount ∆f^(2), and noting that E = ⟨H⟩_λ = E_λ/λ, we have a resulting change in A of ∆A = ∆λ E_λ/λ + 0, where the zero comes from (30). Notice that we only have to know the interaction E, and not the entropic part, to find the change in A.
Then the change in A in switching on the interaction adiabatically is the integral of this expression over λ from 0 to 1, i.e. A_int = ∫_0^1 (dλ/λ) E_λ. We have already shown how to calculate E_λ by using the fluctuation-dissipation theorem and the mean-field (RPA) assumption. Also note that the λ = 0 value of the free energy is independent of the separation R. Thus the entire R dependence of A(λ = 1, R, T) is captured by the integral above.

10 Appendix B: How to deal with the entropic part (fully quantal approach)

Our quantum mechanical basis (NOT the eigenstates) for the combined magnetic state of the two systems together consists of the factorised states |ij⟩ = |i⟩|j⟩,
where the first ket refers to the quantum state of SPN1 and the second ket to that of SPN2. The thermal density matrix operator of a pair of magnetically interacting nanoparticles has matrix elements in this basis denoted by ρ^(2)_ij:kl, and traces can be taken over this or any other basis with the same result.
We consider starting from the thermal equilibrium of two isolated nanoparticles, and consider the effect on the free energy of turning on the interaction by replacing the inter-nanoparticle interaction Hamiltonian Ĥ^(12)(R) by λĤ^(12)(R), and then increasing λ from 0 to 1 while holding the inter-particle separation R fixed.
Then the first-order change in the equilibrium free energy, when the coupling is increased from λ to λ + ∆λ, is ∆A = ∆λ Tr[Ĥ^(12)(R) ρ_λ] + λ Tr[(δA/δρ^(2)) ∆ρ^(2)] = ∆λ E^(12)_λ/λ + 0. Then the change in free energy in switching on the interaction between the two systems is A_int = ∫_0^1 (dλ/λ) E^(12)_λ. The same formula can be derived for the classical case, by considering a pair distribution f(Ω_1, Ω_2) of angular orientations Ω of the two giant moments (see Appendix A).
Depression and its associated factors among prisoners in East Gojjam Zone prisons, Northwest Ethiopia: a multi-centered cross-sectional study
Background Little is known about the prevalence of and risk factors for depression in this vulnerable population around the world, including Ethiopia. Furthermore, information on the health of inmates is limited. The study sought to assess the prevalence of depression and its associated factors among prisoners in the East Gojjam Zone of Northwest Ethiopia. Methods An institution-based cross-sectional study was conducted in East Gojjam Zone prisons. Data were gathered from 462 eligible prisoners who were chosen using a computer-generated simple random sampling technique. The nine-item Patient Health Questionnaire (PHQ-9) was used to assess each individual's depression level. The information was entered into Epi-Data Version 4.2 and exported to STATA Version 14.1 for further analysis. Variables with P < 0.05 in the multivariable binary logistic regression were considered significant. Results In this study the prevalence of depression among prisoners was 50.43% (95% CI 46–55%). Having work inside prison (AOR 0.6, CI 0.37–0.96), having no history of mental illness (AOR 0.37, 95% CI 0.16–0.85), having a monthly income greater than 1500 birr (AOR 0.16, CI 0.05–0.5), not thinking about life after prison (AOR 0.4, 95% CI 0.27–0.64), and being sentenced for more than 5 years (AOR 2.2, CI 1.2–4) were significantly associated with depression. Conclusions According to this study, half of the prisoners in East Gojjam Zone prisons had depressive symptoms. Prisons should place a greater emphasis on the mental health of prisoners who have been sentenced for a long time, those who have a history of mental illness, and those who have no work in the prison. Supplementary Information The online version contains supplementary material available at 10.1186/s40001-022-00766-0.
Introduction
Depression is a psychiatric disease characterized by a gloomy mood, a lack of interest or pleasure in activities, and a loss of energy that lasts 2 weeks or longer.
Symptoms may include changes in appetite, weight, sleep, and motor activity, as well as feelings of worthlessness or guilt, difficulty thinking, concentrating, or making decisions, and persistent thoughts of death or suicide plans or attempts [1]. Depression entails more than simply feeling down or going through a difficult time. It is a major mental health problem that necessitates awareness and treatment. Depression, if left untreated, may be disastrous for both the person suffering from it and their families. Mental diseases are prevalent in almost every country on the planet. In addition, it is estimated that 450 million people around the world suffer from mental illnesses [2,3]. Several studies conducted in various countries consistently reported a gradual increase in depression morbidity across the general population. Depression affected over 350 million people worldwide. The burden of depression in prison far outweighs that in the general population. Prisoners all over the world are at a high risk of morbidity and victimization. Prisoners become depressed as a result of their living conditions, overcrowding, restrictions and lack of freedom, and other factors. Depression is driven by loss of freedom, insecurity about future prospects, insufficient health services (particularly mental health services), a lack of meaningful activities, a lack of social support, and interdependence [4–7].
When prisoners are afflicted with depression symptoms on a regular or chronic basis, it can lead to suicide. According to one study, one million people die each year as a result of suicide. This equates to 3000 deaths per day. Around one in every nine prisoners worldwide suffers from common mental health disorders, with depression symptoms being the most common. When compared to the general population, prisoners are 5 to 10 times more likely to suffer from depression [5,[8][9][10][11][12].
Despite the fact that more than two-thirds of incarcerated men and women live in low- and middle-income countries, the vast majority of evidence on mental disorders among prisoners is based on studies from high-income countries, providing implications that are hardly applicable or generalizable to low- and middle-income settings. The prevalence of psychiatric disorders in low-income countries' penal justice systems may differ from that of high-income countries due to resource scarcity and cultural and legal factors [18].
In Sub-Saharan Africa, research on inmates' mental health is limited and largely focused on infectious disorders. Prisoners suffer from depression, which is a common public health issue. Inmates are frequently excluded from national health surveys, despite the fact that inmate populations are expanding, and information about prisoners' mental health status in this subject area is scarce. Although some studies exist in the country, prison administration varies over time and from place to place; moreover, unlike other studies that focus on mental illness in general, this study specifically assessed depression and identified factors related to it.
Study design and period
From March 1 to March 30, 2019, an institution-based cross-sectional investigation was undertaken.
Study area
The research was carried out in jails in the East Gojjam Zone. The Amhara Region of Ethiopia has 11 zones, one of which is the East Gojjam Zone. With an estimated population of 3,800,000 people, the zone encompasses a total area of 1,292.98 km². During the study period, three prisons in the East Gojjam Zone housed a total of 2648 inmates. Debre-Markos town is 298 km from Addis Ababa and 265 km from Bahir Dar; Bichena town is 265 km from Addis Ababa and 185 km from Bahir Dar; and Motta town is 370 km from Addis Ababa and 120 km from Bahir Dar.
Study population
All prisoners held in East Gojjam Zone prisons who were available during the data collection period.
Eligibility criteria
Prisoners who had stayed in prison for at least 2 weeks and did not have a severe illness were eligible.
Procedure for determining sample size
The maximum required sample size was determined by balancing two objectives. The sample size for the first objective was calculated using a single population proportion formula while taking into account the following statistical assumptions: 95% confidence level, 80% power, and 5% margin of error. The depression prevalence (P = 44%) was derived from a previous study [19]. The calculated value of the initial sample size was 379 using the above formula. The sample size for the second objective was calculated using the Epi Info statistical package version 7 software by considering the prior study's significant factors. Power of 80% and a 95% CI were utilized as assumptions. As a result of these assumptions, the highest needed sample size estimated with Epi Info was 422. The total sample size was 464 after accounting for a 10% non-response rate (Table 1).
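As a quick illustration, the single population proportion calculation and the non-response adjustment described above can be reproduced in a few lines of Python. This is a sketch for illustration only; the prevalence, margin of error, and inflation factor are taken from the text, and the Epi Info result for the second objective is simply plugged in as a constant.

```python
import math

def single_proportion_n(p, margin, z=1.96):
    """Sample size for estimating a single proportion: n = z^2 * p * (1 - p) / d^2."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

n_first_objective = single_proportion_n(p=0.44, margin=0.05)  # ~379
n_second_objective = 422                                      # from Epi Info v7 (second objective)
n_required = max(n_first_objective, n_second_objective)
n_final = round(n_required * 1.10)                            # add 10% for non-response -> 464

print(n_first_objective, n_required, n_final)
```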
A proportional fraction of the total sample size was allocated to each prison. The formula n_i = N_i x n/N was applied to the three prisons. The study participants in each prison were then selected by simple random sampling, using a computer-generated random sampling technique (Fig. 1); a sketch of this allocation follows below.
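A minimal sketch of the proportional allocation n_i = N_i x n/N and the computer-generated simple random selection is shown below; the per-prison population counts are hypothetical placeholders that merely sum to the reported 2,648 inmates.

```python
import random

def proportional_allocation(strata_sizes, total_sample):
    """Allocate a total sample across strata in proportion to size: n_i = N_i * n / N."""
    N = sum(strata_sizes.values())
    return {name: round(size * total_sample / N) for name, size in strata_sizes.items()}

# Hypothetical per-prison populations (placeholders summing to 2,648).
prisons = {"Debre-Markos": 1400, "Bichena": 700, "Motta": 548}
allocation = proportional_allocation(prisons, total_sample=464)

# Computer-generated simple random sampling of inmate IDs within each prison.
random.seed(2019)
samples = {name: random.sample(range(1, prisons[name] + 1), k) for name, k in allocation.items()}
print(allocation)
```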
Data collection tools and procedure
Data were collected by five trained psychiatric nurses using a structured questionnaire administered through face-to-face interviews (Additional file 1). The questionnaire had four parts: socio-demographic characteristics; the prison environment of the respondents; the Patient Health Questionnaire-9 (PHQ-9), which contains nine items, each measuring a problem that bothered the prisoner in the last 15 days, scored from 0 (not at all) to 3 (nearly every day) and used to measure depression; and a clinical factors assessment tool. The PHQ-9 is a validated and widely used tool for assessing depression in Ethiopia in clinical and community settings [20][21][22]. The PHQ-9 has good reliability and validity [sensitivity (86%) and specificity (67%)] for diagnosing MDD among Ethiopian adults. The internal consistency reliability was also found to be excellent (0.92), with a reported Cronbach's alpha of 0.81. A threshold of ten on the PHQ-9 was the most appropriate cut-off and offered the optimal discriminatory power in detecting depression [23]. This tool has also been used by previous studies of inmates in Ethiopia and its reliability is confirmed [24][25][26]. Receiver operating characteristic (ROC) curve analysis was done using STATA version 14.1 software to check the recommended cut-off value with maximum sensitivity and specificity. An individual was considered to be in a state of depression if he or she scored 10 or above. Within our data, the ROC area was 0.99935 at a cut-off of 10. The tool validation test was also done in Ethiopia [21], and a cut-off value ≥ 10 to screen depressive symptoms is recommended by different literature [27][28][29]. The OSLO-3, which contains three items, was used to assess social support status [21]. Using the OSLO-3 scale, an individual is deemed to have weak social support if he or she scores 3-8, moderate social support if he or she scores 9-11, and good social support if he or she scores 12-14 [30].
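The scoring rules described above translate directly into code: a PHQ-9 total of 10 or more flags depressive symptoms, and OSLO-3 totals of 3-8, 9-11 and 12-14 indicate poor, moderate and good social support. A minimal sketch with hypothetical item responses:

```python
def phq9_depressed(items, cutoff=10):
    """PHQ-9: nine items scored 0 (not at all) to 3 (nearly every day); total >= cutoff flags depression."""
    assert len(items) == 9 and all(0 <= i <= 3 for i in items)
    return sum(items) >= cutoff

def oslo3_category(items):
    """OSLO-3 social support: total 3-8 poor, 9-11 moderate, 12-14 good."""
    total = sum(items)
    if total <= 8:
        return "poor"
    return "moderate" if total <= 11 else "good"

# Hypothetical respondent
print(phq9_depressed([2, 1, 3, 2, 1, 0, 2, 1, 1]))  # total 13 -> True
print(oslo3_category([4, 3, 3]))                     # total 10 -> "moderate"
```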
Data processing and analysis
Epi-Data version 4.2 was used to enter the data, which were checked for completeness and consistency. The data were then exported to the STATA statistical program version 14.1 for additional analysis. Descriptive statistics, such as frequencies, were calculated and presented using tables, graphs, and text. Binary logistic regression models were used to identify factors linked to depression among prison inmates. The multivariable logistic regression model was fitted with variables that had a P value of less than 0.25 in the bivariable analysis. The Hosmer-Lemeshow goodness-of-fit test was used to assess model fitness, yielding a value of 1.15. Finally, in the multivariable analysis, variables with a P value less than 0.05 at the 95% confidence level were judged to be factors substantially associated with depression.
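The two-stage modelling strategy (bivariable screening at P < 0.25, then a multivariable binary logistic regression with significance at P < 0.05) could be sketched as follows. The data frame, file name and variable names are hypothetical, statsmodels is assumed to be available, and categorical predictors would need dummy coding before use.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def screen_and_fit(df, outcome, candidates, screen_p=0.25):
    """Bivariable screening followed by a multivariable binary logistic regression."""
    retained = []
    for var in candidates:
        X = sm.add_constant(df[[var]])
        fit = sm.Logit(df[outcome], X).fit(disp=False)
        if fit.pvalues[var] < screen_p:
            retained.append(var)
    X_multi = sm.add_constant(df[retained])
    return sm.Logit(df[outcome], X_multi).fit(disp=False), retained

# Hypothetical usage:
# df = pd.read_stata("prisoners.dta")
# model, kept = screen_and_fit(df, "depressed", ["work_inside", "income_cat", "mental_illness_history"])
# print(np.exp(model.params))  # adjusted odds ratios (AOR)
```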
Data quality control
To assure data quality, a pretest was done among 5% of the total sample (22 participants), randomly selected from Finot-Selam town prisoners, to check the understandability and reliability of the questionnaire. The questionnaire was written in English, translated into Amharic, and then back-translated into English to check consistency. Two days of training on the study instrument, the data collection procedure, and the ethical principle of confidentiality were given to the five data collectors and three supervisors. The collected data were reviewed and checked for completeness and relevance by the supervisors and the principal investigator each day.
Ethical consideration
The ethical clearance was obtained from the ethical review committee of the University's Health Science College. Permission letters were also obtained from each of the three prison administration offices. Each study participant provided informed verbal consent prior to data collection. Furthermore, the information gathered from the respondent prisoners was used solely for research purposes and was kept strictly confidential. Furthermore, the names of study participants were not included to maintain confidentiality. Prisoners who became severely depressed during the interview were referred to health services and contacted by guidance counseling personnel at the prison.
Results
A total of 464 prisoners were sampled for inclusion in the study, and 462 (99.5%) agreed to participate. The respondents' median age was 30 years, with an interquartile range (IQR) of 24-40 years. The majority of the respondents, 411 (88.96%), were men. Most of those who responded were Orthodox Christians (95.88%). Of the 462 respondents, 123 (26.62%) could not read or write and 22 (4.76%) had a diploma or higher educational level. Three hundred nine (66.88%) of the 462 respondents were from rural areas. The majority of respondents, 452 (97.84%), were Amhara by ethnicity. More than half of the study participants, 274 (59.31%), were married. In addition, 276 (59.74%) had children (Table 2).
Prisoner's prison environment related characteristics
Of the 462 study participants, 53 (11.47%) were awaiting trial and 10 (2.16%) had been sentenced to life in prison. By type of crime, 193 (41.8%) of respondents had been sentenced for murder. Among respondent prisoners who knew their sentence length, 209 (51.9%) had been sentenced to more than 5 years. One hundred ninety-seven (42.64%) of the prisoners refused to accept the reason for their incarceration. In addition, of those who knew their sentence length, 243 (59.41%) did not accept the total penalty years. The study participants' median time in prison was 5.6 years (IQR = 2-15). Approximately 232 (50.22%) of the prisoners were concerned about the difficulties they would face after being released from prison, and 286 (61.9%) of the total respondents worked in prison. Participants with poor social support numbered 227 (49.13%).
Clinical factors characteristics of prisoners
Of the 462 respondents, 133 (28.79%) had a chronic physical illness, 42 (9.09%) had a history of mental illness, and 47 (10.17%) had a family history of mental illness.
Life time substance use characteristics of prisoners
According to the study, approximately half (50%) of the respondents had a history of lifetime alcohol use. About 41.99% of the alcohol consumed was a local drink ("Tela"). Approximately 7.6% and 6.7% of respondents had a lifetime history of khat and cigarette use, respectively.
Prevalence of depression
Based on the PHQ-9 assessment, approximately half (50.43%) of study participants were identified as having depressive symptoms in the previous 2 weeks. In terms of severity, approximately 15.8% of participants had minimal depression, while approximately 11.04% had severe depression (Fig. 2).
Factors associated with depression among prisoners
Variables such as residency, work inside the prison, monthly income, history of mental illness, current chronic illness, total penalty years spent in prison, thinking about life after prison, and social support had a P value less than 0.25 and were entered into the multivariable analysis. In the final model, variables such as having work inside the prison, length of sentence, history of mental illness, income per month greater than 1500 birr, and not thinking about life after release had a P value less than 0.05 and were thus significantly associated with depression (Table 3). In this study, prisoners who worked inside the prison were 40% less likely to develop depression (AOR 0.6, CI 0.37-0.96) than those who did not work inside the prison. Inmates with no history of mental illness were 63 percent (AOR 0.37, CI 0.16-0.85) less likely to develop depression than those with a history of mental illness. Prisoners with a monthly income of more than 1500 birr were 84% (AOR 0.16, CI 0.05-0.5) less likely to develop depression than those with a monthly income of less than 500 birr. The odds of developing depression among prisoners who were sentenced to more than 5 years were 2.2 times higher (AOR 2.2, CI 1.2-4) than among those prisoners who were sentenced to less than 1 year. Moreover, the study indicated that prisoners who did not think about their life after release were 60% (AOR 0.4, 95% CI 0.27-0.64) less likely to develop depression than those who did.
Discussion
This institution-based cross-sectional study was carried out to determine the prevalence of depression and to identify risk factors for depression among prisoners in East Gojjam Zone prisons. According to this study, the overall prevalence of depression among prisoners is 50.43% (95% CI 45-55%). This study's findings are consistent with those of a previous study conducted in Bahir-Dar, Ethiopia (45.5%) [24]. However, this finding is higher than those of studies conducted in Northwest Amhara (43.8%) [24], Jimma Town (41.9%) [8], Nigeria (42.2%) [16], Iran (29%) [15], and France (17.9%) [14]. On the contrary, the prevalence obtained in this study was lower than those of studies done in Hawassa Town (56.4%) [17], Mekelle, Ethiopia (55.9%) [26], and Malaysia (55.4%) [13]. The above variations could be explained by differences in the tool used to measure depression, the cut-off value, the study settings, the length of sentences, and differences in the environmental characteristics of prisons.
In this study, prisoners who worked inside the prison were 40% less likely to develop depression than those who did not work inside the prison. The findings are consistent with previous research conducted in Brazil [31], Hawassa Central Prison [17], and Jimma Town prison [8]. It is well known that working gives people pleasure in a variety of ways; it can be a way for them to spend their time, earn money, support their families, and live a better life both inside and outside of prison. Prisoners who had no past history of mental illness were 63% less likely to develop depression than those who had a history of mental illness. The finding is similar to studies in Hawassa Central Prison, Ethiopia [17], Nigeria [16], and France [14]. A possible reason is that the prison environment can easily reactivate past mental health problems and lead to depression. When compared to convicts with a monthly income of less than 500 Ethiopian birr, individuals with a monthly income of more than 1500 Ethiopian birr were 84% less likely to develop depression. A likely explanation is that having enough money allows inmates to go about their daily lives as usual and to obtain things such as food and clothing, as well as to support their families and their lives in and outside jail, which could reduce stress. Supportive findings are documented in Hawassa, Ethiopia [17] and Kenya [32].
The odds of developing depression among prisoners who were sentenced to more than 5 years were 2.2 times higher than among prisoners sentenced to less than 1 year. A possible explanation is that as the length of time spent in jail increases, convicts may become more concerned about their future. Furthermore, life in prison restricts their freedom of movement, separates them from their loved ones, denies them sexual intimacy, and disrupts many other aspects of their lives. As a result, staying in prison for a longer period of time can make prisoners depressed. This finding was supported by studies done in Bahir-Dar Town [24] and the United Kingdom [33].
Furthermore, this study found that prisoners who did not consider life difficulties after prison were 60% less likely to develop depression than those who did consider life difficulties after prison. This can be explained by the fact that, psychologically, excessive rumination can lead to anxiety, stress, and depression. When prisoners consider their life after release from prison, they may be concerned about a variety of issues. Because they are incarcerated, their home, family, and resources may be lost, and starting a new life may be the main challenge, so prisoners may develop depression when they consider the difficulties of life after incarceration. Supportive findings are documented in Northwest Amhara [19] and in Jimma town prison, Ethiopia [8].
In resource poor countries, the magnitude of psychiatric disorders in prison populations is higher than in the general population. The prevalence could even be higher than in high-income countries. Because correctional facilities in low-income economies frequently lack basic treatment resources, the implementation of cost-effective interventions and scalable treatments for individuals suffering from depression is required.
Conclusions
According to this survey, half of the inmates in East Gojjam prisons suffer from depression. In comparison to their counterparts, convicts who had no work in jail, had a history of mental illness, had a lower monthly income, and worried about life after prison were more likely to develop depression. According to the findings of this study, all prisoners who are severely depressed should be treated and assisted in their rehabilitation. Prisoners must be involved in work and given more opportunities to earn money. Prison administration officers must teach prisoners how to live their lives after they are released from prison. The professionals assigned to provide counseling in prison should address effective counseling, screening, and evaluation of the depression status of the prisoners. It would be preferable if special training and counseling were provided for prisoners with a history of mental illness as well as those who have been incarcerated for an extended period of time. Non-governmental organizations (NGOs) and the government would benefit from a greater emphasis on prisoner health.
Limitations of the study
Due to the nature of the cross-sectional study design, the study had its own limitations; it does not identify the temporal relationship between associated factors and depression. Because the study was conducted only in prisons, it does not generalize to the general population. Some of the reports were based on prior events, which can lead to recall bias. Alcohol drinking, Khat chewing, and other substances are delicate topics that might lead to social desirability bias.
|
2022-07-30T13:30:39.725Z
|
2022-07-30T00:00:00.000
|
{
"year": 2022,
"sha1": "98b2bfbc39840efc5e26e86e433a9be199a3758c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "98b2bfbc39840efc5e26e86e433a9be199a3758c",
"s2fieldsofstudy": [
"Sociology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
234209224
|
pes2o/s2orc
|
v3-fos-license
|
Conversion of municipal solid waste to refuse-derived fuel using biodrying
Municipal Solid Waste (MSW) in Indonesia comes from urban settlements, markets, and industries. MSW decomposes naturally without being used at all. The purpose of this study was to convert MSW to refuse-derived fuel (RDF) using biodrying. The research was conducted on a laboratory scale using a biodrying reactor. The biodrying process takes place aerobically with an airflow rate of 6 L/min; the highest temperature reached 60°C on the third day and the water content on the 21st day was 32.65%. The final RDF calorific value is 6,102.82 cal/g. This calorific value is equivalent to low-energy coal (brown coal). RDF from MSW can be applied in the cement industry, which requires a heat content >6000 cal/g, in steam power plants (PLTU), which require 5242 cal/g, in the metal industry, which requires 6000 cal/g, and in the paper industry, which requires 5240 cal/g, to carry out the production process.
Introduction
The primary sources of municipal solid waste in Indonesia are urban settlements, markets, offices, and industrial areas. In 2016 the amount of solid waste that entered the Banyuurip landfill in Magelang City was 71.82 tons/day [1]. This MSW was not used at all and decomposed naturally in the Banyuurip landfill. MSW is a proven energy resource because solid waste contains sufficient energy/heat to serve as a substitute fuel [2]. According to SNI No. 13-6011-1999, regarding the classification of coal resources and reserves, the lowest-rank coal, a soft and easily crushed type containing high water content (10-70%), can be used as an energy source if it has a calorific value <7,000 cal/g.
The technology used to convert solid waste into energy (waste to energy) has developed. Pretreatment of solid waste using biodrying technology can produce refuse-derived fuel (RDF). This technology is one of the best methods to maximize the energy recovered into fuel products [3]-[5]. Municipal solid waste from the Banyuurip landfill has previously been studied as a component of paving block materials, as a substitute for fine aggregate. A composition of portland cement (PC):sand (1:3) + 15% sludge produces a compressive strength of 197.080 kg/cm². It meets quality criterion B (it can be used for car parking spaces) [6]. Alam, Oktiawan and Wardhana (2014) planned to process the solid waste of the Banyuurip landfill into organic fertilizer and recycled plastic [7]. Based on that plan, organic fertilizer production is estimated at 15 tons per day and recycled plastic at 150 kg. However, this has not been realized in the field. From the energy perspective, municipal solid waste is comparable to fossil fuels because it contains oxidizable materials (mainly carbon and hydrogen), which can release energy quantified as heat. This energy can be used for heat production, power generation, and heat generation [4]. Therefore, other efforts are needed; converting MSW to energy is one way to increase MSW utilization. The purpose of this research is to convert solid waste from the Banyuurip landfill to refuse-derived fuel using biodrying. RDF can be used in the cement industry, metal industry, and paper industry.
Methodology
The research was conducted in the greenhouse of the environmental laboratory, Environmental Engineering Department, Faculty of Engineering, Diponegoro University. The biodrying reactor is used to convert solid waste from the final processing plant to refuse-derived fuel using biodrying. Solid waste samples came from the Banyuurip landfill, Magelang Regency.
The biodrying reactor is made of plywood with dimensions of 30 cm x 30 cm x 100 cm. Plywood was chosen for the reactor to avoid direct sunlight, which can increase the temperature and affect microbial activity. The first reactor is a control reactor (airflow rate 0 L/min), and the second reactor is a biodrying reactor (airflow rate 6 L/min). Both reactors are equipped with a cover made of non-woven geotextile so that air and water vapor easily escape the reactors. On one side of the reactor, three holes were made at heights of 25 cm (BR), 50 cm (TR), and 75 cm (AR) from the bottom. The holes serve to measure the temperature and act as sampling points for the solid waste matrix. At the bottom of the reactor there is a leachate outlet hole; above it a 5 cm thick layer of gravel is placed, covered with wire netting to separate the waste from the gravel. Aeration is supplied via a hose connected to a ½ inch PVC pipe placed horizontally on top of the gravel layer. The PVC pipe has holes at the top for air distribution. Figure 1 shows the schematic of the biodrying reactor. Municipal solid waste samples originated from the Banyuurip landfill, Magelang Regency. MSW was sorted into the categories of food waste, leaf waste, plastic waste, and paper waste. The composition of the solid waste (v/v) was 38.60%:15.24%:24.63%:21.53% for food waste:leaf waste:plastic waste:paper waste, respectively. The waste was chopped to 2-3 cm, mixed, and water was added until the water content in each reactor was approximately 60%. The waste was put into the two reactors (control reactor and biodrying reactor) until they were full. The biodrying reactor was supplied with an airflow rate of 6 L/min, while the control reactor was not aerated. The airflow was provided using a blower, and the airflow velocity was measured using a flow meter (Dyer, USA). The matrix temperature was measured using a thermometer every day at 06.00 am, 02.00 pm, and 10.00 pm for 21 days. Water content was measured daily using the gravimetric method. Calorific values were measured on the first day, at the peak temperature, and on the 7th, 14th, and 21st days. Calorific value testing used a bomb calorimeter in the environmental engineering laboratory of the Sepuluh Nopember Institute of Technology, Surabaya. The leachate produced during the biodrying process (if formed) was collected via a hose located under the reactor.
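The daily gravimetric moisture determination mentioned above reduces to weighing a sample before and after oven-drying. A minimal sketch with made-up masses, purely to illustrate the calculation:

```python
def water_content_pct(wet_mass_g, dry_mass_g):
    """Gravimetric water content (wet basis): percentage of mass lost on drying."""
    return (wet_mass_g - dry_mass_g) / wet_mass_g * 100.0

# Hypothetical daily samples of ~1 g taken from the biodrying reactor
daily_masses = [(1.02, 0.38), (1.00, 0.41), (0.99, 0.52), (1.01, 0.68)]
for day, (wet, dry) in enumerate(daily_masses, start=1):
    print(f"day {day}: {water_content_pct(wet, dry):.1f}% water")
```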
Temperature
The temperature measurement aims to assess the success of the conversion of solid waste from the Banyuurip landfill to refuse-derived fuel (RDF) using biodrying. Temperature is an important factor affecting water evaporation and the degradation of organic compounds in MSW [8]. Temperature was measured at three points: the bottom of the reactor (BR), located 25 cm from the bottom; the middle of the reactor (TR), located 50 cm from the bottom; and the top of the reactor (AR), located 75 cm from the bottom. Measurements were taken at 06.00 am, 02.00 pm, and 10.00 pm for 21 days. The results of the temperature measurements in this study are shown in figure 2. The solid waste matrix temperature reached the thermophilic phase on the third day, at 49°C and 60°C for the control reactor and the biodrying reactor, respectively. The temperature in the biodrying reactor was higher than that in the control reactor. This is due to the difference in airflow rate: the biodrying reactor received aeration of 6 L/min while the control reactor received 0 L/min. These results correspond with previous research, which states that different airflow rates cause temperature differences in the reactor and that an airflow rate of 1.13 L/min produces a higher solid waste matrix temperature than an airflow rate of 0.57 L/min [9].
The temperature in each reactor decreased after passing the peak temperature until it reached the mesophilic phase. The control reactor temperature decreased to the range 40°C-41°C on the 11th day, while the biodrying reactor temperature decreased to the range 35°C-39°C on the 11th day. A higher airflow rate cools the material more quickly and stops microbial activity [10]. After the 11th day, each reactor experienced a steady decline within the mesophilic phase, in the range of 30°C-39°C. This shows that the activity of microorganisms is no longer large, indicating biological stability after the biodrying process [10]. A gradual temperature decrease during the biodrying process is an indication that the activity of microorganisms is proceeding well [11]. In the biodrying process, the temperature affects the degradation process. This temperature affects the biodrying product, as indicated by the values of water content, ODS (organic dry solids), carbon, and ash [12].
Water content analysis
The biodrying process is evaluated based on the target moisture level, or water content, and the time needed to reach it. In general, the biodrying process should not exceed 21 days [10]. The water content is considered a critical parameter for evaluating the efficiency of the biodrying process. Water content affects microbial activity and the biodegradation of organic components during the biodrying process [13]. The water content decreases through two stages, namely the evaporation of water molecules (that is, a phase change from liquid to gas) from the surface of the solid waste fragments to the surrounding air, and the removal of the evaporated water, which is transported through the matrix by the airflow and carried out with the exhaust gases [14]. Water loss correlates with organic matter degradation and is influenced by temperature, airflow, and the combination of the two [15].
The results of measuring the water content in each reactor over 21 days are shown in figure 3. The water content was measured every day on a sample of approximately 1 gram, using the gravimetric method. The initial water content of the solid waste was 59.90% in the control reactor and 62.74% in the biodrying reactor. This water content is in accordance with the literature, where the optimum water content at the beginning of the biodrying process is between 50% and 65% [9,10]. The initial water content is very important for the continuity of microbial activity. Excessively high water content promotes anaerobic conditions because water limits the space for oxygen transport in the matrix. In contrast, low water content slows down the activity of microorganisms due to insufficient moisture and can lead to decreased drying performance.
|
2021-05-11T00:04:26.307Z
|
2021-01-09T00:00:00.000
|
{
"year": 2021,
"sha1": "1ade95bd969972dc1c76dbafd0ad47973f400d3a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/623/1/012003",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "8ece45d282967061c711b1261888a2d53d5dff0e",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science",
"Physics"
]
}
|
119498681
|
pes2o/s2orc
|
v3-fos-license
|
On the liquid-glass transition line in monatomic Lennard-Jones fluids
A thermodynamic approach to derive the liquid-glass transition line in the reduced temperature vs reduced density plane for a monatomic Lennard-Jones fluid is presented. The approach makes use of a recent reformulation of the classical perturbation theory of liquids [M. Robles and M. López de Haro, Phys. Chem. Chem. Phys. 3, 5528 (2001)] which is at grips with a rational function approximation for the Laplace transform of the radial distribution function of the hard-sphere fluid. The only input required is an equation of state for the hard-sphere system. Within the Mansoori-Canfield/Rasaiah-Stell variational perturbation theory, two choices for such an equation of state, leading to a glass transition for the hard-sphere fluid, are considered. Good agreement with the liquid-glass transition line derived from recent molecular dynamics simulations [Di Leonardo et al., Phys. Rev. Lett. 84, 6054 (2000)] is obtained.
The liquid-glass transition in Lennard-Jones (LJ) fluids has been a subject of interest for over twenty five years [1]. Recently, Di Leonardo et al. [2], using molecular dynamics simulations, determined the liquid-glass transition line of a monatomic LJ system in the reduced density (ρ*) vs. reduced temperature (T*) plane. Here ρ* = ρσ³ and T* = k_B T/ε, with ρ the number density, k_B the Boltzmann constant, T the temperature, and σ and ε being the usual parameters of the LJ potential (φ_LJ(r) = 4ε(σ¹²/r¹² − σ⁶/r⁶), where r is the distance). Based on these results, they propose an off-equilibrium criterion to define the glass transition temperature T_g. This effective temperature compares rather well with the T_g obtained from equilibrium calculations.
In a completely different context, a long time ago Hudson and Andersen [3] addressed the nature of the glass transition in monatomic liquids through an equilibrium calculation. In particular, in the case of the LJ fluid, they used the Weeks-Chandler-Andersen (WCA) perturbation theory of liquids [4] and two independent indications that a change in properties similar to a glass transition happens in the hard-sphere (HS) fluid at a packing fraction of 0.533 ± 0.014. Given the then scarce amount of simulation data available to compare with, their conclusion was that the use of the HS fluid as a reference fluid within the WCA scheme was adequate to derive the glass-transition line of monatomic fluids. As they already pointed out, for their approach to be useful in connection with glass formation in real systems, it is crucial that the HS fluid itself undergoes a glass transition. As is well known, the existence of a glass transition in the HS fluid has been a debatable issue for a long time, but evidence from various sources seems to suggest that this is indeed the case [5,6,7,8,9,10].
The main aim of this letter is to assess whether an equilibrium calculation, such as the one carried out by Hudson and Andersen [3], can still be useful to describe the new data on the liquid-glass transition line of Lennard-Jones fluids. In a recent paper [11], we have reformulated the most popular schemes of the perturbation theory of liquids, namely the Barker-Henderson [12], the variational Mansoori-Canfield/Rasaiah-Stell [13,14] and the WCA [4] schemes, using the HS fluid as the reference fluid. Our study focussed on the equilibrium properties (equation of state, critical point, liquid and solid branches of the reduced temperature vs. reduced density curve at coexistence, and radial distribution function) of the Lennard-Jones fluid. All the calculations related to the different perturbative schemes that we reported in ref. [11] rely on a known analytical rational function approximation (RFA) to the radial distribution function (rdf) g_HS(r) of the HS fluid developed by Yuste and Santos [15]. Their main idea is to propose that the Laplace transform G(t) = L[r g_HS(r)], where t is the Laplace transform variable, may be approximated in terms of a rational function Φ(t), where η = (π/6)ρd³ is the packing fraction and d is the hard-sphere diameter. The coefficients S_i and L_i entering Φ(t) are algebraic functions of η. They are determined by imposing two physical restrictions on the rdf [16], namely 1. The first integral moment of h_HS(r) = g_HS(r) − 1 is well defined and non-zero.
2. The second integral moment of h_HS(r) must guarantee the thermodynamic consistency between the compressibility factor Z_HS = p/(ρ k_B T) (p being the pressure) and the isothermal susceptibility χ_HS = [d(ηZ_HS)/dη]⁻¹.
These two conditions readily imply [15,16] explicit expressions for the coefficients, in which Z_PY and χ_PY = (1 − η)⁴/(1 + 2η)² denote the compressibility factor and isothermal susceptibility arising in the Percus-Yevick theory. In order to close the approximation, a given equation of state for the HS fluid, i.e. an explicit expression for the compressibility factor Z_HS, is needed.
All the perturbation schemes mentioned above introduce an effective (in general density- and temperature-dependent) diameter of the spheres as a fitting parameter to adjust some of the thermodynamic properties of the system of interest with respect to those of the reference HS system. Once this effective diameter is determined, one can infer by inversion the values of the temperature and density in the real system that correspond to a given packing fraction of the HS fluid. This fact was used in ref. [11] to determine the liquid and solid branches of the reduced temperature vs. reduced density curve at coexistence for a LJ fluid from the knowledge of the packing fractions for the fluid-solid transition, η_F-S = 0.494, and the solid-fluid transition, η_S-F = 0.54, in the HS fluid, respectively [17]. In a similar fashion, the liquid-glass transition line for the LJ system in the ρ*-T* plane may be derived from the simple relationship η_g = (π/6) ρ* [d*(ρ*, T*)]³, referred to below as eq. (9), where η_g is the packing fraction corresponding to the glass transition in the HS fluid and we have introduced the reduced diameter (in units of σ), denoted by d*.
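For illustration, once η_g and the effective reduced diameter are known, eq. (9) gives the reduced density of the glass line directly. The sketch below uses a few hypothetical constant values of d* purely to show the inversion; in the actual calculation d* depends on both ρ* and T* through the MC/RS minimization, so the relation must be solved self-consistently.

```python
import math

ETA_G = 0.5684  # glass-transition packing fraction obtained below with Z_prop

def rho_star_on_glass_line(d_star, eta_g=ETA_G):
    """Invert eta_g = (pi/6) * rho* * d*^3 for the reduced density of the glass line."""
    return 6.0 * eta_g / (math.pi * d_star ** 3)

# Hypothetical effective diameters (in units of sigma)
for d_star in (1.02, 1.00, 0.98):
    print(f"d* = {d_star:.2f}  ->  rho* = {rho_star_on_glass_line(d_star):.3f}")
```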
In order to proceed with such a derivation, one has to specify the perturbation scheme and to know the value of η_g. As already stated above, Hudson and Andersen used the WCA and took η_g = 0.533 ± 0.014. In our case, we will consider two different equations of state for the HS fluid that allow us to calculate the value of η_g in a self-consistent way, and take the MC/RS scheme.
The first such equation of state is the Padé(4,3) approximant constructed from the knowledge of the first eight virial coefficients [18,19]; the corresponding compressibility factor is denoted Z_43 below. Note that, as was pointed out in previous work [8,20], within the RFA the Padé(4,3) leads to a glass transition in the HS fluid at η_g = 0.5604. It should also be borne in mind that this equation of state has a simple pole at a packing fraction very close to the fcc close-packing fraction [19].
On the other hand, the second equation of state is an ad hoc approximation constructed in the following way. The well-known Carnahan-Starling (CS) [21] equation of state has been shown to be very accurate throughout the complete fluid region, and even in a small density range within the metastable regime, but has a pole at η = 1 which is clearly unphysical. We therefore demand that the new compressibility factor Z_prop has a pole at the random close-packing fraction η_rcp (up to this point taken as a parameter) and that the first eight coefficients in its series expansion coincide with the corresponding coefficients in the series expansion of the Carnahan-Starling compressibility factor. Thus, Z_prop(η) is taken to be a rational function whose coefficients a_i (i = 1, ..., 5) and b_j (j = 1, 2) depend on the value of η_rcp. In order to determine this value, we further assume that, according to the criterion used earlier [8,20], a glass transition in the HS fluid (considered to be a second-order phase transition) occurs at η_g (also unspecified at this stage) if S_4 vanishes at η_g. The above conditions lead to a set of equations, eqs. (11)-(14), in which A is a constant; in writing the compressibility factor for the glass we have taken the form proposed by Speedy [5,7]. The set of equations (12)-(14) together with (11) allows us to determine η_rcp, A and η_g in a self-consistent way. The results are η_rcp = 0.6504, A = 2.780 and η_g = 0.5684, which are well within the range of published values for these quantities. In turn, the value of η_rcp leads to the explicit form of Z_prop(η), eq. (15). In the range 0 ≤ η ≤ 1 this equation of state presents only a simple pole at η = η_rcp.
In order to illustrate the numerical accuracy of these two equations of state in the metastable region, in Fig. 1 we display the inverse of the contact values of the rdf, g(d⁺)⁻¹ = 4η/(Z_HS − 1), derived from them as a function of the packing fraction, and compare the results with the simulation data obtained by Rintoul and Torquato [22]. In this figure we have also included the predictions of the corresponding equations of state for the glass, adopting the form suggested by Speedy [5,7].
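For reference, the quantity plotted in Fig. 1, g(d⁺)⁻¹ = 4η/(Z_HS − 1), is straightforward to evaluate once a compressibility factor is chosen. The sketch below uses the standard Carnahan-Starling expression as a stand-in; the Z_43 and Z_prop forms discussed in the text are not reproduced here.

```python
import numpy as np

def z_carnahan_starling(eta):
    """Carnahan-Starling compressibility factor for the hard-sphere fluid."""
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta)**3

def inverse_contact_value(eta, z_func):
    """Inverse contact value of the rdf: g(d+)^-1 = 4*eta / (Z_HS(eta) - 1)."""
    return 4.0 * eta / (z_func(eta) - 1.0)

etas = np.linspace(0.30, 0.60, 7)
print(inverse_contact_value(etas, z_carnahan_starling))
```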
Figure 1 (caption, continued): The region corresponding to the glass has been obtained from Z_glass; in the case of Z_43, A = 2.765 and η_rcp = 0.6448 [20]. The full symbols are simulation results of ref. [22] and the arrows indicate the location of η_g for each case.
For the sake of making the paper self-contained, we now recall the condition required to derive the temperature- and density-dependent diameter within the MC/RS perturbation scheme (for details see refs. [11,13,14]). Here the effective diameter is chosen to minimize the Helmholtz free energy of the perturbed system. To first order in β = 1/T*, this diameter is determined from the stationarity condition ∂A/∂d* = 0 [11], with A the first-order Helmholtz free energy; this condition is referred to below as eq. (16). Clearly, in solving this equation it should be understood that Z_HS and Φ(t) have been expressed as functions of ρ* and d*.
Using the effective diameters determined from eq. (16) with either Z_43 or Z_prop and the condition given by eq. (9) (with the η_g value corresponding to each equation of state), we have determined the liquid-glass transition lines in the ρ*-T* plane for the LJ fluid. These are shown in Fig. 2, where we have also included the recent simulation results of Di Leonardo et al. [2]. As clearly seen in the figure, not only is the qualitative trend observed in the simulations reproduced, but the quantitative agreement is also rather good, especially for the case of Z_prop. A remarkable aspect of these results is that they have been derived self-consistently with no free parameters.
One question that immediately arises is to what extent the above results depend on the MC/RS perturbation scheme and the choices for Z_HS. Concerning the first issue, which is particularly relevant in view of the fact that the previous calculation by Hudson and Andersen [3] used the WCA, we have checked that the performance of this latter scheme is much poorer. This may reflect that the choice of minimizing the Helmholtz free energy of the perturbed system to derive the effective diameter may be more stringent than equating the isothermal compressibilities of the actual and the HS reference system. Something similar applies to the Barker-Henderson perturbation theory, in which the lack of density dependence of the effective diameter does not reproduce the desired trend. As far as the second issue is concerned, we have also performed calculations taking the CS equation of state (which does not lead to a glass transition in the RFA formulation) and adjusted the value of the packing fraction η_g until good agreement with the simulation data of Di Leonardo et al. [2] was obtained. In fact, for η_g = 0.55 we get results pretty close to those derived using Z_prop. The above suggests that if the exact equation of state of the HS system (including a glass transition) were available, then the use of perturbation theory and the RFA approach would yield a very accurate description of the liquid-glass transition line of the LJ fluid.
In conclusion, in this paper we have shown that an equilibrium approach to the glass transition in LJ fluids, much in the same spirit as discussed by Baeyens and Verschelde [6] in the case of HS fluids, is wholly compatible with the molecular dynamics simulation results for the liquid-glass transition line reported recently [2]. Perhaps the key aspect of our approach is its self-consistency and the provision of a theory with no free parameters. Furthermore, the application of the same approach for other monatomic fluids may be laborious, but in principle should follow the same steps.
|
2019-04-14T01:59:28.449Z
|
2002-03-29T00:00:00.000
|
{
"year": 2002,
"sha1": "3a873a3894b45b4d786ef8690b078c244eb1b6c8",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0203603",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "95f186857654f56bf43ceed61057434b2252e7ef",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
261705559
|
pes2o/s2orc
|
v3-fos-license
|
Atmospheric retrievals with petitRADTRANS
petitRADTRANS (pRT) is a fast radiative transfer code used for computing emission and transmission spectra of exoplanet atmospheres, combining a FORTRAN back end with a Python based user interface. It is widely used in the exoplanet community with 222 references in the literature to date, and has been benchmarked against numerous similar tools. The spectra calculated with pRT can be used as a forward model for fitting spectroscopic data using Monte Carlo techniques, commonly referred to as an atmospheric retrieval. The new retrieval module combines fast forward modelling with nested sampling codes, allowing for atmospheric retrievals on a large range of different types of exoplanet data. Thus it is now possible to use pRT to easily and quickly infer the atmospheric properties of exoplanets in both transmission and thermal emission.
Statement of need
Atmospheric retrievals are a cornerstone of exoplanet atmospheric characterisation. Previously this required interfacing with a sampler and writing a likelihood function in order to explore a parameter space and fit input data, following the description provided in Mollière et al. (2019). pRT now provides a powerful and user-friendly tool for researchers to fit exoplanet spectra with a range of built-in or custom atmospheric models. Various thermal structures, chemistry and cloud parameterisations and opacity calculation methods can be combined and used to perform parameter estimation and model comparison for a given atmospheric spectrum. The new retrieval module standardises data input, parameter setup, likelihood calculation and the interface with MultiNest, allowing for both simple 'out-of-the-box' retrievals as well as fully customisable setups. New tutorials and documentation have been published to facilitate the adoption of the retrieval module as a widely used tool in the exoplanet community. With increasing volumes of both ground- and space-based spectra available, it is necessary for exoplanet researchers to have access to a range of characterisation tools. While many other retrieval codes exist, pRT offers a computationally efficient and highly flexible framework for inferring the properties of a wide variety of exoplanets. While most features of pRT are available in other retrieval codes, few of them implement the diversity of the pRT feature set; the availability of both correlated-k and line-by-line opacities, free, equilibrium, and disequilibrium chemistry, and multiple cloud implementations in a single framework lets it stand out as a uniquely flexible approach to atmospheric retrievals.
petitRADTRANS Retrieval Module
The new retrieval module combines the existing Radtrans forward modelling class with a nested sampler via a likelihood function to perform an atmospheric retrieval. Both MultiNest (Buchner et al., 2014; Feroz et al., 2009, 2019; Feroz & Hobson, 2008) and UltraNest (Buchner et al., 2014; Buchner, 2019) samplers are available, with both offering MPI implementations that allow for easy parallelisation.
Datasets, priors and other retrieval hyperparameters are set through the RetrievalConfig class, while the models module includes a range of complete atmospheric models that can be fit to the data. Users can also define their own model function, either by making use of temperature profiles from the physics module and chemistry parameterisations from the chemistry module or by implementing their own forward model. Once set up, the Retrieval class is used to run the retrieval using the chosen sampler and to produce publication-ready figures. Multiple datasets can be included in a single retrieval, with each dataset receiving its own Radtrans object used for the radiative transfer calculation, where some or all forward model parameters may be shared between the different datasets. This allows for highly flexible retrievals where multiple spectral resolutions, wavelength ranges and even atmospheric models can be combined in a single retrieval. Each dataset can also receive scaling factors (for the flux, uncertainties or both), error inflation factors and offsets. The model functions are used to compute a spectrum S, which is convolved to the instrumental resolution and binned to the wavelength bins of the data using a custom binning function to account for non-uniform bin sizes. The resulting spectrum is compared to the data with flux F and covariance C in the likelihood function -2 ln L = (S − F)ᵀ C⁻¹ (S − F) + ln(det(2πC)). The second term is included in the likelihood to allow for uncertainties to vary as a free parameter during the retrieval, and penalises overly large uncertainties. A likelihood function for high-resolution data, based on that of Gibson et al. (2020), is also available.
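The Gaussian log-likelihood with an uncertainty penalty described above can be written compactly with numpy. This is a generic sketch, not pRT's internal implementation; the diagonal covariance and the error-inflation factor b multiplying the quoted uncertainties are assumptions made for illustration.

```python
import numpy as np

def log_likelihood(model, flux, err, b=1.0):
    """ln L = -0.5 * [ r^T C^-1 r + ln det(2 pi C) ] with diagonal covariance C = diag((b * err)^2)."""
    var = (b * err) ** 2                          # inflated, diagonal covariance
    r = flux - model
    chi2 = np.sum(r ** 2 / var)                   # first term
    penalty = np.sum(np.log(2.0 * np.pi * var))   # second term: penalises overly large uncertainties
    return -0.5 * (chi2 + penalty)

# Hypothetical binned spectrum
flux = np.array([1.02, 0.98, 1.01])
err = np.array([0.02, 0.02, 0.03])
model = np.array([1.00, 1.00, 1.00])
print(log_likelihood(model, flux, err, b=1.2))
```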
pRT can compute spectra either using line-by-line calculations or using correlated-k (c-k) tables for defining the opacities of molecular species. We include up-to-date correlated-k line lists from ExoMol (Chubb et al., 2021; McKemmish et al., 2016; Polyansky et al., 2018; Tennyson & Yurchenko, 2012) and HITEMP (Hargreaves et al., 2020), with the full set of available opacities listed in the online documentation. The exo-k package is used to resample the correlated-k opacity tables to a lower spectral resolution in order to reduce the computation time (Leconte, 2021). Combining the c-k opacities of multiple species requires mixing the distributions in g space. This operation is necessary when calculating emission spectra and accounting for multiple scattering in the clouds. Previously, this was accomplished by taking 1000 samples of each distribution. This sampling process resulted in non-deterministic spectral calculations with a small (up to 1%) scatter about the expected result, which could lead to unexpected behaviour from the nested sampler, as the same set of parameters could result in different log-likelihoods. pRT has been updated to fully mix the c-k distributions, iteratively mixing in any species with a significant opacity contribution: a species is only mixed in if the highest opacity value in a given spectral bin is larger than a threshold value. This threshold value is obtained by listing the smallest opacity value for every species in a given spectral bin, and then setting the threshold to 1% of the largest value from that list for each spectral bin. The resulting grid is linearly interpolated back to the 16 points at each pressure and frequency bin as required by pRT. This fully deterministic process produces stable log-likelihood calculations, and resulted in a 5× improvement in the speed of the c-k mixing function.
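The thresholding rule used to decide which species enter the full c-k mixing follows directly from the description above. The array layout (species x spectral bins x g points) in this numpy sketch is an assumption made for illustration, not pRT's internal data structure.

```python
import numpy as np

def species_to_mix(kappa, fraction=0.01):
    """kappa: opacities with shape (n_species, n_bins, n_g).
    Per bin, the threshold is `fraction` of the largest per-species minimum opacity;
    a species is mixed in only where its maximum opacity in that bin exceeds the threshold."""
    per_species_min = kappa.min(axis=2)                 # (n_species, n_bins)
    per_species_max = kappa.max(axis=2)                 # (n_species, n_bins)
    threshold = fraction * per_species_min.max(axis=0)  # (n_bins,)
    return per_species_max > threshold                  # boolean mask (n_species, n_bins)

kappa = np.random.lognormal(mean=-5.0, sigma=3.0, size=(4, 10, 16))
print(species_to_mix(kappa).sum(axis=0))  # number of species mixed in each bin
```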
Various thermal, chemical and cloud parameterisations are available in pRT. Built-in temperature profiles range from interpolated splines to physically motivated profiles as in Guillot (2010) and Mollière et al. (2020). Equilibrium and disequilibrium chemistry can be interpolated from a pre-computed grid on-the-fly. Chemical abundances can also be freely retrieved, with the additional possibility of using a combination of free and chemically consistent abundances. Cloud parameterisations range from a 'grey' continuum opacity applied at all wavelengths, to clouds parameterised as in Ackerman & Marley (2001), using log-normal or Hansen (1971) particle size distributions with real optical opacities for different compositions and particle shapes, and including self-scattering. Users can also pass functions to the Radtrans object that encode any absorption or scattering opacity as a function of wavelength and atmospheric position, allowing for a generic cloud implementation. Included in pRT is an option to use an adaptive pressure grid with a higher resolution around the location of the cloud base, allowing for more precise positioning of the cloud layers within the atmosphere.
Photometric data are fully incorporated into the retrieval process. The spectral model is multiplied by a filter transmission profile from the SVO database using the species package (Stolker et al., 2020). This results in accurate synthetic photometry, which can be compared to the values specified by the user with the add_photometry function.
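Synthetic photometry of this kind reduces to a transmission-weighted average of the model spectrum over the filter profile. The sketch below is a generic illustration, not the species or pRT implementation, and assumes both spectrum and filter are given on wavelength grids.

```python
import numpy as np

def synthetic_photometry(wavelength, flux, filt_wavelength, filt_transmission):
    """Filter-averaged flux: integral of F(lambda) T(lambda) dlambda / integral of T(lambda) dlambda."""
    transmission = np.interp(wavelength, filt_wavelength, filt_transmission, left=0.0, right=0.0)
    return np.trapz(flux * transmission, wavelength) / np.trapz(transmission, wavelength)

# Hypothetical spectrum and a box-like filter between 1.2 and 1.4 micron
wl = np.linspace(1.0, 2.0, 500)
flux = 1.0 + 0.1 * np.sin(10.0 * wl)
filt_wl = np.linspace(1.2, 1.4, 50)
filt_tr = np.ones_like(filt_wl)
print(synthetic_photometry(wl, flux, filt_wl, filt_tr))
```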
Publication-ready summary plots of best fits, temperature and abundance profiles and corner plots can be automatically generated. Multiple retrieval results can be combined in the plots for model comparisons. Such results have been benchmarked against other widely used retrieval codes, such as PLATON (Zhang et al., 2019), POSEIDON (Grant et al., 2023) and ARCiS (Dyrek et al., 2024). The forthcoming retrieval comparison of the JWST Early Release Science (ERS) program will comprehensively compare pRT and other retrieval codes (in prep). Figure 1 shows the fit of a transmission model to the JWST/NIRISS/SOSS observations of WASP 39 b (Feinstein et al., 2023) from the ERS program.
Figure 1: Typical example of default pRT outputs. This highlights the fit of a transmission spectrum model to JWST/NIRISS/SOSS data of WASP 39 b as part of the Transiting Early Release Science program.
|
2023-09-14T06:42:58.423Z
|
2023-09-13T00:00:00.000
|
{
"year": 2023,
"sha1": "7c3e03261de7c035533f53554f067010a9885df4",
"oa_license": "CCBY",
"oa_url": "https://joss.theoj.org/papers/10.21105/joss.05875.pdf",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "103f0c96fb5a7103f0e580e1eae4426c5cde84ce",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
}
|
204152129
|
pes2o/s2orc
|
v3-fos-license
|
ST3Gal3 confers paclitaxel-mediated chemoresistance in ovarian cancer cells by attenuating caspase-8/3 signaling
The aberrant expression of sialyltransferase has a role in cell differentiation, neoplastic transformation and the progression of various types of cancer. Our previous studies have shown that high expression of β-galactoside-α2,3-sialyltransferase III (ST3Gal3) in the metastatic ovarian cancer cell line HO8910PM attenuated cisplatin-induced apoptosis. The present study demonstrated that paclitaxel-induced chemoresistance in ovarian cancer cells upregulated the expression of ST3Gal3 and reduced the activity of caspase-8/3. The results of the present study revealed that the endogenous levels of ST3Gal3 mRNA and protein were significantly higher in HO8910PM cells compared with SKOV3 cells. A higher expression of ST3Gal3 was correlated with an increased resistance to paclitaxel, while the downregulation of ST3Gal3 resulted in paclitaxel-induced apoptosis. Paclitaxel upregulated ST3Gal3 expression at the mRNA and protein levels in HO8910PM cells, but not in SKOV3 cells. Silencing of ST3Gal3 by small interfering RNA reversed these effects and increased the protein levels of caspase-8/3, which may contribute to paclitaxel-induced apoptosis. The results of the present study suggested that ST3Gal3 was a target for paclitaxel-related resistance during ovarian cancer chemotherapy.
Introduction
Ovarian cancer remains one of the most aggressive and highly recurrent malignant diseases worldwide, with a poor prognosis (1,2). The vague clinical symptoms in the early stages of ovarian cancer have a major role in delaying intervention and treatment (3). Several diagnostic biomarkers have been identified for ovarian cancer. Serum human epididymis protein 4 (HE4) and transthyretin (TTR) were identified as novel biomarkers for ovarian cancer (4). Compared with the gold standard marker mucin-16 (CA125), HE4 is suitable for patients with advanced ovarian cancer, while TTR is a better serum biomarker for patients with early stage ovarian cancer (4). A panel of four serum biomarkers (CA125, HE4, E-cadherin and interleukin-6) has been shown to have a sensitivity of 95-100% for patients with early stage ovarian cancer (5). However, unlike the progress in the discovery of diagnostic biomarkers for ovarian cancer, biomarkers associated with more efficient therapeutic targets remain elusive.
Sialylation is one of the essential molecular posttranslational modifications, and has important roles in metabolism, immunity, development and cancer biology (6)(7)(8)(9). In ovarian cancer, sialylation of glycoproteins is a common modification in ovarian cancer proximal fluids (10). Aberrantly sialylated N-linked glycopeptides may serve as serum biomarkers for patients with ovarian cancer (11)(12)(13). The fully sialylated α-chain of complement 4-binding protein has been identified as a diagnostic and prognostic marker for ovarian cancer (14,15). β-galactoside-α2,3-sialyltransferase I (ST3Gal1) was found to be expressed at a higher level in the advanced stage of epithelial ovarian cancer, and was demonstrated to facilitate epidermal growth factor receptor (EGFR) signaling and the migration and peritoneal dissemination of ovarian cancer cells (16). The α2,6 N-linked sialylation of the β1 integrins was reported to promote cell adhesion and invasion of ovarian cancer cells (17). In addition, the volume of ascites indicated the occurrence of transmesothelial invasion, which correlated with a poorer prognosis for patients with ovarian cancer (18,19). It has recently been reported that there is a positive correlation between the volume of ascites and the amount of serum sialylated structures in patients with epithelial ovarian cancer (8). Therefore, sialyltransferase-catalyzed sialylation is a prevalent and aggravating factor in ovarian cancer initiation and progression.
The treatment of ovarian cancer includes surgery, chemotherapy and immunotherapy (20). Due to the vague clinical symptoms in the early stages of ovarian cancer, most patients with ovarian cancer are identified when the malignancy has become advanced, which reduces the availability of surgical intervention (21,22). The first-line chemotherapies used for ovarian cancer are predominantly platinum-based and taxane-based drugs (22,23). Surgery in combination with chemotherapy, or chemotherapy alone, remains the standard care for advanced ovarian cancer, although most patients with advanced ovarian cancer develop chemoresistance, metastasis and bowel obstructions, which are the most frequent causes of mortality (22). The molecular mechanisms of chemoresistance are complex, including the maintenance of cancer stem cells, the aberrant activation of multi-drug resistance pathways, the aberrant activation of ABC transporters and oncogenic mutations (24)(25)(26)(27). HE4 not only serves as a diagnostic marker for ovarian cancer, but is also a predictor of platinum sensitivity in ovarian cancer (28). α2,6 N-linked sialylated EGFR confers acquired resistance to gefitinib in ovarian cancer (29). ST6GAL1 confers cisplatin resistance in ovarian tumor cells (30). Our previous study showed that ST3Gal3 correlated with cisplatin resistance in ovarian tumor cells (31). In the present study, the relationship between ST3Gal3 expression and paclitaxel resistance in ovarian tumor cells was investigated in order to provide a better understanding of the effect of paclitaxel treatment alone or in combination with ST3Gal3 silencing.
Materials and methods
Cell culture. The human ovarian cell line SKOV3 was purchased from the American Type Culture Collection, and the HO8910PM cell line was purchased from The Cell Bank of Type Culture Collection of the Chinese Academy of Sciences. SKOV3 cells were cultured in RPMI-1640 medium (Gibco; Thermo Fisher Scientific, Inc.) containing 10% FBS (Gibco; Thermo Fisher Scientific, Inc.) and 1% penicillin-streptomycin (Gibco; Thermo Fisher Scientific, Inc.). HO8910PM cells were cultured in DMEM high-glucose medium (Gibco; Thermo Fisher Scientific, Inc.) containing 10% FBS and 1% penicillin-streptomycin. All cell lines were cultured at 37˚C in a humidified incubator with 5% CO2.
RNA isolation and reverse transcription-quantitative (RT-q)PCR. Total RNA was extracted using TRIzol reagent (Invitrogen; Thermo Fisher Scientific, Inc.), according to the manufacturer's instructions. Total RNA was reverse transcribed using TransScript One-Step gDNA Removal and cDNA Synthesis SuperMix (Beijing Transgen Biotech Co., Ltd.). Complementary DNA was amplified using UltraSYBR Mixture (High ROX; CWBio) and the following primers: ST3Gal3 forward, 5'-AAA ACG ACA CTG CGC ATC AC-3' and reverse, 5'-TCG AGT GGC CAC AGA TTT CC-3'; and GAPDH forward, 5'-AGC CTC AAG ATC ATC AGC-3' and reverse, 5'-GAG TCC TTC CAC GAT ACC-3'. The qPCR cycling conditions were as follows: 95˚C for 30 sec, followed by 40 cycles of 95˚C for 5 sec and 60˚C for 40 sec. The relative levels of mRNA expression were normalized to GAPDH and calculated using the 2-ΔΔCq method (32).
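As a brief illustration of the quantification step, the sketch below computes relative expression with the 2-ΔΔCq method from Cq values; the Cq numbers and condition labels are invented for illustration only and are not data from this study.

```python
# Minimal sketch of the 2^-(delta delta Cq) calculation described above.
# All Cq values below are invented placeholders, not data from this study.

def relative_expression(cq_target, cq_reference, cq_target_ctrl, cq_reference_ctrl):
    """Fold change of a target gene versus a control condition,
    normalized to a reference gene (e.g., GAPDH)."""
    delta_cq_sample = cq_target - cq_reference              # normalize sample to reference gene
    delta_cq_control = cq_target_ctrl - cq_reference_ctrl   # normalize control to reference gene
    delta_delta_cq = delta_cq_sample - delta_cq_control
    return 2 ** (-delta_delta_cq)

# Example: hypothetical ST3Gal3 Cq values for a treated sample vs. an untreated control.
print(relative_expression(cq_target=24.1, cq_reference=17.8,
                          cq_target_ctrl=25.6, cq_reference_ctrl=17.9))
```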
Cell counting assays. SKOV3 and HO8910PM cells were seeded in 96-well plates at a density of 5x10^3 cells/well. The next day, the cells were treated with 0, 5, 10, 20, 40, 80, 160 or 320 ng/ml paclitaxel (Sigma-Aldrich; Merck KGaA) or the equivalent volume of DMSO as a negative control. After 48 h of incubation, cell viability was determined using the CCK-8 assay (Beyotime Institute of Biotechnology), according to the manufacturer's instructions. Cytotoxicity was calculated as follows: cytotoxicity (%)=[1-(optical density of tested cells)/(optical density of control cells)] x100. The IC50 (half maximal inhibitory concentration) value was calculated using GraphPad Prism 6 (GraphPad Software, Inc.).
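The cytotoxicity formula above and the IC50 estimate can be reproduced in a short script; the optical densities and the four-parameter logistic fit below are a sketch under assumed example values, not the study's data (the authors used GraphPad Prism 6 for curve fitting).

```python
# Sketch: cytotoxicity (%) from CCK-8 optical densities and an IC50 estimate
# via a four-parameter logistic fit. All numbers are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

doses = np.array([5, 10, 20, 40, 80, 160, 320], dtype=float)   # ng/ml paclitaxel
od_treated = np.array([1.05, 0.98, 0.85, 0.70, 0.52, 0.38, 0.30])
od_control = 1.10  # mean optical density of DMSO-treated control wells

cytotoxicity = (1 - od_treated / od_control) * 100  # % cytotoxicity as defined above
viability = 100 - cytotoxicity

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1 + (x / ic50) ** hill)

params, _ = curve_fit(four_pl, doses, viability, p0=[0, 100, 50, 1], maxfev=10000)
print(f"Estimated IC50: {params[2]:.1f} ng/ml")
```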
Small interfering (si)RNA transfection and paclitaxel treatment. In total, three ST3Gal3 siRNA sequences were designed and synthesized by Guangzhou RiboBio Co., Ltd. The siRNA sequence for si-CTRL was: 5'-TTC TCC GAA CGT GTC ACG T-3'. The siRNA sequences for ST3Gal3 were: 1#-sense, 5'-CGT GGA AGC TAC ACT TAC T-3'; 2#-sense, 5'-CCT GAA TCT GGA CTC TAA A-3'; and 3#-sense, 5'-CCT GGA CGC ACA ATA TCC A-3'. These siRNA sequences were tested in a previous study (31), and the 1# siRNA sequence was then used for further experiments. Briefly, cells were seeded into 6-well plates at a density of 2x10^5 cells/well. The next day, 7.5 µl Lipofectamine® RNAiMAX reagent (Invitrogen; Thermo Fisher Scientific, Inc.) was diluted in 125 µl Opti-MEM (Invitrogen; Thermo Fisher Scientific, Inc.) and 30 pmol siRNA was diluted in 125 µl Opti-MEM, and each was mixed by vortexing for 10 sec. The diluted siRNA was added to the diluted Lipofectamine® RNAiMAX reagent and incubated for 10 min at room temperature. During the incubation, the cells were washed once with 3 ml of PBS and 2 ml fresh growth medium was added. Subsequently, the 250 µl transfection mixture was added dropwise onto the cells in the 6-well plate, incubated for two days, and the cells were then exposed to 20 ng/ml paclitaxel for a further 48 h.
Flow cytometry for Maackia amurensis lectin II (MAL II) staining. Briefly, after the medium was discarded, cells were washed twice with PBS and digested with trypsin/0.25% EDTA (Gibco; Thermo Fisher Scientific, Inc.). Fresh growth medium was added to terminate the digestion, and cells were collected by centrifugation at 1,000 x g for 5 min at room temperature. The cells were resuspended and washed twice in 1 ml PBS. The cell pellets were resuspended in 100 µl of HEPES containing 0.5% BSA and 2.5 µg/ml biotinylated MAL II (Vector Laboratories, Ltd.), and incubated at room temperature for 2 h in the dark. The cells were washed twice with PBS and incubated with 1 µg/ml streptavidin-phycoerythrin (Sigma-Aldrich; Merck KGaA) for 1 h at room temperature in the dark. After washing the stained cells twice with PBS, cell surface MAL II was quantified using a FACSCanto II flow cytometer (BD Biosciences) and analyzed with BD CellQuest software version 3.3 (BD Biosciences), according to the manufacturer's instructions.
Flow cytometry for apoptosis analysis. Cells were digested and collected as aforementioned. Cells were resuspended and briefly washed twice in 1 ml PBS. The final pellets were resuspended in 500 µl of 1X binding buffer, and 5 µl of Annexin V-FITC (BD Biosciences) was added and incubated at room temperature for 15 min in the dark. Then, 5 µl of propidium iodide (BD Biosciences) was added to the cells and incubated at room temperature for 5 min in the dark. Apoptosis was immediately quantified using the FACSCanto II flow cytometer (BD Biosciences) and analyzed with BD CellQuest software version 3.3 (BD Biosciences).
TUNEL assay. In brief, cells were fixed with 4% paraformaldehyde at room temperature for 20 min and washed twice with PBS. Cells were then permeabilized with 0.3% Triton X-100 at room temperature for 10 min, washed once with PBS and incubated with 0.3% H2O2 in PBS for a further 20 min in the dark. After washing twice with PBS, 50 µl TUNEL working solution (Beyotime Institute of Biotechnology) was added to each sample and incubated at room temperature for 1 h. After washing twice with PBS, stop solution was added at room temperature for 10 min. After washing twice with PBS, streptavidin-HRP working solution was added to the cells at room temperature for 30 min. After washing twice with PBS, 3,3'-diaminobenzidine solution was dropped onto the samples and incubated at room temperature for 3 min. After washing with PBS and mounting with neutral balsam, images were captured using a Leica optical microscope (Leica Microsystems, Inc.) under routine light microscopy (magnification, x100).
Statistical analysis. Experiments were independently performed at least three times and the results were qualitatively similar; representative experiments are shown. Numerical data are presented as the mean ± SD. Statistical analyses were performed using one-way ANOVA followed by the Tukey-Kramer multiple comparisons test in GraphPad Prism 6 (GraphPad Software, Inc.). P<0.05 was considered to indicate a statistically significant difference.
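The same one-way ANOVA followed by a Tukey post hoc comparison can be run outside GraphPad; the sketch below uses SciPy and statsmodels on made-up group values, purely to illustrate the analysis described above.

```python
# Sketch: one-way ANOVA with Tukey post hoc test, analogous to the GraphPad
# Prism workflow described above. Group values are illustrative only.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

si_ctrl        = np.array([5.1, 6.0, 5.6])     # apoptosis (%) in si-CTRL cells (hypothetical)
si_st3gal3     = np.array([9.8, 10.5, 9.1])    # after ST3Gal3 knockdown (hypothetical)
si_st3gal3_ptx = np.array([21.4, 23.0, 22.1])  # knockdown plus paclitaxel (hypothetical)

f_stat, p_value = f_oneway(si_ctrl, si_st3gal3, si_st3gal3_ptx)
print(f"One-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

values = np.concatenate([si_ctrl, si_st3gal3, si_st3gal3_ptx])
groups = ["si-CTRL"] * 3 + ["si-ST3Gal3"] * 3 + ["si-ST3Gal3 + PTX"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```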
Results
Expression and activity of ST3Gal3 in ovarian cancer cell lines. To identify the relationship between ST3Gal3 and MAL II in ovarian cancer cell lines, the mRNA expression levels of ST3Gal3 were investigated. The results showed that the level of ST3Gal3 mRNA expression in HO8910PM cells was more than double that in SKOV3 cells (Fig. 1A). The protein expression levels of ST3Gal3 were also markedly higher in HO8910PM cells than in SKOV3 cells (Fig. 1B). MAL II binds to sialic acid in an α-2,3 linkage rather than an α-2,6 linkage, which is specific for the activity of ST3Gal3 (33). Biotinylated MAL II was used to investigate the levels of cell surface MAL II in ovarian cancer cell lines. In line with the expression levels of ST3Gal3, the α-2,3 sialic acid-linked terminal glycosylation was ~2-fold higher in HO8910PM cells than in SKOV3 cells (Fig. 1C and D). Collectively, the expression and activity of ST3Gal3 were significantly higher in HO8910PM cells than in SKOV3 cells.
ST3Gal3 affects paclitaxel-resistance in ovarian cancer cells. The aberrant expression and activity of ST3Gal3 in ovarian cancer cells prompted an investigation into
the relationship between the expression of ST3Gal3 and paclitaxel resistance. Paclitaxel treatments ranging from 0 to 320 ng/ml were used to induce cytotoxicity in SKOV3 and HO8910PM cells. The IC50 values of paclitaxel in SKOV3 cells and HO8910PM cells were calculated to be 57.90 and 130.61 ng/ml, respectively, indicating that the paclitaxel resistance of HO8910PM cells was >2-fold higher than that of SKOV3 cells (Fig. 2A). TUNEL assays were then used to examine paclitaxel-induced apoptosis in SKOV3 and HO8910PM cells. The results revealed that the apoptotic ratio was ~2-fold higher in SKOV3 cells than in HO8910PM cells following 48 h exposure to 50 ng/ml paclitaxel (Fig. 2B and C), indicating that paclitaxel resistance was higher in HO8910PM cells than in SKOV3 cells.
ST3Gal3 knockdown sensitizes ovarian cancer cells to paclitaxel-induced apoptosis.
To further investigate the role of ST3Gal3 in paclitaxel resistance, ST3Gal3 was silenced. siRNA was used to knock down ST3Gal3 expression in SKOV3 and HO8910PM cells, followed by treatment with 20 ng/ml paclitaxel for 48 h. The level of apoptosis was then quantified using flow cytometry. In HO8910PM cells, the knockdown of ST3Gal3 significantly increased the level of apoptosis (si-CTRL vs. si-ST3Gal3; Fig. 3A and B). The apoptosis ratio was ~3-fold higher in si-CTRL cells treated with paclitaxel than in si-CTRL cells without paclitaxel treatment. The apoptosis ratio was ~2-fold higher in si-ST3Gal3 cells than in si-CTRL cells after treatment with paclitaxel (Fig. 3A and B). In SKOV3 cells, the knockdown of ST3Gal3 also increased the level of apoptosis, but not significantly (si-CTRL vs. si-ST3Gal3). The apoptosis ratio was ~1.5-fold higher in si-ST3Gal3 cells plus paclitaxel compared with si-CTRL cells plus paclitaxel (Fig. 3C and D). In the absence of ST3Gal3, HO8910PM cells were no longer significantly more resistant to paclitaxel than SKOV3 cells (Fig. 3E). ST3Gal3 knockdown was thus found to increase paclitaxel chemosensitivity in both SKOV3 and HO8910PM cells.
ST3Gal3 knockdown increases paclitaxel-mediated activation of caspase-8/3 signaling. The mechanism of ST3Gal3-induced chemoresistance was investigated next. RT-qPCR and western blot results first confirmed that the si-ST3Gal3 siRNA significantly downregulated ST3Gal3 mRNA and protein levels in both HO8910PM and SKOV3 cells (Fig. 4). Paclitaxel treatment induced ST3Gal3 expression at the mRNA and protein level in HO8910PM cells (Fig. 4A and B). After the siRNA-induced downregulation of ST3Gal3, paclitaxel treatment further decreased the levels of ST3Gal3 mRNA in HO8910PM cells, rather than rescuing ST3Gal3 expression (Fig. 4A). The change in the protein level was similar (Fig. 4B). In addition, it was found that paclitaxel increased caspase-8 and caspase-3 protein expression levels in si-CTRL HO8910PM cells. Knockdown of ST3Gal3 alone only induced caspase-3 protein levels in HO8910PM cells compared with si-CTRL, while caspase-8 levels were not altered (Fig. 4B). Furthermore, si-ST3Gal3 knockdown plus paclitaxel treatment markedly elevated caspase-8 and caspase-3 protein levels in HO8910PM cells compared with si-ST3Gal3 knockdown alone (Fig. 4B).
By contrast, paclitaxel treatment did not alter the mRNA and protein expression levels of ST3Gal3 in si-CTRL SKOV3 cells or si-ST3Gal3 SKOV3 cells (Fig. 4C and D). It was also found that paclitaxel induced caspase-3 protein expression, but not caspase-8 expression, in si-CTRL SKOV3 cells. ST3Gal3 knockdown alone induced caspase-3 expression, but seemed to decrease caspase-8 expression, in SKOV3 cells compared with si-CTRL. Finally, ST3Gal3 knockdown plus paclitaxel treatment markedly increased both caspase-8 and caspase-3 protein levels in SKOV3 cells compared with si-ST3Gal3 knockdown alone (Fig. 4D). These results indicated that ST3Gal3 knockdown synergistically facilitated paclitaxel-mediated activation of caspase-8/3 signaling.
Discussion
Paclitaxel, a taxane-based drug, has been approved for use in a number of solid tumors for decades. However, serious side effects and chemoresistance are major obstacles that limit its clinical application, even though this drug has a remarkable response rate and increases the survival rate (34,35). Therefore, reversing chemoresistance to paclitaxel remains an important and urgent goal in cancer treatment. The present results demonstrated that the expression levels of ST3Gal3 were associated with paclitaxel resistance in ovarian cancer cell lines. Of note, paclitaxel treatment increased the mRNA and protein levels of ST3Gal3, while ST3Gal3 knockdown increased the level of paclitaxel-induced apoptosis in ovarian cancer cells. Thus, the results of the present study suggested that ST3Gal3 may be a potential target for improving paclitaxel-based chemotherapy.
The advances in glycoinformatics and glycoproteomics have provided the tools needed to probe and understand glycosylation in development and cancer biology (8,31). A number of sialyltransferases and sialylated glycoproteins have been identified as diagnostic or prognostic markers for ovarian cancer (8,36). The ascites volume has been shown to be correlated with the degree of sialylation in epithelial ovarian cancer, providing new insight into tumor progression and recurrence (8). Our previous study showed that the expression of sialyltransferase mRNA differs significantly among different ovarian cancer cell lines (31). In the present study, it was found that the expression of ST3Gal3 was higher in HO8910PM cells than in SKOV3 cells, indicating that more (α-2,3)-linked sialylation events may be catalyzed on the cell surface of HO8910PM cells. Aberrant sialylation has been associated with a number of processes, including adhesion, tumor progression and metastasis (7,37,38). In addition, ST3Gal3 and ST6Gal1 play roles in cisplatin resistance, as previously reported (30,31,39). However, the combination of cisplatin and paclitaxel serves as a standard chemotherapy regimen for patients with advanced ovarian cancer (40,41). Therefore, in the present study, the relationship between ST3Gal3 and paclitaxel resistance in ovarian cancer cell lines was investigated. The results showed that a higher expression level of ST3Gal3 conferred paclitaxel resistance to HO8910PM cells. Following siRNA depletion of ST3Gal3 in HO8910PM cells, a higher level of apoptosis was induced by paclitaxel compared with si-CTRL cells. There was no significant difference in paclitaxel-induced apoptosis between si-ST3Gal3 SKOV3 cells and si-ST3Gal3 HO8910PM cells. These results suggested that ST3Gal3 plays an important role in chemoresistance to paclitaxel in ovarian cancer cells.
The mechanism by which ST3Gal3 regulates paclitaxel resistance was also explored. At a mechanistic level, paclitaxel stabilizes microtubule polymers to inhibit mitotic spindle assembly, blocking cell division and triggering apoptosis (34,35). Paclitaxel resistance has been reported to be associated with microtubule dynamics, tubulin isotype expression and the mutation, or modification, of tubulin-/microtubule-regulatory proteins (34). The results of the present study suggested an alternative mechanism for paclitaxel resistance based on the following findings: i) paclitaxel treatment upregulated ST3Gal3 expression at both the mRNA and protein level in HO8910PM cells, but not in SKOV3 cells; ii) either paclitaxel treatment alone or ST3Gal3 knockdown induced caspase-3 expression in ovarian cancer cell lines; iii) paclitaxel alone induced caspase-8 expression in HO8910PM cells, but not in SKOV3 cells; iv) ST3Gal3 knockdown did not affect caspase-8 expression in HO8910PM cells, and even reduced caspase-8 expression in SKOV3 cells; and v) the combination of paclitaxel treatment and ST3Gal3 knockdown markedly increased caspase-8 and caspase-3 expression in ovarian cancer cell lines. These results indicated that ST3Gal3 knockdown activated caspase-3, rather than caspase-8, signaling; however, paclitaxel-induced caspase signaling depended on the cell type involved. Therefore, ST3Gal3 knockdown may directly and synergistically facilitate the paclitaxel-mediated activation of caspase-3 signaling, while the activation of caspase-8 may be dependent on the activation of caspase-3.
In conclusion, the results of the present study suggested an alternative mechanism for paclitaxel-associated chemoresistance in ovarian cancer cells. They also indicated that aberrant ST3Gal3 expression may serve as a diagnostic and prognostic marker, and as a potential chemotherapeutic target, for ovarian cancer. Future studies should be directed towards the development of sialyltransferase inhibitors; for example, high-affinity lectins may be used to block sialylated sites, or gene modification may be employed to downregulate the expression and function of sialyltransferases, with the aim of ameliorating resistance to paclitaxel-based chemotherapies.
LotuS2: an ultrafast and highly accurate tool for amplicon sequencing analysis
Background Amplicon sequencing is an established and cost-efficient method for profiling microbiomes. However, many available tools to process this data require both bioinformatics skills and high computational power to process big datasets. Furthermore, there are only few tools that allow for long read amplicon data analysis. To bridge this gap, we developed the LotuS2 (less OTU scripts 2) pipeline, enabling user-friendly, resource-friendly, and versatile analysis of raw amplicon sequences. Results In LotuS2, six different sequence clustering algorithms as well as extensive pre- and post-processing options allow for flexible data analysis by both experts, where parameters can be fully adjusted, and novices, where defaults are provided for different scenarios. We benchmarked three independent gut and soil datasets, where LotuS2 was on average 29 times faster compared to other pipelines, yet could better reproduce the alpha- and beta-diversity of technical replicate samples. Further benchmarking a mock community with known taxon composition showed that, compared to the other pipelines, LotuS2 recovered a higher fraction of correctly identified taxa and a higher fraction of reads assigned to true taxa (48% and 57% at species; 83% and 98% at genus level, respectively). At ASV/OTU level, precision and F-score were highest for LotuS2, as was the fraction of correctly reported 16S sequences. Conclusion LotuS2 is a lightweight and user-friendly pipeline that is fast, precise, and streamlined, using extensive pre- and post-ASV/OTU clustering steps to further increase data quality. High data usage rates and reliability enable high-throughput microbiome analysis in minutes. Availability LotuS2 is available from GitHub, conda, or via a Galaxy web interface, documented at http://lotus2.earlham.ac.uk/. Supplementary Information The online version contains supplementary material available at 10.1186/s40168-022-01365-1.
Background
The field of microbiome research has been revolutionized in the last decade, owing to methodological advances in DNA-based microbial identification. Amplicon sequencing (also known as metabarcoding) is one of the most commonly used techniques to profile microbial communities based on targeting and amplifying phylogenetically conserved genomic regions such as the 16S/18S ribosomal RNA (rRNA) or internal transcribed spacers (ITS) for identification of bacteria and eukaryotes (especially fungi), respectively [1,2]. The popularity of amplicon sequencing has been growing due to its broad applicability, ease-of-use, cost-efficiency, streamlined analysis workflows as well as specialist applications such as low biomass sampling [3].
Alas, amplicon sequencing comes with several technical challenges. These include primer biases [4], chimeras occurring in PCR amplifications [5], rDNA copy number variations [6], and sequencing errors that frequently inflate observed diversity [7]. Although modern read error correction can already significantly decrease artifacts of sequencing errors [8], some of the biases can be further corrected in the pre- and post-processing of reads and OTUs/ASVs, respectively. To process amplicon sequencing data from raw reads to taxon abundance tables, several pipelines have been developed, such as mothur [9], QIIME 2 [10], DADA2 [8], PipeCraft 2 [11], and LotuS [12]. These pipelines differ in their data processing and sequence clustering strategies, reflected in differing execution speed and resulting amplicon interpretations [12,13].
Here, we introduce LotuS2, designed to improve the reproducibility, accuracy, and ease of amplicon sequencing analysis. LotuS2 offers a completely refactored installation, including a web interface that is freely deployable on Galaxy clusters. During development, we focused on all steps of amplicon data analysis, including processing raw reads to abundance tables as well as improving taxonomic assignments and phylogenies of operational taxonomic units (OTUs) [14] or amplicon sequence variants (ASVs) [15] at the highest quality with the latest strategies available.
Pre- and post-processing steps were further improved compared to the predecessor "LotuS1": the read filtering program sdm (simple demultiplexer) and the taxonomy inference program LCA (least common ancestor) were refactored and parallelized in C++. LotuS2 uses a 'seed extension' algorithm that improves the quality and length of OTU/ASV representative DNA sequences. We integrated numerous features such as additional sequence clustering options (DADA2, UNOISE3, VSEARCH and CD-HIT), advanced read quality filters based on probabilistic and Poisson binomial filtering, and curation of ASV/OTU diversity and abundances (LULU, UNCROSS2, ITSx, and host DNA filters). LotuS2 can also be integrated into complete workflows. For instance, the microbiome visualization-centric pipeline CoMA [16] uses LotuS1/2 at its core to estimate taxon abundances.
Here, we evaluated LotuS2 in reproducing microbiota profiles in comparison to contemporary amplicon sequencing pipelines. Using three independent datasets, we found that LotuS2 consistently reproduces microbiota profiles more accurately and reconstructs a mock community with the highest overall precision.
Design philosophy of LotuS2
Overestimating observed diversity is one of the central problems in amplicon sequencing, mainly due to sequencing errors [7,17]. The second read pair from Illumina paired-end sequencing is generally lower in quality [18] and can contain more errors than predicted from Phred quality scores alone [19,20]. Additionally, merging reads can introduce chimeras due to read pair mismatches [21]. The accumulation of errors over millions of read pairs can impact observed biodiversity, which is essentially a multiple testing problem. To avoid overestimating biodiversity, LotuS2 uses a relatively strict read filter during the error-sensitive sequence clustering step. This is based on (i) 21 quality filtering metrics (e.g., average quality, homonucleotide repeats, and removal of reads without amplicon primers), (ii) probabilistic and Poisson binomial read filtering [18,22], (iii) filtering reads that cannot be dereplicated (clustered at 100% nucleotide identity) either within or between samples, and (iv) using only the first read pair from paired-end Illumina sequencing platforms. These reads are termed "high-quality" reads in the pipeline description and are clustered into OTUs/ASVs using one of the sequence clustering programs (Fig. 1B).
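To make the probabilistic filtering idea in point (ii) concrete, the sketch below computes the per-read expected number of errors from Phred scores and discards reads above a threshold. This illustrates the general principle rather than the exact sdm implementation, and the threshold used here is an assumed example value.

```python
# Sketch of expected-error read filtering from Phred quality scores.
# Illustrates the principle behind probabilistic filtering, not the exact
# algorithm or thresholds used by LotuS2's sdm read filter.

def expected_errors(phred_scores):
    """Sum of per-base error probabilities implied by Phred scores."""
    return sum(10 ** (-q / 10.0) for q in phred_scores)

def passes_filter(phred_scores, max_expected_errors=1.0):
    """Keep a read only if its expected number of errors is small."""
    return expected_errors(phred_scores) <= max_expected_errors

# Example: a short read with mostly high-quality bases and one poor base.
read_quals = [38, 37, 36, 35, 12, 36, 37, 38]
print(expected_errors(read_quals), passes_filter(read_quals))
```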
However, filtered out "mid-quality" sequences are partly recovered later in the pipeline, during the seed extension step. LotuS2 will reintroduce reads failing dereplication thresholds or being of "mid-quality" by mapping these reads back onto high-quality OTUs/ASVs if matching at ≥ 97% sequence identity. In the "seed extension" step, the optimal sequence representing each OTU/ASV is determined by comparing all (raw) reads clustered into each OTU/ASV. The best read (pair) is then selected based on the highest overall similarity to the consensus OTU/ASV, quality, and length, which can then be merged in case of paired read data. Thereby, the seed extension step enables more reads to be included in taxon abundance estimates, as well as enabling longer ASV/OTU representative sequences to be used during taxonomic classifications and the reconstruction of a phylogenetic tree.
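As a toy illustration of the seed extension idea, the sketch below chooses one representative read per cluster by scoring similarity to the cluster consensus, mean base quality, and length. The scoring weights are arbitrary placeholders; the actual selection criteria in LotuS2 differ in detail.

```python
# Toy illustration of the seed-extension idea: among all reads clustered into one
# OTU/ASV, pick the read that best balances similarity to the cluster consensus,
# base quality, and length. The weights below are arbitrary, for illustration only.

def similarity(a, b):
    """Fraction of matching positions over the shorter of two sequences."""
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a[:n], b[:n])) / n

def pick_seed(reads, consensus):
    """reads: list of (sequence, mean_quality) tuples; returns the best seed read."""
    def score(read):
        seq, mean_q = read
        return similarity(seq, consensus) + 0.01 * mean_q + 0.001 * len(seq)
    return max(reads, key=score)

reads = [("ACGTACGTAA", 35.0), ("ACGTACGTAT", 38.5), ("ACGTACG", 39.0)]
print(pick_seed(reads, consensus="ACGTACGTAT"))
```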
Implementation of LotuS2

Installation
LotuS2 can be accessed through major software repositories such as (i) Bioconda, (ii) a Docker image, or (iii) GitHub (accessible through http://lotus2.earlham.ac.uk/) (Fig. 1A). The GitHub version comes with an installer script that downloads the required databases and installs and configures LotuS2 with its dependencies. Alternatively, we provide (iv) a wrapper for Galaxy [23], allowing installation of LotuS2 on any Galaxy server from the Galaxy ToolShed. LotuS2 is already available to use for free on the UseGalaxy.eu server (https://usegalaxy.eu/), where raw reads can be uploaded and analysed (Supplementary Figure S1). While LotuS2 is natively programmed for Unix (Linux, macOS) systems, other operating systems are supported through the Docker image or the Galaxy web interface.
Input
LotuS2 is designed to run with a single command, where the only essential flags are the path to input files (fastq(.gz), fna(.gz) format), output directory, and mapping file. The mapping file contains information on sample identifiers, demultiplexing barcodes, or file paths to already demultiplexed files and can be either automatically generated or provided by the user. The sequence input is flexible, allowing simultaneous demultiplexing of read files and/or integration of already demultiplexed reads.

Fig. 1 Workflow of the LotuS2 pipeline. A LotuS2 can be installed either through (i) Bioconda, (ii) GitHub with the provided autoInstaller script, or (iii) using a Docker image. Alternatively, (iv) Galaxy web servers can also run LotuS2 (e.g., https://usegalaxy.eu/). B LotuS2 accepts amplicon reads from different sequencing platforms, along with a map file that describes barcodes, file locations, sample IDs, and other information. After demultiplexing and quality filtering, high-quality reads are clustered into either ASVs or OTUs. The optimal sequence representing each OTU/ASV is calculated in the seed extension step, where read pairs are also merged. Mid-quality reads are subsequently mapped onto these sequence clusters to increase cluster representation in abundance matrices. From OTU/ASV sequences, a phylogenetic tree is constructed, and each cluster is taxonomically assigned. These results are made available in multiple standard formats, such as tab-delimited files, .biom, or phyloseq objects, to enable downstream analysis. New options in LotuS2 for each step are denoted in black, whereas options in grey font were already available in LotuS.
Output
The primary output is a set of tab-delimited OTU/ASV count tables, the phylogeny of OTUs/ASVs, their taxonomic assignments, and corresponding abundance tables at different taxonomic levels. These are summarized in .biom [25] and phyloseq [26] objects, which can be loaded directly by other software, such as the R and Python programming languages, for downstream analysis.
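For example, an output table in .biom format can be read into Python via the biom-format package; the file name below is a placeholder, as the exact output file names depend on the run.

```python
# Sketch: loading a LotuS2 .biom output table into Python for downstream analysis.
# The file path is a placeholder; adjust it to the actual output directory.
import biom

table = biom.load_table("lotus2_output/OTU.biom")   # hypothetical file name
counts = table.to_dataframe(dense=True)             # features x samples pandas DataFrame
print(counts.shape)
print(counts.iloc[:5, :5])
```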
Furthermore, a detailed report of each processing step can be found in the log files, which contain commands of all used programs (including citations and versions) with relevant statistics. We support and encourage users to conduct further analysis in statistical programming languages such as R, Python, or MATLAB and using analysis packages such as phyloseq [26], documented in tutorials at http://lotus2.earlham.ac.uk/.
In the "seed extension" step, a unique representative read of a sequence cluster is chosen, based on quality and merging statistics. Each sequence cluster, termed ASVs in the case of DADA2, OTUs otherwise 1 , is represented by a high confidence DNA sequence (see Design Philosophy of LotuS2 for more information).
OTUs/ASVs are further post-processed to remove chimeras, either de novo and/or reference-based, using the program UCHIME3 [31] or VSEARCH-UCHIME [30]. By default, ITS sequences are extracted using ITSx [32]. Highly resolved OTUs/ASVs are then curated based on sequence similarity and co-occurrence patterns using LULU [24]. False-positive OTU/ASV counts can be filtered using the UNCROSS2 algorithm [33]. OTUs/ASVs are by default aligned against the phiX genome, a synthetic genome often included in Illumina sequencing runs, using Minimap2 [34], and OTUs/ASVs that produce significant matches against the phiX genome are subsequently removed. Additionally, the user can filter for host contamination by providing custom genomes (e.g., human reference), as host genome reads are often misclassified as bacterial 16S by existing pipelines [3].
Each OTU/ASV is taxonomically classified using one of RDP classifier [35], SINTAX [36], or by alignments to reference database(s), using the custom "LCA" (least common ancestor) C++ program. Alignments of OTUs/ASVs with either Lambda [37], BLAST [38], VSEARCH [30], or USEARCH [39] are compared against a user-defined range of reference databases. These databases cover the 16S, 18S, 23S, 28S rRNA genes, and the ITS region; by default, a Lambda alignment against the SILVA database is used [40]. Other databases bundled with LotuS2 include Greengenes [41], HITdb [42], PR2 [43], beetax (bee gut-specific taxonomic annotation) [44], and UNITE (fungal ITS database) [45]. In addition, users can provide reference databases (a fasta file and a tab-delimited taxonomy file, see "-refdb" flag documentation in the LotuS2 help). These databases can be used by themselves or in conjunction with the bundled ones. From mappings against one or several reference databases, the least common ancestor for each OTU/ASV is calculated using LCA. Priority is given to deeply resolved taxonomies, sorted by the earlier listed reference databases. LotuS2 can also be used to analyse amplicons from other phylogenetically conserved genomic regions (e.g., Cytochrome c oxidase subunit I (COI) or dissimilatory sulfite reductase (dsr)). For these cases, users have to provide custom reference databases and taxonomic assignments (via -refdb flag, see above). For inferring phylogenetic trees, multiple sequence alignments for all OTUs/ASVs are calculated with either MAFFT [46] or Clustal Ω [47]; from these a maximum likelihood phylogeny is constructed using either fasttree2 [48] or IQ-TREE 2 [49]. User discretion is advised, as ITS amplicons might be less suitable for inferring reliable phylogenies.
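The least-common-ancestor assignment described above can be illustrated with a few lines of code: given the taxonomic lineages of the best database hits for one OTU/ASV, the assignment is truncated to the deepest rank on which the hits still agree. This is a simplified sketch of the concept, not the LCA C++ program itself, which additionally weights hits by identity and database priority.

```python
# Simplified least-common-ancestor (LCA) assignment over taxonomic lineages.
# Real LCA implementations also account for alignment identity and database order.

def lca(lineages):
    """Return the deepest taxonomic prefix shared by all hit lineages."""
    consensus = []
    for ranks in zip(*lineages):        # iterate rank by rank across all hits
        if len(set(ranks)) == 1:        # all hits agree at this rank
            consensus.append(ranks[0])
        else:
            break                       # stop at the first disagreement
    return consensus

hits = [
    ["Bacteria", "Firmicutes", "Bacilli", "Lactobacillales", "Lactobacillaceae", "Lactobacillus"],
    ["Bacteria", "Firmicutes", "Bacilli", "Lactobacillales", "Lactobacillaceae", "Limosilactobacillus"],
]
print(lca(hits))  # -> assignment truncated at family level: Lactobacillaceae
```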
ITS amplicons were clustered with CD-HIT, UPARSE, and VSEARCH and filtered by default using ITSx [32] in LotuS2. ITSx identifies likely ITS1, 5.8S, and ITS2 and full-length ITS sequences, and sequences not within the confidence interval are discarded in LotuS2. In analogy, QIIME 2-DADA2 uses q2-ITSxpress [51] that also removes unlikely ITS sequences.
Error profiles during ASV clustering were inferred separately for the samples sequenced in different MiSeq runs during DADA2 and Deblur clustering in all pipelines. We truncated the reads to the same length (200 bases, the LotuS2 default) in all pipelines while analysing the datasets. Primers were removed from the reads where supported by the pipeline in question.
Measuring computational performance of amplicon sequencing pipelines
When benchmarking pipelines, processing steps were separated into five categories in each tested pipeline: (a) pre-processing (demultiplexing if required, read filtering, primer removal, and read merging for QIIME 2-Deblur); (b) sequence clustering (clustering and refining of the clusters, and denoising for QIIME 2-DADA2); (c) OTU/ASV taxonomic assignment; (d) construction of a phylogenetic tree (available only in mothur, QIIME 2, and LotuS2 and applied only for the 16S datasets); and (e) removal of host genome reads (available only in QIIME 2 and LotuS2). In mothur, sequence clustering and taxonomic assignment times were added together, since these pipeline commands are entangled (https://mothur.org/wiki/miseq_sop/).
Data used in benchmarking pipeline performance
Four datasets with different sample characteristics (with respect to, e.g., compositional complexity, target marker and region, and amplicon length) were analyzed: (i) Gut-16S dataset [12]: 16S rRNA gene amplicon sequencing of 40 human faecal samples in technical replicates that were sequenced in separate MiSeq runs, totalling 35,412,313 paired-end reads. Technical replicates were created by extracting DNA twice from each faecal sample. Primer sequences were not available for this dataset [12]. Since the Illumina runs were not demultiplexed, pipelines had to demultiplex these sequences, as applicable (please see the Computational performance and data usage section for further details). (ii) Soil-16S dataset: 16S rRNA gene amplicon sequencing of two technical replicates (a single DNA extraction per sample) from 50 soil samples, sequenced in separate MiSeq runs, totalling 11,820,327 paired-end reads. PCR reactions were conducted using the 16S rRNA primers 515F (GTGYCAGCMGCCGCGGTAA) and 926R (GGCCGYCAATTYMTTTRAGTTT). The soil-16S dataset was already demultiplexed, requiring pipelines to work with paired FASTQ files per sample. (iii) Soil-ITS dataset: ITS amplicon sequencing of 50 technical replicates of soil samples (a single DNA extraction per sample), sequenced in two independent Illumina MiSeq runs, totalling 6,006,089 paired-end reads. The ITS region primers gITS7ngs_201 (GGGTGARTCATCRARTYTTTG) and ITS4ngsUni_201 (CCTSCSCTTANTDATATGC) [52] were used to amplify DNA extracted from soil samples. The soil-ITS dataset was already demultiplexed.
(iv) Mock dataset [53]: This was a microbial mock community with known species composition, mock-16 [53]. The mock dataset comprised a total of 59 strains of Bacteria and Archaea, representing 35 bacterial and 8 archaeal genera. The mock community was sequenced on an Illumina MiSeq (paired-end) by targeting the V4 region of the 16S rRNA gene using the primers 515F (GTGCCAGCMGCCGCGGTAA) and 806R (GGACTACHVGGGTWTCTAAT) [53]. This dataset was demultiplexed and contained 593,868 paired reads.
Benchmarking the computational performance of amplicon sequencing pipelines
To evaluate the computational performance of LotuS2 in comparison to mothur, QIIME 2 [10], DADA2 [8], and the last released version of LotuS [12] (v1.62 from Jan 2020; called LotuS1 here), all pipelines were run with 12 threads on a single computer free of other workloads (CPU: Intel(R) Xeon(R) Gold 6130 CPU @ 2.10 GHz, 32 cores, 375 GB RAM). To reduce the influence of network latencies on pipeline execution, all temporary, input, and output data were stored on a local SSD hard drive. PipeCraft 2 is not designed for high performance computing cluster execution (https://pipecraft2-manual.readthedocs.io/en/stable/installation.html#windows) and was therefore excluded from computational performance benchmarking; however, running the gut-16S and soil-16S datasets with default options and 6 cores where possible on a laptop took > 8 h (excluding the demultiplexing step) and > 24 h, respectively.
The remaining pipelines were run three times consecutively to account for pre-cached data and to obtain average execution time and maximum memory usage. To calculate the fold differences in execution speed between pipelines, the average time of QIIME 2, mothur, and DADA2 to complete the analysis was divided by the average time by all LotuS2 runs (using different clustering options). The average of these numbers across the gut-16S, soil-16S, and soil-ITS datasets was used to estimate the average speed advantage of LotuS2.
Benchmarking reproducibility of amplicon sequencing pipelines
Technical replicates of the soil and gut samples were used to estimate the reproducibility of the microbial community composition between replicates. This was measured by calculating beta and alpha diversity differences between technical replicate samples. To calculate beta diversity, either Jaccard distance (measuring presence/absence of OTUs/ASVs) or Bray-Curtis dissimilarity (measuring both presence/absence and abundances of OTUs/ASVs) was computed between technical replicate samples. Before computing Bray-Curtis distances, abundance matrices were normalized. Jaccard distances between samples were calculated by first rarefying abundance matrices to an equal number of reads (to the size of the first sample having > 1000 read counts) per sample using RTK [54]. Significance of pairwise comparisons of the pipelines in beta diversity differences was calculated using ANOVA, with Tukey's HSD (honestly significant difference) test as a post hoc test in R.
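To illustrate the two distance measures above, the sketch below computes Bray-Curtis and Jaccard dissimilarities between two replicate count vectors with SciPy; the counts are invented, and in the benchmark the abundance matrices were normalized or rarefied first as described.

```python
# Sketch: Bray-Curtis and Jaccard dissimilarity between two technical replicates.
# Counts are illustrative; normalize/rarefy real abundance matrices beforehand.
import numpy as np
from scipy.spatial.distance import braycurtis, jaccard

replicate_a = np.array([120, 30, 0, 55, 3])
replicate_b = np.array([110, 25, 2, 60, 0])

bcd = braycurtis(replicate_a, replicate_b)       # uses relative abundances
jd = jaccard(replicate_a > 0, replicate_b > 0)   # uses presence/absence only
print(f"Bray-Curtis: {bcd:.3f}, Jaccard: {jd:.3f}")
```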
To calculate alpha diversity, abundance data were first rarefied to an equal number of reads per sample. Significance of each pairwise comparison in alpha diversity was calculated based on a paired Wilcoxon test, pairing technical replicates.
Analysis of the mock community
We used an already sequenced mock community [53] of known relative composition and with sequenced reference genomes available. Firstly, taxonomic abundance tables (taxonomic assignments based on SILVA 138.1 [40] in all pipelines) were compared to the expected taxonomic composition of the sequenced mock community. Precision was calculated as TP/(TP + FP), recall as TP/(TP + FN), and F-score as (2*precision*recall)/(precision + recall), TP (true positive) being taxa present in the mock and correctly identified as present, FN (false negative) being taxa present in the mock but not identified as present, and FP (false positive) being taxa absent from the mock but identified as present. The fraction of read counts assigned to true positive taxa was calculated based on the sum of the relative abundances of all true positive taxa. These scores were calculated at species and genus levels.
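These scores follow directly from the expected and observed taxon sets, as in the short sketch below; the genus names are placeholders rather than the actual mock community members.

```python
# Sketch: precision, recall, and F-score from expected vs. observed taxa.
# Genus names are placeholders, not the actual mock community composition.
expected = {"Bacillus", "Clostridium", "Methanobrevibacter", "Escherichia"}
observed = {"Bacillus", "Clostridium", "Escherichia", "Pseudomonas"}

tp = len(expected & observed)   # true positives: present and reported
fp = len(observed - expected)   # false positives: reported but absent
fn = len(expected - observed)   # false negatives: present but missed

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f_score = 2 * precision * recall / (precision + recall)
print(precision, recall, f_score)
```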
Secondly, we investigated the precision of reported 16S rRNA nucleotide sequences, representing each OTU or ASV, by calculating the nucleotide similarity between ASVs/OTUs and the known reference 16S rRNA sequences. To obtain the nucleotide similarity, we aligned ASV/OTU DNA sequences from the tested pipelines via BLAST to a custom reference database that contained the 16S rRNA gene sequences from the mock community (https://github.com/caporaso-lab/mockrobiota/blob/master/data/mock-16/source/expected-sequences.fasta), using the -taxOnly option from LotuS2. The BLAST % nucleotide identity at > 50% horizontal OTU/ASV sequence coverage was subsequently used to determine the best matching 16S rRNA sequence per ASV/OTU.
Results
We analyzed four datasets to benchmark the computational performance and reliability of the pipelines. The datasets consisted either of technical replicates (gut-16S, soil-16S, and soil-ITS) or a mock community. Technical replicates were used to evaluate the reproducibility of community structures and were chosen to represent different biomes (gut and soil) using different 16S rRNA amplicon primers (gut-16S and soil-16S), or ITS sequences (soil-ITS) as well as a synthetic mock community of known composition.
Computational performance and data usage
The complete analysis of the gut-16S dataset was fastest in LotuS2 (on average 35, 12, 9, and 3.8 times faster than mothur, QIIME 2-DADA2, QIIME 2-Deblur, and native DADA2, respectively, Fig. 2A). Note that since DADA2 could not demultiplex the dataset, the average of LotuS2 and QIIME 2 demultiplexing times was used instead. LotuS2 was also faster in the analysis of the soil-16S dataset compared to the other tested pipelines (5.7, 3.5, and 3.5 times faster than DADA2, QIIME 2-DADA2, and QIIME 2-Deblur, respectively, Fig. 2B). The difference in speed between LotuS2 and QIIME 2 was more pronounced in the analysis of the soil-ITS dataset, where LotuS2 was on average 69 times faster than QIIME 2 and DADA2 (Fig. 2C).

Fig. 2 Computational performance of amplicon sequencing pipelines. 16S rRNA amplicon MiSeq data from A gut-16S, B soil-16S, and C soil-ITS samples were processed to benchmark resource usage of each pipeline, run on the same system under equal conditions (12 cores, max 150 Gb memory). In all pipelines, OTUs/ASVs were classified by similarity comparisons to SILVA 138.1. In LotuS2, Lambda was used to align sequences for all clustering algorithms. Pipeline runs were separated by common steps (pre-processing, sequence clustering, taxonomic classification, and phylogenetic tree construction and/or off-target removal). Because native DADA2 cannot demultiplex reads, we used the average demultiplexing time of QIIME 2 and LotuS2 (LotuS2-demultiplexed, unfiltered reads were provided to DADA2). Since phylogenetic trees based on ITS sequences may lead to erroneous phylogenies [55], we did not include the phylogenetic tree construction step in the analysis of the soil-ITS dataset. LotuS2 runs are labelled in red. D, E, F Data usage efficiency of each tested pipeline, comparing the number of sequence clusters (OTUs or ASVs) to retrieved read counts in the final output matrix of each pipeline. Note that mothur results for soil-16S are not shown, because the pipeline rejected all sequences at the default parameters.
LotuS2's speed advantage over the other pipelines was larger for the gut-16S dataset (on average 15 times faster) than for the soil dataset (on average 4.2 times faster). This difference stems mainly from the demultiplexing step, where LotuS2 is significantly faster. The sequence clustering step was fastest using the UPARSE algorithm, with an average 60-fold faster run time than sequence clustering in the other pipelines. Averaged over these three datasets, LotuS2 was 29 times faster than the other pipelines.
Taxonomic classification of OTUs/ASVs was also faster in LotuS2 (~ 5 times faster for gut-16S and 2 times for soil-16S). However, this strongly depends on the total number of OTUs/ASVs for all pipelines. For example, the default naïve-Bayes classifier [56] in QIIME 2 is faster than the LotuS2 taxonomic assignment in this benchmark (using Lambda LCA against the SILVA reference database). Nevertheless, LotuS2 also offers taxonomic classifications via RDP classifier [35] or SINTAX [36], both of which are significantly faster.
Compared to LotuS1, LotuS2 was on average 3.2 times faster, likely related to refactored C++ programs that can take advantage of multiple CPU threads ( Fig. 2A, B). In its fastest configuration (using "UPARSE" option in clustering and "RDP" to assign taxonomy), the gut and soil 16S rRNA datasets can be processed with LotuS2 in under 20 min and 12 min, respectively, using < 10 GB of memory and 4 CPU cores.
Despite using similar clustering algorithms (e.g., DADA2 clustering is available in DADA2, QIIME 2, and LotuS2), the tested pipelines apply different pre- and post-processing algorithms to raw sequence reads and clustered ASVs and OTUs, leading to differing ASV/OTU numbers and retrieved reads (the total read count in the ASV/OTU abundance matrix) (Supplementary Table S1 and Fig. 2D-F). DADA2 typically estimated the highest number of ASVs, but the number of retrieved reads varied strongly between datasets. QIIME 2-DADA2 estimated fewer ASVs than DADA2, but more ASVs than LotuS2-DADA2, while mapping fewer reads than LotuS2. Although retrieving a smaller number of reads, QIIME 2-Deblur reported comparable numbers of ASVs to LotuS2, despite the differences in clustering algorithms. PipeCraft 2 using VSEARCH clustering retrieved a slightly higher number of reads in the final output matrix than LotuS2-VSEARCH, but it also reported a considerably higher number of OTUs (Supplementary Figure S2). mothur performed differently in the gut-16S and soil-16S datasets, where it either estimated the highest number of OTUs or could not complete the analysis because all reads had been filtered out, respectively. Overall, LotuS2 often reported the fewest ASVs/OTUs, while including more sequence reads in abundance tables. This indicates that LotuS2 makes more efficient use of the input data while covering a larger sequence space per ASV/OTU.
Benchmarking the reproducibility of community compositions
Next, we assessed the reproducibility of community compositions between pipelines analysing the gut-16S, soil-16S, and soil-ITS datasets. This was estimated by comparing beta diversity between technical replicates (Bray-Curtis distance, BCd, and Jaccard distance, Jd). We found that Jd and BCd were lowest in LotuS2, largely independent of the chosen sequence clustering algorithm and dataset. This indicates a greater reproducibility of community compositions generated by LotuS2 (Fig. 3A, B and Supplementary Figure S2). The lowest BCd and Jd were overall observed for LotuS2-UPARSE (Fig. 3A, B and Supplementary Figure S2) in both gut- and soil-16S datasets, though this was not always significant between different LotuS2 runs (Supplementary Table S2).
Even using the same clustering algorithm, LotuS2-DADA2 compositions were more reproducible compared to both QIIME 2-DADA2 and DADA2 (significant only on soil data). LotuS2-DADA2 denoises by default all reads (per sequencing run) together, while in the default DADA2 setup each sample is denoised separately; the latter strategy has a reduced computational burden but can potentially miss sequence information from rare taxa. Also, LotuS2-VSEARCH compositions were more reproducible than PipeCraft 2-VSEARCH, except in the Jd between the replicates of the soil-16S dataset. mothur showed poorer performance compared to other pipelines on the gut-16S dataset and did not give results for the soil-16S dataset.
We then calculated the fraction of samples being closest in BCd distance to its technical replicate for each pipeline (Fig. 3D, E), simulating the process of identifying technical replicates without prior knowledge. While LotuS1 resulted in the highest fraction of samples being closest to its replicate among all samples in the gut-16S dataset, it performed the worst in the soil-16S dataset.
On the other hand, in the mothur results, technical replicates were the least likely to be closest to their technical replicate. LotuS2 with UNOISE3 clustering resulted in the highest fraction of samples being closest to their replicate in the soil-16S dataset. When this comparison was made with non-default options in LotuS2 (using different dereplication parameters, deactivating LULU, using UNCROSS2 or retaining taxonomically unclassified reads), BCd between the technical replicates remained largely unchanged, especially in the soil-16S dataset (Supplementary Figure S2). Lastly, we calculated the reproducibility of reported alpha diversity between technical replicate samples in both gut-16S and soil-16S datasets (Supplementary Figure S6A, B). In both datasets, LotuS2 alpha diversity was not significantly different between technical replicates, as expected (6 of 8 comparisons, Wilcoxon signed-rank test). Although this was also the case for PipeCraft 2, mothur, QIIME 2, and DADA2 showed significant differences in alpha diversity between technical replicates in 6 of 6 cases.
Thus, LotuS2 showed in our benchmarks a higher data usage efficiency and higher reproducibility of community compositions than mothur, PipeCraft 2, QIIME 2, and DADA2. These benchmarks also showed the importance of pre- and post-processing raw reads and OTUs/ASVs, since LotuS2-DADA2 and QIIME 2-DADA2 performed better than DADA2, despite using the same clustering algorithm. LotuS2-VSEARCH also performed better than PipeCraft 2-VSEARCH.

Fig. 3 D-F: The fraction of technical replicates being closest to each other (BCd) was calculated to simulate identifying technical replicates without additional knowledge. Numbers above bars are the ordered pipelines performing best. Lower Bray-Curtis distances between technical replicates and a higher fraction of correct technical replicates indicate better reproducibility. LotuS2 runs are labelled in red.
Benchmarking the soil-ITS dataset
Compared to 16S rRNA gene amplicons, ITS amplicons typically vary more in length [4], thus requiring a different sequence clustering workflow; LotuS2 in ITS mode uses by default CD-HIT to cluster ITS sequences, and ITSx to identify plausible ITS1/2 sequences.
In terms of data usage, both LotuS2 and QIIME 2-DADA2 retrieved similar numbers of reads, but for QIIME 2 these read counts were distributed across twice the number of ASVs (Fig. 2F). QIIME 2-DADA2 reproduced the fungal composition in replicate samples significantly worse than LotuS2-UPARSE, having higher pairwise BCd (Fig. 3C) and Jd (Supplementary Figure S2H, I). However, it had the highest fraction of samples closest to their technical replicate, although this fraction was overall very high for all pipelines (0.978-1) (Fig. 3F). DADA2 showed poor performance in comparison to the other pipelines, resulting in the lowest data usage efficiency (Fig. 2F) (yielding the highest number of ASVs and lowest retrieved read counts) and the lowest reproducibility (highest BCd) (Fig. 3C, Supplementary Table S2) between replicate samples. LotuS2 had overall the lowest BCd and Jd between replicates, using both UPARSE and CD-HIT clustering (Fig. 3C, Supplementary Figure S2H, I). The use of CD-HIT in combination with ITSx led to increased OTU numbers (from 947 to 1008), although read counts remained mostly the same in the final output matrix and BCd was largely similar (Supplementary Figure S3C). Here, deactivating LULU slightly decreased reproducibility (Supplementary Figure S3C).
Finally, we calculated the reproducibility of alpha diversity between the technical replicate samples in the soil-ITS dataset (Supplementary Figure S6C). All pipelines resulted in no significant difference between the technical replicate samples, thus alpha diversity was reproducible in all pipelines.
Benchmarking the dataset from the mock microbial community
To assess how well a known community can be reconstructed in LotuS2, we used a previously sequenced 16S mock community [53] containing 43 genera and 59 microbial strains, where complete reference genomes were available.
All pipelines performed poorly at reconstructing the community composition (Pearson R = 0.43-0.67, Spearman Rho = 0.54-0.80, Supplementary Table S3 and Supplementary Figure S7), possibly related to PCR biases and rRNA gene copy number variation. Therefore, we focused on the number of correctly identified taxa. For this, we calculated the number of reads assigned to true taxa as well as precision, recall, and F-score at genus level. LotuS2-VSEARCH and LotuS2-UPARSE had the highest precision, F-score, and fraction of reads assigned to true positive taxa (Fig. 4A and Supplementary Figure S8). LotuS1 had the highest recall, but low precision. When applying the same tests at species level, LotuS2-DADA2 had overall the highest precision and F-score (Supplementary Figure S9). QIIME 2-Deblur had often competitive, but slightly lower, precision, recall, and F-scores compared to LotuS2, while mothur, PipeCraft 2-VSEARCH, QIIME 2-DADA2, and DADA2 scores were lower (Fig. 4A).
Next, we investigated which software could best report the correct OTU/ASV sequences. For this, we calculated the fraction of TP OTUs/ASVs (i.e., OTUs/ASVs assigned to a species based on the custom mock reference taxonomy) with 97-100% nucleotide identity to 16S rRNA sequences from reference genomes in each pipeline (Fig. 4B). Here, the OTU sequences reported by LotuS2-VSEARCH and LotuS2-UPARSE were most often identical to the expected sequences, with 82.2% of the OTU sequences at 100% nucleotide identity to reference sequences. QIIME 2-Deblur ASV sequences were of similar quality, but slightly less often at 100% nucleotide identity (78.2%). DADA2, QIIME 2-DADA2 and PipeCraft 2-VSEARCH ASV/OTU sequences were often more dissimilar to the expected reference sequences. It is noteworthy that LotuS2-DADA2 and LotuS2-VSEARCH outperformed these pipelines based on the same sequence clustering algorithm, likely related to the stringent read filtering and seed extension step in LotuS2.
The mock community consisted of 49 bacteria and 10 archaea [53], with a total of 128 16S rRNA gene copies included in their genomes. If multiple 16S copies occur within a single genome, these can diverge but are mostly highly similar or even identical to each other [57]. Thus, the expected biodiversity would be 59 OTUs and ≤ 128 ASVs. Notably, the number of mothur and QIIME 2-Deblur TP ASVs/OTUs exceeded this threshold (N = 370, 198, respectively), indicating that both pipelines overestimate known biodiversity. DADA2, QIIME 2-DADA2, and PipeCraft 2-VSEARCH generated more ASVs than expected per species (N = 94, 122, and 90 respectively), but this might be explained by divergent within-genome 16S rRNA gene copies. LotuS2 was notably at the lower end in predicted biodiversity, predicting between 53 and 61 OTUs or ASVs in different clustering algorithms (Supplementary Table S4). However, these seemed to mostly represent single species, covering the present species best among pipelines, as the precision at species level was highest for LotuS2 (Supplementary Figure S9), thus capturing species level biodiversity most accurately.
Based on the mock community data, LotuS2 was more precise in the reported 16S rRNA gene sequences, assigning the correct taxonomy, and detecting biodiversity. Within-genome 16S copies were less likely to be clustered separately using LotuS2.
Discussion
LotuS2 offers a fast, accurate, and streamlined amplicon data analysis with new features and substantial improvements since LotuS1. Software and workflow optimizations make LotuS2 substantially faster than QIIME 2, DADA2, and mothur. On large datasets, this advantage becomes crucial for users: for example, we processed a highly diverse soil dataset consisting of > 11 million non-demultiplexed PacBio HiFi amplicons (26 Sequel II libraries) in 2.5 days on 16 CPU cores, using a single command (unpublished data). Besides being more resource- and user-friendly, compositional matrices from LotuS2 were more reproducible and accurate across all tested datasets (gut 16S, soil 16S, soil ITS, and mock community 16S).

Fig. 4 Benchmarking of amplicon sequence data analysis pipelines' performance using a mock community with known species composition. A Accuracy of each pipeline in predicting the mock community composition at genus level. For benchmarking we compared the fraction of reads assigned to true genera and both correctly and erroneously recovered genera. Precision, recall, and F-score were calculated based on the true positive, false positive, and false negative taxa identified. At species level, LotuS2 also excelled in these statistics (Supplementary Figure S9). B Percentage of true positive ASVs/OTUs having a nucleotide identity ≥ the indicated thresholds to 16S rRNA gene sequences of genomes from the mock community. The pipeline(s) showing the highest performance in each comparison is denoted with a star (*). TP, true positive; ASV, amplicon sequencing variant; OTU, operational taxonomic unit. LotuS2-UPARSE and LotuS2-VSEARCH had the same result, therefore colors are overlaid.
LotuS2 owes its high reproducibility and accuracy to the efficient use of reads based on their quality tiers in different steps of the pipeline. Low-quality reads introduce noise and can artificially inflate observed biodiversity, i.e., the number of OTUs/ASVs [58]. Conversely, an overly strict read filter will decrease sensitivity for low-abundance members of a community by artificially reducing sequencing depth. To find a trade-off, LotuS2 uses only truncated, high-quality reads for sequence clustering (except for ITS amplicons), while the read back-mapping and seed extension steps restore some of the discarded sequence data.
Notably, OTUs/ASVs reported by LotuS2 were the most similar (at > 99% identity) to the reference, compared to other pipelines (Fig. 4B). This was mostly independent of the clustering algorithm used, rather resulting from a combination of selecting high-quality reads for sequence clustering and the seed extension step selecting a high-quality read (pair) that best represents each OTU or ASV. The seed extension unique to LotuS2 also decouples read clustering and read merging, avoiding the use of the error-prone 3′ read end or the second read pair during the error-sensitive sequence clustering step [18]. Decoupling sequence clustering length restrictions from other pipeline steps thus avoids limiting information in computational steps that benefit from longer DNA sequences, such as taxonomic assignments or phylogeny reconstructions.
In conclusion, LotuS2 is a major improvement over LotuS1, representing pipeline updates that accumulated over the past 8 years. It offers superior computational performance, accuracy, and reproducibility of results, compared to the other tested pipelines. Importantly, it is straightforward to install, and programmed to reduce required user time and knowledge, following the idea that "less is more with LotuS2".
Contribution self efficacy and independent learning math toward students' mathematics learning outcomes
Students experience problems with mathematics learning outcomes at school, and many students do not complete their study of mathematics. Self-efficacy and independent learning in mathematics are suspected to be influencing factors. This study aimed to examine the contribution of self-efficacy and independent learning in mathematics to mathematics learning outcomes. The approach of this research was quantitative with a correlational method. The sample consisted of 173 students drawn from a population of 305. The instrument used a Likert scale model. Data were analyzed using multiple regression. The results showed that self-efficacy and independent learning in mathematics contribute to mathematics learning outcomes.
Introduction
Learning is a simultaneous process undertaken by individuals to obtain relatively fixed behavior: observable and non-observable behavioral changes that occur as a result of exercise or experience in their interaction with the environment. One indicator of a good-quality learning process is the acquisition of optimal learning outcomes, whether cognitive, affective, or psychomotor. Hasbulah (2012) elaborated that the product of learning is the end result obtained by students after experiencing the learning process, involving changes in ability, understanding, skills, and attitudes that can be observed and measured.
According to Permendiknas No. 22 of 2006 on Content Standards, mathematics is one of the mandatory lessons, because, as a universal science that lays the foundation of modern technology development, mathematics has an important role in advancing various disciplines and human comprehension. Ormrod (2004) explained that mathematics is acknowledged as one of the major sources of stress in the school learning process. Jbeili (2003) noted that a high level of anxiety in learning mathematics leads to a dislike of mathematics lessons, which potentially decreases students' understanding of mathematics.
Based on observations and information from counseling teachers at SMP Negeri 2 Koto XI Tarusan in March 2017, some students find it difficult to understand mathematics lessons, which makes them often fail to do the assignments or homework given by teachers at school. As a consequence, some students' mathematics scores are still below average. From these data it can be understood that the problem faced by most students is their mathematics learning result. The author therefore found it necessary to study the mathematics learning results of students at SMP Negeri 2 Koto XI Tarusan in order to improve learning outcomes optimally. Daryanto (2009) explains that one of the factors influencing learning results is the emotional state of the student: students who have a positive self-concept and self-confidence or self-efficacy tend to be free from frustration, anxiety, tension, conflict, and low self-esteem. Findings in various studies show that self-efficacy has a significant relation to the achievement of learning outcomes at school. Students with high self-efficacy turn out to have excellent achievements (Schunk & Meece, 2005). Bandura (1997) explained that self-efficacy affects one's behavior, effort, persistence, feelings, and way of thinking. Self-efficacy is a person's belief in coordinating his or her own capabilities and manifesting them through a series of actions to fulfill the demands of life. Students with high self-efficacy will see a task as a challenge rather than a threat, so they will minimize disruption, implement effective strategies, find a learning partner, not give up easily, and even overcome failures. In contrast, students with low self-efficacy believe that they will not be capable of performing a task even before the task is given; as a result, they will carry out learning with doubts and fears.
It can be concluded that self-efficacy can lead students with equal ability to behave differently, because self-efficacy influences choice, purpose, problem-solving, and persistence of effort (Sari, 2014). That is, students with high self-efficacy believe that they are able to do something to change the events around them, while students with low self-efficacy tend to give up easily.
In addition to self-efficacy, another factor that affects students' mathematics learning results is independence in learning mathematics. Learning independence is the disposition and ability of students to carry out active learning activities, driven by the motive of mastering a particular competence.
Independence in learning mathematics will be achieved if students take ownership of the success they have gained. Independence in learning mathematics needs to be improved by students in achieving the desired learning success. Students who have high independence will try to complete the exercises or tasks given by teachers with the abilities they have, whereas students who have low independence will depend on others to complete the task.
From preliminary data obtained at SMP Negeri 2 Koto XI Tarusan related to self-efficacy and students' independence in learning mathematics, the phenomenon observed in the field is that there are still many students who lack the courage to present themselves in mathematics learning, for example by asking or answering questions posed by the teacher. In fact, when called upon, students can sometimes answer the teacher's questions correctly. However, due to their lack of confidence, their uncertainty about their ability to answer correctly, and their fear of being wrong and laughed at by friends, the students ultimately choose not to answer any questions given by the teacher. During tests the students were also found to cheat from their friends and to be less active in following the teaching and learning process in the classroom, so the scores obtained by students during the exam were below average.
It can be concluded that independence in learning mathematics can affect students' mathematics learning outcomes. Increasing students' independence in learning mathematics is very important for achieving good mathematics learning outcomes at school. One effort that can be made is through guidance and counseling services.
The phenomena above show a relationship between self-efficacy, independence in learning mathematics, and students' mathematics learning results, so the researchers felt the need to examine and analyze in depth the contribution of self-efficacy and independence in learning mathematics to students' mathematics learning results. Based on these explanations it can be seen that there is a relevance between these factors and mathematics learning results, but how large the contribution of these factors is requires research. The results will be used as a reference in the preparation of guidance and counseling (BK) programs. This is the main purpose of this research, because there are no research findings that show how much self-efficacy and independence in learning mathematics contribute to students' mathematics learning outcomes. The purpose of this study is to examine the contribution of self-efficacy and independence in learning mathematics to students' mathematics learning outcomes.
Method
This research is descriptive quantitative research with a correlational method. The sample consists of 173 students drawn from a population of 305 students of classes VII and VIII, selected with a proportional stratified random sampling technique. The instrument used is a Likert scale model. To determine the contribution of the two independent variables to the one dependent variable, the data are analyzed by multiple regression. Data analysis was assisted by SPSS version 20.00.
Data Analysis Requirements Test
The requirement tests conducted in this research are a normality test, a linearity test, and a multicollinearity test. The normality test using the Kolmogorov-Smirnov method indicated that the research variable data are normally distributed, with Asymp. Sig. values of 0.526 for self-efficacy and 0.510 for independence in learning mathematics. The linearity test showed that the relationship of the self-efficacy and mathematics learning independence variables with the mathematics learning result is linear (Sig. 0.000 ≤ 0.05). The multicollinearity test found that the VIF value for self-efficacy is 1.242 and the VIF value for mathematics learning independence is 1.242.
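For readers who want to reproduce this kind of requirements testing outside SPSS, the sketch below shows one possible way to obtain the same quantities (a Kolmogorov-Smirnov normality check, variance inflation factors for multicollinearity, and the multiple regression itself) in Python. The variable names and the synthetic data are illustrative assumptions only, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from scipy import stats

rng = np.random.default_rng(0)
n = 173  # sample size used in this study
df = pd.DataFrame({
    "self_efficacy": rng.normal(70, 10, n),
    "independence":  rng.normal(65, 12, n),
})
# Synthetic outcome, for illustration only.
df["math_outcome"] = 0.4 * df["self_efficacy"] + 0.3 * df["independence"] + rng.normal(0, 8, n)

# Normality check (Kolmogorov-Smirnov against a standard normal after standardizing).
for col in ["self_efficacy", "independence"]:
    z = (df[col] - df[col].mean()) / df[col].std(ddof=1)
    print(col, stats.kstest(z, "norm"))

# Multicollinearity check via variance inflation factors.
X = sm.add_constant(df[["self_efficacy", "independence"]])
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, "VIF =", variance_inflation_factor(X.values, i))

# Multiple regression: the joint contribution is read from R-squared.
model = sm.OLS(df["math_outcome"], X).fit()
print(model.summary())
print("Joint contribution (R^2):", round(model.rsquared, 3))
```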
Contribution of Self Efficacy and Independence of Mathematics Learning on Mathematics Learning Outcomes
Based on the tests that have been carried out, there is a significant positive relationship between self-efficacy and independence in learning mathematics, on the one hand, and mathematics learning outcomes, on the other. The results showed that self-efficacy and mathematics learning independence correlate significantly with mathematics learning outcomes, with a contribution of 41.4%. In other words, self-efficacy and independence in learning mathematics are highly influential factors in the final result of learning mathematics.
From this research it can be understood that the higher the self-efficacy and independence in learning mathematics of students, the better the mathematics learning results they achieve. Moreover, self-efficacy and independence in learning mathematics help determine how high students' mathematics learning outcomes are.
An individual who considers himself incapable of completing a task will give up in a short time because he believes he does not possess the skills required to complete it; conversely, an individual with high self-efficacy will assume that he is capable of performing the task and feels he has the ability needed to complete it (Bandura, 1997). The more difficult a task is, the higher the tendency to be diligent in developing oneself.
Self-efficacy improves students' sincerity in performing a task. Self-efficacy can also increase students' ability and endurance in facing various difficulties in learning.
Self-efficacy is a good predictor for leveraging interest and student learning outcomes in mathematics. However, students who have a good level of intelligence, personality, and a supportive school environment will not be able to achieve good mathematics learning outcomes without the support of independence in learning mathematics. Self-efficacy is a component that plays a role in improving student self-reliance, which ultimately can improve learning outcomes (Ernawati, 2013). One characteristic of independence in learning mathematics is that students have the freedom to decide which learning goals are to be achieved and are useful for them.
Based on the explanation above, the importance of self-efficacy and independence in learning mathematics for improving students' mathematics learning outcomes clearly emerges. Among the things that must be improved and developed in students are self-efficacy and independence in learning mathematics.
Many guidance and counseling (BK) services can be undertaken to improve students' mathematics learning outcomes, such as information services, content mastery services, guidance activities (Diniaty, 2011), group learning activities (Yusri, 2010), and the provision of BK services (Hasibuan, 2008). Based on the results of this research, BK services can improve students' mathematics learning outcomes, so BK teachers and mathematics teachers in particular play a very important role in providing BK services in order to improve student learning outcomes at school.
Conclusion
Based on the findings and discussion of the research results, it can be concluded that self-efficacy and independence in learning mathematics correlate significantly with mathematics learning results. The research showed that to improve mathematics learning results, students need to improve their self-efficacy and independence in learning mathematics.
Haemorrhoidal disease in severe portal hypertension: a combined approach with transjugular intrahepatic portosystemic shunt (TIPS) and transanal haemorrhoidal dearterialization (THD)
Haemorrhoidal disease is a common finding in cirrhotic patients (40–44%) [1, 2], as haemorrhoidal plexus is a possible site of porto-systemic venous anastomosis. Portal hypertension (PH) can both exacerbate pre-existent essential haemorrhoidal varices (HV) and cause secondary vascular pathologies, called ano-rectal varices (ARV), particularly in conditions of severely increased hepatic vein pressure gradient (HVPG). Coexisting in up to 30% of cases, ARV are different from HV: as they are mainly due to portal pressure, they usually do not benefit from local therapies, requiring instead a corrective strategy for the underlying PH.
Here we report a case of a 45-year-old man, known to have alcoholic Child-B liver cirrhosis, who was admitted to our clinic in March 2011 for severe anaemization (haemoglobin 5.8 g/dl) secondary to persistent rectorrhagia. On admission, he was haemodynamically stable and physical examination evidenced pale skin, tachycardia and tachypnoea, while laboratory results showed microcytic anaemia, severe thrombocytopenia (46 000/µl), and mild elevation of hepatic tests. On detailed history-taking, he had undergone a haemorrhoidal prolapsectomy 15 years earlier and a haemorrhoidal ligature 6 years earlier for persistent symptomatic bleeding haemorrhoids. Moreover, he had a 25-year history of severe alcohol consumption but had been in relatively good condition until August 2010, when he received the diagnosis of liver cirrhosis because of a first episode of hepatic decompensation (ascites and encephalopathy).
Clinical examination evidenced third-degree haemorrhoids and a pancolonoscopy documented ARV excluding other colonic bleeding lesions, while upper endoscopy showed congestive gastropathy in absence of oesophageal varices. Doppler ultrasonography revealed an increased diameter of the main portal tract (13.5 mm), with preserved hepatopetal flow at reduced mean velocity (10 cm/s). An abdominal computed tomography scan with contrast medium detected a patent ectatic superior mesenteric vein with congestion of the rectal venous plexus and mild perihepatic ascites and splenomegaly (Figure 1). Repeated blood transfusions were prescribed to raise haemoglobin and propranolol in increasing dosage was started to reduce portal hypertension. However, severe rectorrhagia continued so we opted for reducing portal system congestion with a transjugular intrahepatic portosystemic shunt (TIPS). The initial portal venous pressure was 22 mm Hg and the estimated HVPG was 18 mm Hg. Despite reduction of the pressure gradient from 18 mm Hg to 4 mm Hg, rectorrhagia persisted, so surgical intervention for haemorrhoids was suggested. Transanal haemorrhoidal dearterialization (THD) was performed through a dedicated proctoscope, which incorporates a Doppler probe that allows one to identify and to ligate haemorrhoidal arteries and arteriovenous shunts. Transanal haemorrhoidal dearterialization is an innovative mini-invasive technique that has gradually gained in popularity among surgeons since the encouraging initial results reported by Morinaga et al. [3] in 1995. It consists of the Doppler-guided ligation of the terminal branches of the superior rectal artery, which solely contribute to the blood supply of the haemorrhoidal plexus [4, 5], thereby reducing the congestion [6]. The postoperative course was uneventful and the patient was discharged after 5 days. After a 15-month follow-up period, the patient is in good clinical condition, complying with his medical treatment, and the ano-rectal blood loss has not recurred.
Figure 1
Coronal contrast-enhanced CT image showing ectatic superior mesenteric vein (white arrow)
This case highlights how difficult it can be to diagnose and treat haemorrhoidal disease in cirrhotic patients with severe portal hypertension. The presence of persistent bleeding HV, relapsing after previous local therapies, and of endoscopic-evident ARV, suggested that the severely increased portal hypertension was the real cause of bleeding and of persistence of varices. So, we first reduced portal hypertension with both medical (β-blockers) and interventional (TIPS) techniques, probably achieving a reduction in ARV degree and, when we observed that rectorrhagia persisted, it was clear that blood loss was due to coexistent essential HV, this time. Thus, we treated them with a local mini-invasive technique (THD, previously described). Conventional surgical approaches are usually considered unsuitable in this context [7]: they can worsen portal hypertension after surgical removal of the venous porto-caval shunts, as well as having an increased risk of complications in cirrhotic patients.
In conclusion, our experience sheds further light on the relevance of managing portal pressure in case of persistent bleeding and relapsing HV in presence of ARV: TIPS could be performed as the first approach in order to reduce the degree of portal hypertension and to plan, in case of ineffective resolution of ano-rectal bleeding, a mini-invasive surgical procedure with reduced operative risk.
Mathematical modeling and control the process of fuel combustion in gas combustion furnaces
Introduction
Recently, special attention worldwide has been paid to the tasks of automatic control and management of complex technological processes of fuel combustion in gas furnaces and of automatic control systems for thermal processes, as well as to transferring the results in the form of advice to the operator or downloading them to a computer as control-action signals for actuators located at the controlled facilities. The use of the means of the fourth industrial revolution, called "Industry 4.0" [1,2], for the purpose of ensuring energy and resource saving in the heat-technological processes of the chemical, petrochemical, metallurgical, and food industries and in the production of building materials occupies a leading position. In this regard, in the developed countries of America, Europe, and Asia, a global map of the degree of use of the means of the fourth industrial revolution in industry has been developed, in which a special task is the implementation of extremal automated control systems that provide multiparameter control and support energy-saving technologies.
It has been shown that automated control of gas-burning installations increases the reliability and the technical and economic indicators of technological installations and furnaces [3-6]. At the same time, reducing fuel losses is largely a matter of improving the combustion of the fuel.
When implementing extremal regulation, the optimal parameters of the fuel combustion process are determined by analytical or experimental methods based on the results of studies on operating gas-burning furnaces. Open-loop systems of extremal regulation can have a significant static error. In this case, the search for the extremum of the static characteristic of the object is performed using search techniques, during which the sign and absolute value of the deviation of the operating point from the optimum are revealed and movement is carried out in the direction of the extremum. This class of high-speed linear systems directly controls the current value of the optimum [4,7], which characterizes the quality of the control system's functioning.
Reliable and trouble-free operation of gas-burning plants largely depends on solving the problem of the functioning of heat-stressed heating surfaces in the form of furnace screens, characterized by a significant number of structural and operational parameters that reflect the aerodynamic mode of operation of gas-burning furnaces and their design. The structure and geometry of the flame in the furnace chamber predetermine the temperature profile and the gas composition of the heat flow throughout the entire space of the gas combustion plant. The currently used methods for measuring thermal loads (inserts, portable thermal probes, and others) do not meet the requirements. The above conclusions indicate the need to develop algorithms for monitoring and controlling the combustion process in gas-burning furnaces and for their practical application in solving automation and process control problems.
Formulation of the problem
The design of the measuring transducer is based on the method of spectrophotometric gas analysis [8,9], the advantages of which are the relative simplicity of the constructive implementation of the measurement method, speed, the possibility of performing single- and multi-component analysis, and the high selectivity of the analysis of carbon monoxide (CO) (absorption band at λmax = 4.7 µm), methane (CH4), propane (C3H8) and other hydrocarbons (3.4 µm), and carbon dioxide (CO2) (absorption band at 4.27 µm).
It has been established that under real conditions the dependence of the transmittance on the concentration of the gas mixture deviates from the exponential form of the Beer-Bouguer law,

T = U / U0 = exp(-a c l),

where a is the absorption coefficient of the gas mixture flow; U0 is the incident beam intensity; U is the transmitted beam intensity; c is the gas mixture concentration; and l is the beam path length.
In [10,11] it is theoretically substantiated that the minimum measurement error is achieved at an optical density value of about 0.4, and dependences (expressions (2) and (4)) relating the measurement error to the optical density were derived, in which the parameters are determined on the basis of 5-10 parallel readings. Theoretical and experimental methods established the following: expressions (2) and (4) have an underestimated reproducibility, and it is possible to carry out the photometry process with much higher reproducibility, with a concentration error of less than 0.88% resulting from expressions (2) and (4); for single-beam spectrophotometers and double-beam photocolorimeters, the interval of optical densities in which the total measurement error does not exceed twice the minimum error, in contrast to the generally accepted range (0.12-1.2), reaches values in the range 1.35-1.45; and the measurement region at optical densities below 0.1 is unfavorable due to the sharply increasing values of the error ∆. It has been established that in each specific variant, in order to ensure the best reproducibility, it is advisable to identify the optimal measurement region, taking into account the instability and sensitivity of the gas analyzer. The acceptable range of optical densities is from 0.1 to 0.7. The undesirability of performing measurements at optical densities above 0.7 is due to the low level of the measured parameter.
Another factor of significant importance when choosing the length of the cuvette is its physical volume. In this case, the criterion of the optical path length of the cuvette is more acceptable. We chose it equal to 145 mm, since for this optical path length the parameters of the gas layers under consideration at the maximum amounts of CO (10% vol) and CnHm (1% vol) are as follows: their optical densities are quite close, DCnHm (1% vol) = 0.48 and DCO (10% vol) = 0.84.
In this work, a study was made of a gas analyzer designed to monitor the concentrations of carbon monoxide (CO), hydrocarbons (CnHm), and carbon dioxide (CO2) in gas emissions. Since all three analyzed components are active in the infrared (IR) region of the spectrum, high selectivity of measurements is attainable. The non-dispersive infrared method was chosen as the analysis method.
The maxima of the absorption bands of these gases are at wavelengths λ1 = 3.39 µm (for hydrocarbons), λ2 = 4.26 µm (for carbon dioxide), and λ3 = 4.6 µm (for carbon monoxide). The absorption bands do not overlap with each other or with the absorption bands of interfering components. In addition, there is a spectral range (3.8-4 µm) in which there is no absorption of radiation by any of the gases present in the analyzed mixture, and therefore the wavelength λ4 = 3.9 µm can serve as a reference. To achieve high metrological parameters, IR gas analyzers built according to a single-beam multichannel scheme are used. In this case, a channel is understood as the spectral region in which the measurement of the gas transmission value is carried out. For the analysis of the above three components of the studied gas mixture, four channels are required: three working and one reference. It has been established that, in order to obtain the optimal optical density of the gas, the length of the working cuvette for carbon dioxide is 0.4 cm and for hydrocarbons and carbon monoxide it is 14.5 cm. Proceeding from this, an original combined optical scheme of a gas analyzer is proposed, consisting of two identical radiation sources 1 and 1' (Fig. 1), which are TRS 1500-2300 lamps; lenses 2 and 2' for forming the light fluxes of radiation; a beam-splitting plate 3, with which the two light fluxes are mixed; two working cells 4 and 5, through which the analyzed gas mixture is sequentially pumped (the length of working cell 4 is 14.5 cm and of cell 5 is 0.4 cm); a disk 6, on which four interference filters 7 are installed; a motor 8 driving disk 6 in rotation; a lens 9; and a photodetector 10, which is a cooled, temperature-controlled FUO-614 photoresistor. The radiation from lamp 1 passes through cuvette 4, is reflected from beam splitter 3, enters cuvette 5 and then reaches the light filters 7, which are alternately introduced into the light flux as disk 6 rotates and which transmit the wavelengths λ1, λ2, λ3 and λ4. The radiation from lamp 1' passes through beam splitter 3, working cell 5 and the light filters 7. Light pulses of different wavelengths, carrying information about the absorption of the radiation from lamps 1 and 1' in cuvettes 4 and 5 at the specified wavelengths, are focused on photodetector 10 by means of lens 9. When a certain concentration of one or another component appears in cuvettes 4 and 5, the light flux is absorbed at the corresponding wavelengths:

Φ1 = Φ0 exp(-ki ci l1),  Φ1' = Φ0 exp(-ki ci l2),   (5)

where Φ0 is the luminous flux from lamps 1 and 1'; ci is the concentration of the i-th component; ki is the absorption coefficient of the i-th component; l1 is the total length of cuvettes 4 and 5; and l2 is the length of cuvette 5.
It follows from equation (5) that at low concentrations of the component to be determined, it is the light flux from lamp 1 that is absorbed. At high concentrations, the radiation from lamp 1 is almost completely absorbed, and it is the light flux from lamp 1' that undergoes the change. Thus, it becomes possible to perform the analysis over a wide range of concentrations of the analyte. To increase the selectivity of the analysis and reduce the losses of the light flux, beam splitter 3 is used, which is an interference light filter that transmits one wavelength (λ2) and reflects the other wavelengths (λ1, λ3 and λ4). Therefore, radiation at wavelength λ2 passes only through cuvette 5 (the short cuvette for carbon dioxide analysis), while radiation at wavelengths λ1, λ3 and λ4 passes through both cuvettes 4 and 5. This allows the measurement of gas components with significantly different absorption coefficients with high selectivity.
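As an illustration of this two-path principle, the sketch below evaluates the transmitted fluxes of the long-path beam (lamp 1, cuvettes 4 and 5) and the short-path beam (lamp 1', cuvette 5 only) as the concentration grows. The absorption coefficient and the concentration values are placeholders chosen only to show when each beam remains informative; they are not calibration data of the instrument.

```python
import math

def flux(phi0, k, c, path):
    """Transmitted flux for one absorbing component, following equation (5)."""
    return phi0 * math.exp(-k * c * path)

PHI0, K = 1.0, 0.9           # placeholder incident flux and absorption coefficient
L_LONG, L_SHORT = 14.9, 0.4  # total path (cuvettes 4 + 5) and short cuvette 5, cm

for c in (0.01, 0.1, 1.0, 10.0):          # placeholder concentrations, arbitrary units
    f_long = flux(PHI0, K, c, L_LONG)     # beam from lamp 1 (both cuvettes)
    f_short = flux(PHI0, K, c, L_SHORT)   # beam from lamp 1' (cuvette 5 only)
    channel = "long path" if f_long > 0.05 else "short path"
    print(f"c={c:5}: long={f_long:.3f} short={f_short:.3f} -> use {channel}")
```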
To study the concentration dependences of the "TAZAL" device, calibration gas mixtures prepared in gas cylinders were used. The compositions of the gas mixtures were "CO + air" or "CO + N2" and "C3H8 + air" or "C3H8 + N2". The studies were carried out on adjusted, tuned instruments. The transmission value was determined as a function of the change in concentration. The measurement results are presented in Table 1 and in graphical form (note: SGM stands for supply of test gas mixtures). It was established that the dependence of transmission on concentration has an exponential character, in agreement with theory.
According to the Bouguer-Lambert-Beer law, the form of this dependence is as follows [12,13]:

T = T0 exp(-a c l),

where T0 and T are the transmission before and after passage through the gas; a is the absorption coefficient; l is the length of the absorbing layer; and c is the sample gas concentration.
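To make the relation concrete, the sketch below inverts the Bouguer-Lambert-Beer expression to recover a concentration from a measured transmission. The absorption coefficient is a placeholder value, not a calibration constant of the instrument; only the 14.5 cm cuvette length is taken from the text.

```python
import math

def concentration_from_transmission(T, T0, a, l):
    """Invert T = T0 * exp(-a * c * l) for the gas concentration c."""
    if not (0.0 < T <= T0):
        raise ValueError("transmission must satisfy 0 < T <= T0")
    return math.log(T0 / T) / (a * l)

# Placeholder values: 14.5 cm cuvette, assumed absorption coefficient a.
print(concentration_from_transmission(T=0.62, T0=1.0, a=0.05, l=14.5))
```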
Since the concentrations of the measured gases are quite high, the dependence deviates from the Bouguer-Lambert-Beer law. The dependence is most accurately described by a cubic equation.

A typical dependence of the CO concentration in nitrogen on the transmission T has the following form:

C_CO = (1 - T) [5.47653 + (1 - T)(26.02161 + (1 - T)(100.7986))].

A typical dependence of the C3H8 concentration in nitrogen on the transmission T has the following form:

C_C3H8 = (1 - T) [8080.961 + (1 - T)(201.2232 + (1 - T)(17604.36))].

For C3H8, the dependence is reproduced well and the characteristics are practically indistinguishable; for the dependences on CO, the differences are more noticeable. This means that individual calibration of the scales is required for CO. On the calibration gas mixtures ("CO + air", "CO + N2", "C3H8 + air" or "C3H8 + N2"), studies of the concentration dependences were carried out, the results of measurements on various instruments were compared, the scales of the pressure and temperature sensors were calibrated, and methods for temperature correction of the readings of the "TAZAL" device were analyzed.
Solution of the task
Extremal control of the efficiency of the combustion process can be implemented by using a signal based on the heat perception ΔPps and a signal proportional to the heat release QT as a function of the air flow over a wide range of loads. The proposed extremal control system contains two circuits: a stabilizing internal circuit formed by the control object and the general air supply regulator, and an external circuit containing the object and a device for extracting the extremum of the target function, which maintains the optimal fuel-air ratio at a constant flow rate of the fuel burned in the furnace. Such a construction of the control system increases the dynamic accuracy of tracking the position of the extremum of the signal ΔPps, because in the case of a change in fuel consumption the air flow immediately changes to a value close to the optimal one [4,14-18]. The extremal regulator eliminates the static inaccuracy of the stabilizing regulator and brings the ΔPps signal into the region of extreme values and maintains it there by influencing the supply air flow.
The effectiveness of the control system under study is characterized by the value of the total losses П1 + П2 incurred in the search for the extremum. To check the operation of the system, a program was used that implements the proposed system and in which the step method for searching for the maximum of the signal ΔPps was used; its essence is to form a control action according to the steepness of the static characteristic of the object. The steepness at each step is calculated as the ratio of the differences between the previous and current values of ΔPps and Gв:

Ri = (ΔPps,i - ΔPps,i-1) / (Gв,i - Gв,i-1),

where Ri is the steepness of the static characteristic of the object at the i-th step and i is the step number.
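A minimal sketch of this step method is given below: at each cycle the steepness of the static characteristic is estimated from the last two samples of ΔPps and Gв, and the air-flow step keeps its direction while the steepness is positive and reverses it otherwise. The parabolic test characteristic and all numeric values are assumptions made for illustration, not the plant model identified in this work.

```python
def static_characteristic(G, A=-10.0, B=4.0, C=-0.04):
    """Assumed parabolic heat-perception curve dP_ps(G) with a maximum at G = -B/(2C)."""
    return A + B * G + C * G ** 2

def step_extremum_search(G0=20.0, step=2.0, cycles=40):
    G_prev, y_prev = G0, static_characteristic(G0)
    G = G0 + step
    for _ in range(cycles):
        y = static_characteristic(G)
        # Steepness R_i = (dP_ps,i - dP_ps,i-1) / (G_i - G_i-1)
        R = (y - y_prev) / (G - G_prev)
        direction = 1.0 if R > 0 else -1.0   # keep moving uphill, reverse downhill
        G_prev, y_prev = G, y
        G = G + direction * step
    return G

print("air flow near the optimum:", step_extremum_search())
```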
The algorithm for determining the step change criterion Rval takes into account that an excessively large step of maintaining the extremum leads to an increase in the П2 losses. The model of the control system for the efficiency of the combustion process is shown in Fig. 2. The mathematical model of the formation of the random process xi is selected on the basis of the correlation function R(τ) obtained from the statistical analysis of realizations of the ΔPps signal; its estimate is computed from N sample data points, and D(ΔPps) denotes the output dispersion.
The non-linear link (NL) of the object is the static dependence of the heat perception signal on the air flow rate Gв and has the form of a parabola,

ΔPps = A + B Gв + C Gв²,

where A, B and C are coefficients determined as a result of testing the object. The change in air flow is provided within the range from 0 to 100% according to the position indicator (PI), i.e. 0 ≤ S ≤ 100%. The linear link (LL) of the object is a transfer function with time constants T1 and T2, determined from experiment, along the channel "air flow Gв - heat absorption ΔPps". The linear part of the object is calculated using the Runge-Kutta method. The numerical values of the coefficients of the LL and NL are summarized in Table 2. Interference during simulation modeling was formed using a realization of discrete white noise with a dispersion of DN = 100. To ensure the required ratio between the values of the useful signal and the random noise, the value of the coefficient b was chosen accordingly. At the input of the maximum selection device, a discrete exponential smoothing filter with parameters m and n (Table 2) was used to filter the noise. The selection of the optimal filter parameters was carried out by direct enumeration of options with the system operating in three modes: 1) in the absence of interference; 2) in the presence of interference without filtering; and 3) with filtering of the introduced noise. The studies were carried out both with a fixed characteristic of the object and with its drift. The numerical values of the dispersions, reflecting the qualitative characteristics of the functioning of the system, are summarized in Table 3. The worst mode of functioning of the system under consideration occurs when the static characteristic of the object drifts simultaneously in both the horizontal and vertical directions, in which case the dispersion of the output signal is D(ΔPps) = 11585 Pa². The advantage of an extremal control system with a variable step over a controller with a constant step of reaching the extremum (CSRE) follows from a comparison of the dispersions of the output signals. This means that an extremal control system with a signal based on ΔPps and a variable step of searching for and maintaining the extremum has a higher quality of control compared to a system with a constant step of reaching the extremum.
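The noise-filtering step can be illustrated in a few lines: additive white noise with the stated dispersion is superimposed on a constant useful signal and suppressed by a discrete exponential smoothing filter. The recursive form y_i = m*y_{i-1} + n*x_i is an assumed standard form of such a filter, since the exact expression and the m, n values used in the paper are not reproduced here.

```python
import random

def exp_smooth(samples, m=0.8, n=0.2):
    """Discrete exponential smoothing: y_i = m*y_{i-1} + n*x_i (assumed filter form)."""
    y = samples[0]
    out = [y]
    for x in samples[1:]:
        y = m * y + n * x
        out.append(y)
    return out

random.seed(1)
dispersion = 100.0                      # DN = 100, as in the simulation
sigma = dispersion ** 0.5
true_value = 60.0                       # assumed steady dP_ps level near the optimum
noisy = [true_value + random.gauss(0.0, sigma) for _ in range(200)]
smoothed = exp_smooth(noisy)
print("raw variance     :", sum((x - true_value) ** 2 for x in noisy) / len(noisy))
print("filtered variance:", sum((x - true_value) ** 2 for x in smoothed) / len(smoothed))
```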
Conclusion
The physical principles of constructing a primary measuring transducer for the selective measurement of the concentrations of harmful substances (CO and CnHm) in exhaust gas mixtures are substantiated on the basis of the spectrophotometric method of measurement in the absorption bands of the IR range, which ensures selectivity in determining the target parameters. An original combined optical scheme of a three-component (hydrocarbons, carbon monoxide and carbon dioxide) gas analyzer based on the infrared (IR) measurement method has been developed. Furthermore, a meaningful statement and solution are given for the problem of automatic control of the efficiency of the fuel combustion process as an extremal maximum-speed problem, indicating that the efficiency of the system is characterized by the amount of losses incurred in the search for the optimum of the statistical characteristic of the dynamic object under study. A comparative analysis of methods for solving problems of extremal control by "increment" and by "second difference" indicates that the speed and quality of regulation are not inferior to well-known traditional systems for controlling the efficiency of fuel combustion in gas-burning furnaces.
Notation: Gв* is the air flow value corresponding to the maximum of the static characteristic; ΔGв is the change in air flow over one cycle of maintaining the extremum; OSE is the output step from the extremum; EMS is the extremum maintenance step; and n is the number of steps from the point (OSE ↔ EMS) to the maximum of the static characteristic. The numerical values of the steps for maintaining the extremum and of Rval are determined individually for each object when conducting basic tests and comparing the static characteristics. The step value is set from the condition that the maximum efficiency of the object falls into the zone of the system's self-oscillations around the extremum of the statistical characteristic of the object ΔPps = f(Gв).
Table 2. Parameters of the links of the modeling object and the shaping filter.
Table 3. Simulation results with the drift of the static characteristic of the object.
Electrical stimulation enhances cell migration and integrative repair in the meniscus
Electrical signals have been applied towards the repair of articular tissues in the laboratory and clinical settings for over seventy years. We focus on healing of the meniscus, a tissue essential to knee function with limited innate repair potential, which has been largely unexplored in the context of electrical stimulation. Here we demonstrate for the first time that electrical stimulation enhances meniscus cell migration and integrative tissue repair. We optimize pulsatile direct current electrical stimulation parameters on cells at the micro-scale, and apply these to healing of full-thickness defects in explants at the macro-scale. We report increased expression of the adenosine A2b receptor in meniscus cells after stimulation at the micro- and macro-scale, and propose a role for A2bR in meniscus electrotransduction. Taken together, these findings advance our understanding of the effects of electrical signals and their mechanisms of action, and contribute to developing electrotherapeutic strategies for meniscus repair.
inflammatory mediators such as IL-1 and MMPs 36 . However, questions remain as to how electrical signals influence cells and propagate their effects. The adenosine receptors have been implicated in the electrotransduction of pulsed electromagnetic fields in cartilage 37,38 . Stimulation of the high-affinity A2a and low-affinity A2b adenosine receptors resulted in elevated cyclic AMP 39 and subsequent activation of anti-inflammatory pathways via protein kinase A and EPAC, which in turn led to the suppression of NO and PGE2 40 and downstream feedback inhibition of TNF-α and IL-1β 41 . We therefore hypothesized that electrical stimulation would enhance repair of the meniscus and control the inflammatory events underlying tissue degradation.
Given the wealth of background information on electrical stimulation, inflammation, and adenosine receptor signaling in cartilage repair, it may come as a surprise that very little published information exists in the context of the meniscus. Only recently, meniscus cells from the outer region were found to migrate more quickly in 2-D culture than inner cells, in the presence of static direct current (DC) fields 42 . However, electrical stimulation studies have demonstrated a disparity in migration behavior between 2-D and more physiologically relevant 3-D environments 43 . Moreover, pulsatile electric fields (EFs) are already used in the clinical setting for related conditions 44,45 . At this time, the effects of applied EFs on meniscus cells and subsequent development of novel repair strategies are only beginning to be understood. We investigated the effects of pulsatile direct current electric field stimulation on meniscus cell migration in a micropatterned 3-D hydrogel system. Our micro-scale system also enabled study of the paracrine signaling between meniscus and vascular endothelial cells, in concert with electrical stimulation. Electric fields are known to induce VEGF receptor signaling in endothelial cells 46 , so we expected that patterns of meniscus cell migration that result from the differences in intrinsic vascularity within the tissue would be further enhanced by interactions between electric fields and endothelial cells. Moreover, little is known about the signal transduction pathways that respond to electrical stimuli in meniscus cells. Screening and optimization of stimulation parameters at the micro-scale enabled us to identify potential pathways involved in electrotransduction. By translating from the micro- to the macro-scale, we established stimulation regimes that enhanced integrative repair of full-thickness defects in meniscus, and demonstrated promising therapeutic effects of electrical stimulation on meniscus healing.
Results
Electrical stimulation has differential effects on meniscus cell migration. Electrical stimulation of the meniscus was investigated using three distinctly different yet related experimental model systems in which inner and outer meniscus cells or explants were subjected to pulsatile direct current electrical stimulation: (a) micropatterned three-dimensional hydrogels with encapsulated inner or outer meniscus cells, (b) micropatterned 3-D hydrogels with spatially distributed meniscus and endothelial cells, and (c) a macroscopic in vitro model of meniscus healing (Fig. 1). When cultured in the micropatterned 3-D hydrogel system, meniscus cells migrated over six days of culture, with the stimulated cells demonstrating enhanced migration relative to non-stimulated control cells (Fig. 2a). Notably, both inner and outer meniscus cells exhibited similar increases in migration with applied electrical signals at 3 V/cm, 1 Hz, 2 ms pulse duration (Fig. 2b), despite the variation in repair response between their respective tissue regions. When injected charge, or the total amount of charge delivered during one stimulus pulse, was maintained at a constant field strength of 3 V/cm, further increases in cell migration were gained as the frequency of stimulation increased to 10 Hz and the pulse duration decreased to 0.2 ms (Fig. 2c). The combinations of 3 V/cm, 0.1 Hz, 20 ms pulse duration, and 3 V/cm, 100 Hz, 0.02 ms pulse duration were also tested, but the longer pulse duration associated with 0.1 Hz led to a more rounded, quiescent cell appearance rather than the spread-out, migrating cell phenotype seen at the channel edge. The increase in frequency to 100 Hz did not markedly improve the migration behavior of inner or outer meniscus cells, likely a result of too brief a refractory period for cells to fully respond to subsequent stimulation pulses.
No difference in positive BrdU staining, indicative of cellular proliferation, was apparent between the inner and outer cells, with and without stimulation (Fig. 2d), suggesting that differences in cell motility over six days of culture were not the result of proliferation only. Finally, type I collagen, a key ECM component throughout the meniscus, exhibited trends of elevated gene expression in both inner and outer cells when the frequency of electrical stimulation was increased while maintaining constant injected charge (Fig. 2e).
Cooperative action of electrical stimulation and endothelial cells on meniscus cell migration. Co-culture with human umbilical vein endothelial cells (HUVECs) was investigated in the context of regional variation in healing between the inner and outer meniscus. Notably, HUVECs further potentiated the effects of electrical stimulation on meniscus cell migration (Fig. S1). Stimulating HUVECs alone in the hydrogel system led to increased expression of EDN1, PDGFA, and PDGFB genes (Fig. 3a), which encode key angiogenic factors (endothelin-1, PDGF-A, B) that modulate the behaviors of chondrocytes 47 and meniscus cells 48 , suggesting a dual role of electrical stimulation in upregulating angiogenic factors that specifically enhance meniscus cell migration, in addition to promoting cell migration in general. Although meniscus cells alone exhibited greater migration in response to the 3 V/cm, 10 Hz, 0.2 ms pulse duration regime, meniscus cell migration in coculture with HUVECs at 10 Hz stimulation was not significantly greater than without HUVECs at 10 Hz, but was significantly greater than in non-stimulated meniscus cells with HUVECs (Fig. 3b). The cooperative action of stimulation and co-culture was observed in the 3 V/cm, 1 Hz, 2 ms pulse duration regime (Fig. 3b), suggesting that frequency-dependent interactions between chemical and electrical stimuli require optimization for collaborative effects. In this system, the expression of angiogenic factors by HUVECs was upregulated dramatically with stimulation at 1 Hz as compared to non-stimulated controls, whereas the effect of increasing frequency from 1 to 10 Hz was much smaller (Fig. 3a). These findings may account for the positive trends seen in meniscus cell migration with co-culture at 1 Hz stimulation that were not apparent at 10 Hz.
Using the 1 Hz stimulation regime, gene expression profiles of the meniscus cells were further investigated to elucidate how stimulation and co-culture cooperate to enhance migration. As seen in cultures of meniscus cells with electrical stimulation alone, both inner and outer cells showed increases in COL1A2 expression in response to combined stimulation at 1 Hz and co-culture with HUVECs (Fig. 3c). Although inner and outer meniscus cells demonstrated similar migration behavior in co-culture with HUVECs, they did so in response to different angiogenic factors secreted by endothelial cells, as seen at the gene expression level. Both types of cells responded to stimulation in co-culture with HUVECs by increased expression of EDNRA, encoding endothelin receptor type A (Fig. 3c). However, outer cells responded to the secretion of PDGF isoforms by HUVECs with increased expression of PDGFRA and PDGFRB, encoding PDGF receptors α and β, while no such changes were detected for inner cells. These results demonstrate that electrical stimulation and endothelial cells collaborate to activate meniscus cell receptors at the gene expression level, suggesting potential synergy between the biophysical and biochemical stimuli. Electrical stimulation enhances integrative repair of meniscus defects. Integration strength was greater in defects in explants stimulated at 3 V/cm, 10 Hz, 0.2 ms pulse duration, after six weeks of culture (Fig. 4a). Stimulation at 1 Hz, 2 ms pulse duration initially corresponded to decreases in integration strength over the first four weeks of culture, but surpassed control conditions without stimulation by day 42, albeit to a lesser extent than stimulation at 10 Hz. The underlying basis for the enhanced integration strength of defects in explants stimulated at 10 Hz was further explored by assaying for biochemical content and visualizing the distribution of cells and ECM within the tissue. The overall biochemical content of explants without stimulation decreased throughout six weeks of culture, consistent with previous studies of explant stability in long-term culture, in the absence of growth factor supplementation 49,50 (Fig. 4b). However, the GAG and OHP content of explants stimulated at 10 Hz was not significantly different than at day 0, in comparison to explants without stimulation, which were significantly lower than initial values (Fig. 4b). In general, stimulation led to significant upward trends in DNA and OHP content of explants after six weeks of culture, as compared to explants without stimulation (Fig. 4b).
Histological evaluation revealed that the defect interface appeared more closely apposed in explants stimulated at 10 Hz than in control explants (Fig. 4c). Specifically, newly synthesized matrix was observed at the interface, containing sulfated GAGs and collagens, as evidenced by Alcian blue and Picrosirius red staining, respectively. BrdU labeling was performed to assess the effects of electrical stimulation on cell proliferation, yielding more BrdU-positive cells at the interface of stimulated explants, and indicating a moderate role of electrical stimulation in triggering cell proliferation in explants over long-term culture. Taken together, these data suggest that in addition to tissue repair, electrical stimulation acts to maintain cells and overall ECM composition, and prevent explant degradation in vitro.
Anti-inflammatory effects of electrical stimulation on meniscus explants. Evaluation of the components from media collected throughout six weeks of culture revealed increases in endogenous TNF-α and IL-1β production by explants without stimulation, compared to those receiving stimulation at 3 V/cm, 10 Hz, 0.2 ms pulse duration, particularly within the first four weeks (Fig. S2a, b). The elevated cytokine levels in explants without stimulation corresponded with changes in NO production (Fig. S2c), MMP activity (Fig. S2d), and GAG release (Fig. S2e), which were typical of catabolic degradation: in comparison to stimulated explants, greater NO production, MMP activity, and GAG release were detected in explants without stimulation, suggestive of an anti-inflammatory, anti-catabolic, and stabilizing effect of electrical stimulation in long-term culture in vitro. These changes in explants without stimulation occurred early in culture, within the first three weeks, but their biochemical composition continued to decrease for the remainder of the culture period.

Figure 1 caption. (a) Inner or outer meniscus cells were encapsulated on plastic slides in a 1.8% fibrin channel (3.5 × 10^6 cells/mL) and covered by a second layer of 1.8% fibrin to enable migration. After 3 days of pre-culture, slides were transferred into custom bioreactors with carbon electrodes spaced 2.5 cm apart, for 3 days of stimulation. (b) Co-culture of meniscus cells with endothelial cells. Human umbilical vein endothelial cells (HUVECs) and inner or outer meniscus cells (3.5 × 10^6 cells/mL) were individually encapsulated on slides in parallel fibrin channels, left and right, respectively, and cultured for 3 days of pre-culture and 3 days of stimulation. (c) Juvenile bovine meniscus explants were punched with central cores of 1.5 mm diameter and immediately replaced to simulate a full-thickness defect. Explants were stimulated four days a week over six weeks of culture in a custom bioreactor system, consisting of a 5 × 6 array with carbon electrodes spaced 1 cm apart.

Figure 3 caption. (a) Fold change (2^-ΔΔCT) in gene expression of EDN1, PDGFA, and PDGFB in HUVECs at day 6 relative to cells at day 0, respectively, both normalized to GAPDH. Increasing frequency of stimulation with maintenance of injected charge (3 V/cm, 1 Hz, 2 ms or 3 V/cm, 10 Hz, 0.2 ms pulse duration) led to upward trends in gene expression. * p < 0.05 for linear trend; n = 3-4. (b) Optimization of electrical stimulation parameters for co-culture of HUVECs and meniscus cells. Meniscus cell migration with HUVECs and stimulation at 3 V/cm, 10 Hz, 0.2 ms pulse duration (10 Hz) was greater than with HUVECs alone at day 6 (left). * p < 0.05 vs. + HUVEC; n = 37-154. Meniscus cell migration with HUVECs and stimulation at 3 V/cm, 1 Hz, 2 ms pulse duration (1 Hz) demonstrated cooperative, upward trends with the addition of each stimulus at day 6 (right). * p < 0.05 for linear trend (analysis of groups indicated by the same letter); n = 24-194. (c) Fold change (2^-ΔΔCT) in gene expression of COL1A2, EDNRA, PDGFRA, and PDGFRB in meniscus cells at day 6 relative to cells at day 0, respectively, both normalized to GAPDH. Increasing frequency of stimulation with maintenance of injected charge (3 V/cm, 1 Hz, ...).

Adenosine A2b receptor plays a role in meniscus electrotransduction. Adenosine receptors in meniscus were identified by gene expression analysis of migrating cells, with and without stimulation at 3 V/cm, 10 Hz, 0.2 ms pulse duration.
In hydrogel-encapsulated cells, the expression of ADORA1, ADORA2A, and ADORA3, encoding the adenosine A1, A2a, and A3 receptors, respectively, was minimal at day 0 (2^-ΔCT ≈ 10^-4; n = 4) and at day 6 after culture, with and without stimulation (2^-ΔCT ≈ 10^-5; n = 4), but ADORA2B encoding A2bR was upregulated in meniscus cells with electrical stimulation at 3 V/cm, 10 Hz, 0.2 ms, as well as 3 V/cm, 1 Hz, 0.2 ms pulse duration (Fig. 5a). Translation to the protein level was evident through immunofluorescence staining of migrating meniscus cells at day 6, against A2bR and DAPI for nuclei (Fig. 5b). Cells receiving stimulation demonstrated strong, positive staining for A2bR, while cells without stimulation exhibited little or none, suggesting that the applied electrical signals may activate A2bR directly. At the macro-scale level, cells in stimulated explants also stained positively for A2bR, whereas those not receiving stimulation appeared negative (Fig. 5c), lending further evidence to the potential relationship between electrical signals and activation of the adenosine A2b receptors in meniscus. Moreover, the consistency in protein expression from the micro- to macro-scale continues to support the use of micro-scale systems for evaluation of a broad range of electrical stimulation regimes, in order to identify the optimal parameters to apply at the macro-scale tissue level.
Discussion
Electrical stimulation is a versatile treatment modality that has yet to be fully explored in the context of injuries to the meniscus, in which negatively charged glycosaminoglycans form the basis for endogenous electrical activity 26,29,30 . We demonstrate that pulsatile direct current electric fields enhance meniscus cell migration in a micropatterned hydrogel system, and the integrative repair of meniscus defects in an in vitro explant model of meniscus healing. Notably, the responses of meniscus cells from the inner and outer regions were comparable, suggesting that the meniscus cells from both regions can be induced to migrate and promote healing by external electrical signals.
In our 3-D systems, electrical stimulation alone induced similar migration behavior in cells isolated from both the inner and outer regions, whereas previous studies have shown that outer cells migrate more quickly than inner cells during 2-D galvanotaxis 42 . This disparity in observation may arise in part from the use of distinct cell populations within the meniscus between the two studies: cells isolated by tissue explant outgrowth in culture in the present study, versus cells isolated by release from tissue matrix via digestion in the galvanotaxis studies 42 . The inner and outer meniscus cells used in our study were previously characterized as a population capable of differentiating along multiple lineages, and may represent tissue-specific stem cells contributing to endogenous repair 51 . These cells, by nature of their isolation via tissue outgrowth, exhibit a predisposition for migration, which may also account for the level of motility observed in cells, even in the absence of electrical stimulation. As such, an enriched population of these cells after passaging may have led to a more robust response to exogenous electrical stimuli than a mixed population obtained by digestion in previous studies. In addition, the applied electric fields in the current study were perpendicular to the direction of cell migration, but no bias in polarity of movement was observed, in contrast to the cathodal migration of meniscus cells during 2-D galvanotaxis 42 . However, the observed behavior is consistent with studies of human fibroblasts on 3-D collagen gels in static DC fields, in which preferred movement occurred perpendicular to the axis of stimulation, but not towards a specific pole 43 .
The fibrin hydrogel used in the micropatterned system represented a simple three-dimensional environment to observe meniscus cell migration phenomena. However, it does not begin to replicate the dense tissue matrix found in the native meniscus. Therefore, it was particularly encouraging to discover that the parameters optimized at the hydrogel level were applicable at the tissue level, inducing the migration of endogenous repair cells to the defect interface. However, migration at the tissue level also necessarily requires remodeling of the dense surrounding matrix to enable movement, which involves the interplay and balance of pro-inflammatory cytokines and MMPs, and anti-inflammatory and anabolic factors. After meniscus injury, experimentally induced or clinically occurring, this balance is tilted in favor of pro-inflammatory, catabolic events, necessitating examination of interventions that can re-initiate synthesis of key ECM components. It has been previously reported that meniscus explants lose their biochemical properties, specifically, DNA 49 and GAG 49,50 content, over time in culture. In this study, a similar loss in DNA and GAG content was seen in explants without stimulation over six weeks, and in addition, we observed a significant loss of collagen content. Interestingly, the explants receiving the optimal stimulation regime maintained their biochemical composition, suggesting a stabilizing effect of electrical stimulation, moving the balance away from the destructive aspect of remodeling towards the constructive side. By applying the findings on the cellular micro-scale to the tissue macro-scale, an additional benefit of electrical stimulation was revealed, that it may counteract the natural degradation of explant tissue in vitro, in addition to mediating cell migration.
In order to understand the basis of the in vitro degradation, the concentrations of the pro-inflammatory cytokines TNF-α and IL-1β were measured in the media collected during explant culture. The fetal bovine serum used in culture medium contained baseline amounts of both cytokines, but it is notable that statistically significant differences existed between the levels of TNF-α and IL-1β in explant culture media, with and without stimulation, in the absence of exogenous supplementation of either cytokine or other pro-inflammatory mediators. In clinical observation, synovial fluid from patients who underwent arthroscopy for a meniscus tear contained between 35-85 pg/mL TNF-α, and 90-125 pg/mL IL-1β 52 . The level of TNF-α detected in this system was well above what was found in a pathological state of meniscus injury, but within the working range of exogenous supplementation in other studies of meniscus repair 20 . In contrast, IL-1β was below both pathologic levels and the range used in other meniscus studies 53 . IL-1α levels were not measured in this study, but are relevant due to the differences in relative potency of IL-1α and IL-1β in porcine meniscus explants 54 . Moreover, the effects of TNF-α and IL-1 are not independent, and interplay between the two cytokines in activating pro-inflammatory pathways has been documented [14][15][16][17][18][19] . The lower levels of TNF-α and IL-1β in explants receiving the optimal stimulation regime suggest inhibitory effects of electrical stimulation on their production. In contrast, increases in these cytokines were observed in explants without stimulation, with corresponding increases in NO production, MMP activity, and subsequent GAG loss in media. These two observations correlate with previous findings demonstrating that inhibition of MMPs can significantly reduce GAG loss in culture 50 . The loss of explant stability in the absence of electrical stimulation, particularly in the first half of the culture period, corresponds with the observation that meniscus injury disrupts tissue homeostasis and initiates a series of events furthering degradation 8,55-57 .
The migration and anti-inflammatory responses elicited by electrical stimulation occur via the activation of membrane receptors on meniscus cells. It has been previously postulated that electromagnetic fields can serve as first messengers in signaling towards tissue repair 58 . Based upon the findings in this study, a new model of electrotransduction in meniscus could be proposed: after meniscus injury, TNF-α and IL-1 levels are elevated, progressing toward further tissue degradation (Fig. 6a). Within this environment, the application of pulsatile DC electrical stimulation selectively activates the adenosine A2b receptor, producing the second messenger cAMP, which triggers anti-inflammatory pathways that reduce NO production, MMP activity, and GAG release, and inhibit the initial pro-inflammatory cytokines (Fig. 6b).
The physiological role of the low-affinity A2bR has been less characterized than that of the high-affinity A2aR, which has been studied extensively for anti-inflammatory activity in the context of osteoarthritis and rheumatoid arthritis [59][60][61] . However, activation of the A2bR, leading to elevated cAMP, has been shown to inhibit MMP-1 production in fibroblast-like synoviocytes from RA patients 62 , and in this study, we report for the first time in meniscus, among the class of adenosine receptors, the increased expression of A2bR following electrical stimulation at both the micro- and macro-scale. In order to fully elucidate the role of A2bR in meniscus electrotransduction, further experiments using receptor agonists and antagonists must be tested in meniscus defect models, with the addition of exogenous sources of pro-inflammatory cytokines to the system, and measurement of changes in adenylyl cyclase and cAMP levels. To this end, our micro-scale system for meniscus cell migration will allow for blocking or knock-down of the adenosine A2b receptor by antagonists or gene silencing, in order to verify the integral role of A2bR via short-term experiments, and subsequent application at the macro-scale in long-term culture of explants, for confirmation at the tissue level. Moreover, cells and explants in this study were derived from juvenile bovine meniscus, requiring future studies using adult or even osteoarthritic tissue to confirm and extend findings. While growth factors were not supplemented to medium in this system, the addition of TGF-β3 alone 49 or TGF-β1 in the presence of IL-1 63 has been shown to significantly enhance meniscus repair, meriting the further study of any potential synergy between electrical and chemical stimuli. However, it should be noted that the reparative and protective effects of electrical stimulation alone, as shown in this study, are more immediately translatable to the clinic, without the need for in vivo delivery of growth factors. Moreover, the integration strength of explants stimulated at the optimal regime is among the highest reported in literature for this model of meniscal repair, even in the absence of exogenous growth factors [20][21][22]49,63,64 .
The contribution of growth factors was explored at the micro-scale through the co-culture of endothelial cells, as a model of clinical methods that increase the vascular response to the inner region 65 . Although the meniscus is vascularized in adults 67 , and endothelial cells from the meniscus have been isolated and cultured 68 , the exact percentage of endothelial cells in the tissue remains unknown. However, endothelial cells may potentiate their effects via paracrine signaling, and these angiogenic factors can be supplemented in avascular tissues where they and endothelial cells are natively absent, such as in the inner region of the meniscus. Here we report that the combination of biophysical and biochemical stimuli via EFs and co-culture with HUVECs cooperatively enhanced the migration of meniscus cells in a 3-D environment. Despite the variation in vascularity of their respective tissue regions, inner and outer cells demonstrated trends of increased migration in the presence of both stimuli as compared to either stimulus alone. The cooperative effects of electrical signals and HUVECs appeared to result from the direct effects of electrical signals on HUVECs, which have been shown to regulate HUVEC behavior 46 , and in our system, upregulated the gene expression of angiogenic factors such as endothelin-1 and PDGF isoforms, previously shown to act on chondrocytes 47 and meniscus cells 48 . Curiously, we observed a regional variation in expression of endothelin and PDGF receptors by meniscus cells in co-culture with HUVECs, which is now the focus of ongoing studies to examine the differential paracrine mechanisms of endothelial cells on the meniscus. Specifically, we will investigate if direct interactions between meniscus and endothelial cells are necessary, or if paracrine factors secreted by endothelial cells are sufficient to induce migration behavior in outer cells, which natively co-exist with vasculature, versus inner cells, which do not. The differential expression of PDGF receptor genes in outer versus inner cells in co-culture with HUVECs may arise from the presence of endothelial cells in one region versus the other, a disparity that was further amplified by electrical signals. However, the upregulation of ETAR gene expression in cells of both regions with co-culture suggests that ET-1 signaling is involved in the vascular response of the outer region for healing. Moreover, the limited healing potential of the inner region could be enhanced if ET-1 is supplemented to the avascular tissue, which does not receive these signals natively in the absence of endothelial cells. If these differential paracrine mechanisms of HUVECs on cells from the inner and outer regions are validated in future studies, by applying conditioned medium or specific angiogenic factors, we not only gain a further understanding of the variation in repair response between the two regions, but we also identify the angiogenic factors that are natively absent in the inner region, but can be supplemented in order to promote healing.
In addition, we found that the coordinated action of EFs and HUVECs on meniscus cell migration was more apparent using a different set of stimulation parameters than for EFs alone. This observation, as well as the variation in migration response seen during initial optimization of parameters for meniscus cells only, is in line with previous theories that a window exists in which productive electrical stimulation occurs 33 , and parameters must be optimized for all relevant cell types within the system. Moreover, although the co-culture studies were performed at one fixed channel-to-channel distance, the system can be easily modified to vary this distance by simply changing the design of the micropattern, in order to identify a length scale over which paracrine signaling occurs. Although it was not tested in this study, the interaction of electrical stimulation and endothelial cells is pertinent at the tissue level as well, with implications for clinical and adjuvant therapies. However, in vitro models of meniscus repair cannot fully replicate the regional variation in vascular environment within the tissue and its influence on healing 64 . Experimentation in an animal model is the most suitable next step towards validation of electrical stimulation as a modality for treatment of meniscus injury.
In summary, the effects of pulsatile electric field stimulation on meniscus repair were demonstrated for the first time. Optimization of electrical stimulation regimes for cell migration was achieved at the micro-scale with short-term experiments, and then applied at the macro-scale towards significant integrative repair of full-thickness defects in meniscus explants. In situ stimulation of explants revealed another role of electric fields in modulating inflammation and reducing tissue degradation. The dual functions of electrical stimulation in meniscus support its use in the clinical setting, after surgical intervention, to recruit cells for tissue repair and to prevent the matrix degradation that leads to osteoarthritis. Future establishment of therapeutic strategies, by testing stimulation regimes and translating to macro-scale tissue repair, has the potential to overcome current limitations in meniscus healing.
Methods
Cell isolation. Juvenile bovine meniscus cells were isolated from the inner and outer regions as previously described 51 . Briefly, the menisci of calves were obtained from a commercial source (Green Village Packing Company), dissected within 36 h of slaughter, and sectioned into inner (2/3) and outer (1/3) regions. The tissue was then minced into 1-2 mm³ pieces, and plated on tissue culture plates in basal medium consisting of high glucose DMEM, 1× antibiotic-antimycotic, 10% FBS, and 50 µg/mL ascorbate 2-phosphate. Over 2-3 weeks, cells migrated out of the tissue pieces, and were expanded to passage 2.
Human umbilical vein endothelial cells (HUVECs) were isolated from fresh umbilical cords of term delivery, collected according to an active IRB at Columbia University (IRB-AAAC4839) 69 . According to the protocol, tissue samples were fully de-identified and there was no patient information available to the investigators. Umbilical cords were rinsed with PBS, and the umbilical vein lumen was infused with a 2.5% trypsin solution and clamped at both ends. Enzymatic digestion occurred at 37 °C for 15 min, and the resulting cell digest was collected by rinsing the vein with PBS and centrifugation at 300 × g for 5 min. The cell pellet was resuspended on 25 cm² tissue culture flasks in endothelial growth medium (EGM-2, Lonza), and expanded to passage 5. Explant culture. Juvenile bovine meniscus explants were harvested from the central tissue region using sterile 4 mm Ø biopsy punches, and cut to 1.5 mm height using a custom microtome device. A 1.5 mm Ø central core was punched and immediately replaced into the explant ring to simulate a full-thickness defect. Explants were cultured for 3 days in basal medium prior to the start of each experiment.
Cell migration assay. The micropatterned three-dimensional hydrogel system used in cell migration studies was previously established for the study of human mesenchymal stem cells and HUVECs in co-culture 69 . Briefly, the poly(dimethylsiloxane) (PDMS; 9:1 elastomer:curing agent; Sylgard 184 silicone elastomer kit, Dow Corning) micropattern consisted of two parallel channels (1 cm length, 1000 µm width, 200 µm height, channel-to-channel distance: 2000 µm; Fig. 1b). Prior to cell encapsulation, the PDMS surface was blocked using a 5% solution of FBS in sterile distilled water.
In single channel studies (Fig. 1a), inner or outer meniscus cells were encapsulated at 3.5 × 10⁶ cells/mL each in 1.8% fibrin (bovine fibrinogen, MP Biomedicals; thrombin from bovine plasma, Sigma-Aldrich) and printed into single channels on plastic slides. The cell-fibrin suspension was allowed to polymerize for 15 min, before removal of the PDMS mold and encapsulation in an additional layer of 1.8% fibrin to permit cell migration. In co-culture studies (Fig. 1b), inner or outer meniscus cells and HUVECs were encapsulated at 3.5 × 10⁶ cells/mL each in 1.8% fibrin, printed into two parallel channels on plastic slides, and covered by another layer of 1.8% fibrin. Single channels of inner or outer meniscus cells served as controls. Cultures were maintained in EGM-2 over 6 days after encapsulation, and migration was monitored by bright field imaging at days 0 and 6, using an Olympus IX81 microscope with an IX2-UCB digital camera and Metamorph software.
Electrical stimulation of cells. After 3 days of pre-culture without stimulation, the cells were transferred into custom bioreactors with carbon electrodes (Ladd Research Industries) spaced 2.5 cm apart (Fig. 1a) 70 . Bioreactors were connected by platinum wire to an electrical stimulator (Grass Technologies) generating continuous pulses of 7.5 V, corresponding to a field strength of 3 V/cm, and various frequency and pulse duration combinations (0.1, 1, 10 Hz and 2 ms pulse duration; 10 Hz and 0.2 ms pulse duration) for 3 days. Bioreactors not connected to the stimulator served as controls (0 V).
Electrical stimulation of explants. After 3 days of pre-culture, explants were stimulated in a custom bioreactor (Fig. 1c), consisting of a 5 × 6 array outfitted with carbon electrodes that were spaced 1 cm apart, and connected directly to a stimulator generating pulses of 3 V, corresponding to a field strength of 3 V/cm, and either 1 Hz and 2 ms pulse duration or 10 Hz and 0.2 ms pulse duration. Each row was independently stimulated, with six wells per row receiving the same stimulation regime. Rows not connected to the stimulator served as controls (0 V). Explants were stimulated four days a week over six weeks of culture in basal medium, with media collection at each change, and sample collection for mechanical integration testing, biochemical assays, and histological and immunofluorescence staining at biweekly time points.
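The two stimulation regimes compared throughout (1 Hz with 2 ms pulses, and 10 Hz with 0.2 ms pulses) deliver the same nominal field strength and the same pulse on-time per second. A minimal sketch of that arithmetic is given below; the function names are ours, not part of any published protocol, and the numbers simply restate the values quoted above.

```python
# Sanity checks on the stimulation parameters quoted above (illustrative helper only).

def field_strength_v_per_cm(applied_volts: float, electrode_gap_cm: float) -> float:
    """Nominal field between parallel electrodes."""
    return applied_volts / electrode_gap_cm

def duty_cycle(frequency_hz: float, pulse_width_ms: float) -> float:
    """Fraction of each second the pulse is on; proportional to injected charge."""
    return frequency_hz * pulse_width_ms * 1e-3

print(field_strength_v_per_cm(7.5, 2.5))        # cell bioreactor: 3.0 V/cm
print(field_strength_v_per_cm(3.0, 1.0))        # explant bioreactor: 3.0 V/cm
print(duty_cycle(1, 2.0), duty_cycle(10, 0.2))  # 0.002 and 0.002 -> same injected charge
```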
Evaluation of meniscus cell migration. Bright field images were processed using a custom MATLAB program 71 to track cell migration. Briefly, the program applied the Sobel edge detection algorithm and eliminated small regions of isolated, single meniscus cells. Each image was evenly subdivided into regions corresponding to ∼1580 µm of the channel length, for which average channel-to-channel distances were calculated.
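The original analysis was performed in the MATLAB program cited above; the following Python sketch only illustrates the same idea under our own assumptions (Sobel edge detection, removal of isolated cells, and a per-sub-region gap measurement) and is not a reproduction of that program.

```python
# Illustrative edge-based measurement of the gap between two facing cell fronts.
import numpy as np
from scipy import ndimage

def channel_gap(image: np.ndarray, n_regions: int = 4, um_per_pixel: float = 1.0):
    """Gap between the facing fronts of two parallel cell channels,
    reported per sub-region along the channel length (micrometers)."""
    edges = np.hypot(ndimage.sobel(image, axis=0), ndimage.sobel(image, axis=1))
    mask = ndimage.binary_opening(edges > edges.mean() + 2 * edges.std())  # drop lone cells
    mid = mask.shape[1] // 2                                  # assume one channel per image half
    gaps = []
    for block in np.array_split(mask, n_regions, axis=0):     # subdivide along channel length
        left = np.where(block[:, :mid].any(axis=0))[0]
        right = np.where(block[:, mid:].any(axis=0))[0]
        if left.size and right.size:
            gaps.append(((right.min() + mid) - left.max()) * um_per_pixel)
        else:
            gaps.append(np.nan)
    return np.asarray(gaps)
```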
Real-time PCR. The total RNA of meniscus cells or HUVECs after six days of culture was extracted by TRIzol reagent (Invitrogen) and reverse transcribed (High Capacity cDNA Reverse Transcription Kit, Applied Biosystems). Meniscus cells and HUVECs were also collected at day 0 prior to encapsulation and processed to facilitate fold change analysis (2^-ΔΔCT). Real-time PCR was performed on an Applied Biosystems StepOnePlus Real-Time PCR System using Fast SYBR Green Master Mix (Invitrogen), custom bovine primers for COL1A2, EDNRA, PDGFRA, PDGFRB, ADORA1, ADORA2A, ADORA2B, ADORA3, and GAPDH, and custom human primers for EDN1, PDGFA, PDGFB, and GAPDH.
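For readers unfamiliar with the 2^-ΔΔCT convention used above, a minimal sketch of the calculation is shown below with made-up Ct values; GAPDH is the housekeeping reference and day 0 is the calibrator, as in the text.

```python
# 2^-ΔΔCt fold change relative to day 0, normalized to GAPDH (illustrative values).

def fold_change(ct_gene_d6, ct_gapdh_d6, ct_gene_d0, ct_gapdh_d0):
    delta_ct_d6 = ct_gene_d6 - ct_gapdh_d6     # normalize day-6 sample to GAPDH
    delta_ct_d0 = ct_gene_d0 - ct_gapdh_d0     # normalize day-0 calibrator to GAPDH
    return 2.0 ** -(delta_ct_d6 - delta_ct_d0)

print(fold_change(24.0, 18.0, 26.0, 18.0))      # 4.0 -> ~4-fold upregulation at day 6
```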
Mechanical integration testing of explants. The integration of full-thickness defects in explants was tested with a custom device consisting of a 1.33 mm Ø indenter in series with a 50 g load cell, placed above a cup with a 2 mm Ø hole. Prior to testing, the height of each sample was measured using a digital caliper. The indenter was displaced at a ramp rate of 0.3% of the sample height per second, until the central core was pushed fully through the outer ring. Integration strength was calculated as the ratio of the peak force to the surface area of contact between the central core and outer ring. After testing, samples were collected for further biochemical analysis.
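As a worked illustration of the integration-strength definition above (peak push-out force divided by the core-ring contact area), the sketch below uses the stated core diameter and explant height together with an invented peak force; only the geometry comes from the text.

```python
# Integration strength = peak force / lateral contact area of the 1.5 mm core.
import math

def integration_strength_kpa(peak_force_n: float, core_diameter_mm: float,
                             sample_height_mm: float) -> float:
    area_mm2 = math.pi * core_diameter_mm * sample_height_mm  # cylindrical interface
    return peak_force_n / area_mm2 * 1000.0                   # N/mm^2 (MPa) -> kPa

print(integration_strength_kpa(0.35, 1.5, 1.5))               # ~49.5 kPa (force value is made up)
```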
Biochemical analysis of explants and media. The biochemical composition of explants was evaluated for DNA, sulfated GAG, and OHP content. Briefly, samples were lyophilized for 24 h and digested at 60 °C for 16 h in a papain solution containing 125 µg/mL papain (Sigma-Aldrich), 50 mmol phosphate buffer (pH 6.5), and 2 mmol N-acetyl cysteine (Sigma-Aldrich). DNA content of the sample digests was obtained using the PicoGreen dsDNA quantitation kit (Invitrogen), according to the manufacturer's protocol. Sulfated GAG content was measured using the 1,9-dimethylmethylene blue dye-binding (DMMB) assay 72 . Hydroxyproline (OHP) content was quantified by a modified acid hydrolysis assay 73 . Media collected at each change was assayed for TNF-α, IL-1β, and NO production, MMP activity, and GAG release using the DMMB assay. Briefly, TNF-α and IL-1β production were quantified directly on media samples using the bovine TNF-α (GenWay Biotech) and IL-1β (Thermo Fisher Scientific) ELISA kits, respectively. Prior to NO analysis, media samples were filtered through Vivaspin 500 units (10,000 MWCO PES filters, Sartorius), and total NO production in media samples was determined by quantification of nitrate and nitrite, according to the manufacturer's protocol (nitrate/nitrite colorimetric assay kit, Cayman Chemical Company). The activity of MMP-1, -2, -3, -9, -13, and -14 was assessed via cleavage of a fluorescent peptide substrate (PEPDAB008, BioZyme), adapted from a previously published protocol 53 . Media samples were incubated in either 2.5 mM p-aminophenylmercuric acetate (APMA; pH 7.0-7.5) in assay buffer (200 mM NaCl, 50 mM Tris, 5 mM CaCl₂, 10 µM ZnSO₄, 0.01% Brij 35, pH 7.5) or in assay buffer alone at 37 °C for 5 h. Samples were then diluted twofold with assay buffer containing 20 µM substrate at 37 °C for 2 h, and measured at 485 nm excitation and 530 nm emission. Total MMP activity was calculated as the difference in fluorescence readings between samples incubated with and without APMA.
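The last step above (difference between APMA-activated and non-activated readings) is simple enough to restate explicitly; the numbers in this sketch are arbitrary fluorescence units, not data from the study.

```python
# Total MMP activity as the APMA-activated minus non-activated fluorescence signal.

def total_mmp_activity(signal_with_apma: float, signal_without_apma: float) -> float:
    return signal_with_apma - signal_without_apma

print(total_mmp_activity(1250.0, 830.0))   # 420.0 arbitrary fluorescence units
```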
5-bromo-2-deoxyuridine labeling of cells and explants. At the endpoint of both migration and integration studies, samples were labeled with 5-bromo-2-deoxyuridine (BrdU, Invitrogen) to assess cellular proliferation. Single channel studies of meniscus cells were incubated with BrdU labeling reagent (1:50) at 37 °C for 4 h, and explants (1:50) at 37 °C overnight. Samples were then fixed for further histological processing.
Histological and immunofluorescence staining of cells and explants. Cell migration samples were fixed in a graded series of formaldehyde from 1% to 4% for 10 min each, and maintained in PBS at 4 °C until preparation for staining. Briefly, samples were blocked and permeabilized with agitation in PBS containing 5% BSA and 0.1% Triton X-100 at 25 °C for 1 h, and incubated at 4 °C overnight with the primary antibody (rabbit anti-adenosine A2b receptor antibody, 50 µg/mL, Millipore) in PBS with 5% BSA. After 3 washes of 30 min each in PBS at 25 °C with agitation, samples were incubated at 4 °C overnight with Alexa Fluor® 555 goat anti-rabbit IgG (1:200, Invitrogen) and DAPI (1:200, Invitrogen), followed by 3 additional washes. Alternatively, samples were directly incubated after blocking and permeabilization with DAPI and mouse anti-BrdU antibody (Alexa Fluor® 488 conjugate, 1:50, Invitrogen).
Explants were fixed in 4% formaldehyde at 4 °C overnight and embedded in 2% low-melting-temperature agarose. All samples were dehydrated in a graded series of ethanol, embedded in paraffin, and sectioned to 8 µm thickness. Sections were stained with hematoxylin and eosin (H&E) for nuclei and cytoplasmic elements, respectively, Alcian blue (pH 1.0) for sulfated GAGs, and Picrosirius red for collagens. For immunofluorescence staining of A2bR in paraffin-embedded samples, antigen retrieval was performed with 10 mM citrate buffer (pH 6.0). Blocking in PBS with 5% BSA, and primary and secondary antibody incubations in PBS with 5% BSA were completed, as previously described. Immunohistochemical staining of BrdU was conducted, according to the manufacturer's protocol (BrdU staining kit, Invitrogen). Stained hydrogels and explants were imaged with an Olympus FSX100 microscope.
Statistical analysis. For cell migration studies, 1-way ANOVA with Tukey post tests or post tests for linear trend were performed with Prism (α = 0.05). For integration studies, 2-way ANOVA with Bonferroni post tests and 1-way ANOVA with post tests for linear trend were also performed with Prism. All data are presented as mean ± SEM.
Macroscopic Polarization from Electronic Wavefunctions
The dipole moment of any finite and neutral system, having a square-integrable wavefunction, is a well defined quantity. The same quantity is ill-defined for an extended system, whose wavefunction invariably obeys periodic (Born-von Karman) boundary conditions. Despite this fact, macroscopic polarization is a theoretically accessible quantity, for either uncorrelated or correlated many-electron systems: in both cases, polarization is a rather "exotic" observable. For an uncorrelated-either Hartree-Fock or Kohn-Sham-crystalline solid, polarization has been expressed and computed as a Berry phase of the Bloch orbitals (since 1993). The case of a correlated and/or disordered system received a definitive solution only very recently (1998): this latest development allows us to present here the whole theory from a novel, and very general, viewpoint. The modern theory of polarization is even relevant to the foundations of density functional theory in extended systems.
Introduction
The dipole moment of any finite N-electron system in its ground state is a simple and well defined quantity. Given the many-body wavefunction Ψ and the corresponding single-particle density n(r), the electronic contribution to the dipole is determined by

⟨R̂⟩ = ⟨Ψ| R̂ |Ψ⟩ = ∫ dr r n(r),   (1)

where R̂ = Σ_{i=1}^N r_i (atomic Hartree units are adopted throughout). This looks very trivial, but we are exploiting here an essential fact: the ground wavefunction of any finite N-electron system is square-integrable and vanishes exponentially at infinity; the density vanishes exponentially as well.
Considering now a macroscopic solid, the related quantity is macroscopic polarization, which is a very essential concept in any phenomenological description of dielectric media [1]: this quantity is ideally defined as the dipole of a macroscopic sample, divided by its volume. The point is that, when using Eq. (1), the integral is dominated by what happens at the surface of the sample: knowledge of the electronic distribution in the bulk region is not enough to unambiguously determine the dipole. This looks like a paradox, since in the thermodynamic limit macroscopic polarization must be an intensive quantity, insensitive to surface effects.
Macroscopic polarization in the bulk region of the solid must be determined by what "happens" in the bulk as well. This is the case if one assumes a model of discrete and well separated dipoles, à la Clausius-Mossotti: but real dielectrics are very much different from such an extreme model. The valence electronic distribution is continuous, and often very delocalized (particularly in covalent dielectrics). Most textbooks attempt to explain the polarization of a periodic crystal via the dipole moment of a unit cell, or something of the kind [2,3]. These definitions are incorrect [4]: according to the modern viewpoint, bulk macroscopic polarization is a physical observable completely independent from the periodic charge distribution of the polarized crystalline dielectric.
In condensed matter physics the standard way for getting rid of undesired surface effects is to adopt periodic Born-von Kármán boundary conditions (BvK). Indeed, the BvK choice is mandatory in order to introduce even the most elementary topics, such as the free-electron gas and its Fermi energy, or the Bloch theorem [2,3]. Unfortunately, the adoption of BvK does not solve the polarization problem. In fact the dipole cannot be evaluated as in Eq. (1) when the wavefunction obeys BvK: the integrals are ill-defined due to the unbounded nature of the quantum-mechanical position operator.
For this reason macroscopic polarization remained a major challenge in electronic structure theory for many years. The breakthrough came in 1992, when polarization was defined in terms of the wavefunctions, not of the charge. This definition has an unambiguous thermodynamic limit, such that BvK and Bloch states can be used with no harm [5]. In the following months a modern theory of macroscopic polarization in crystalline dielectrics has been completely established [6,7], thanks to a major advance due to R.D. King-Smith and D. Vanderbilt [8], who expressed polarization in terms of a Berry phase [9,10,11]. A comprehensive account of the modern theory exists [7]. Other less technical presentations are available as well [12,13,14]; for an oversimplified nontechnical outline see Ref. [15]. First-principle calculations based on this theory have been performed for several crystalline materials, either within DFT (using various basis sets) or within HF (using LCAO basis sets) [16].
All of the above quoted work refers to a crystalline system within an independent-electron formulation: the single-particle orbitals have then the Bloch form, and the macroscopic polarization is evaluated as a Berry phase of them. The related, but substantially different, problem of macroscopic polarization in a correlated many-electron system was first solved by Ortíz and Martin in 1994 [17]. However, according to them, polarization is defined-and computed [18,19]-by means of a peculiar "ensemble average", integrating over a set of different electronic ground states: this was much later (1998) shown to be unnecessary. In Ref. [20], in fact, a simpler viewpoint is taken: the polarization of a correlated solid is defined by means of a "pure state" expectation value, although a rather exotic kind of one. By the same token, it was also possible to define [11,20,21]-and to compute [22]-macroscopic polarization in noncrystalline systems.
In the present work we take advantage of the most recent developments for reconsidering the whole theory under a new light. At variance with previous presentations we are not going to introduce explicitly the Berry phase concept: in fact, within the formulation of Ref. [20], the Berry phase appears very much "in disguise". Instead, a major role is played by a precursor work [23,24], apparently unrelated either to the polarization problem or to a Berry phase, which will be reexamined here and used to introduce the polarization theory.
Finally, let me just mention a latest development, not to be discussed in the present work, where ideas spawned from the polarization theory, and more specifically from Ref. [20], are used to investigate wavefunction localization [25].
The "electron in broth"
Adopting a given choice for the boundary conditions is tantamount to defining the Hilbert space where our solutions of Schrödinger's equation live. For the sake of simplicity, I am presenting the basic concept by means of the onedimensional case. For a single-particle wavefunction BvK reads ψ(x + L) = ψ(x), where L is the imposed periodicity, chosen to be large with respect to atomic dimensions. Notice that lattice periodicity is not assumed, and BvK applies to disordered systems as well.
By definition, an operator maps any vector of the given Hilbert space into another vector belonging to the same space: the multiplicative position operator x is therefore not a legitimate operator when BvK are adopted for the state vectors, since x ψ(x) is not a periodic function whenever ψ(x) is such. It is then obvious why Eq. (1) cannot be used in condensed matter theory. Of course, any periodic function of x is a legitimate multiplicative operator in the Hilbert space: this is the case e.g. of the nuclear potential acting on the electrons.
Before switching to the polarization problem, it is expedient to discuss an important precursor work, apparently unrelated to the polarization problem, where nonetheless the expectation value of the position operator plays the key role.
Some years ago, A. Selloni et al. [23] addressed the properties of electrons dissolved in molten salts at high dilution, in a paper which at the time was commonly nicknamed the "electron in broth". The physical problem was studied by means of a mixed quantum-classical simulation, where a lone electron was adiabatically moving in a molten salt (the "broth") at finite temperature. The simulation cell contained 32 cations, 31 anions, and a single electron. KCl was the original case study, which therefore addressed the liquid state analogue of an F center; other systems were studied afterwards [24]. The motion of the ions was assumed as completely classical, and the Newton equations of motion were integrated by means of standard molecular dynamics (MD) techniques, though the ionic motion was coupled to the quantum degree of freedom of the electron. The electronic ground wavefunction was determined solving the time-dependent Schrödinger's equation at each MD time step. As usual in MD simulations, periodic boundary conditions were adopted for the classical ionic motion. Ideally, the ionic motion occurs in a simulation cell which is surrounded by periodic replicas: inter-cell interactions are accounted for, thus avoiding surface effects. Analogously, the electronic wavefunction is chosen in the work of Selloni et al. to obey BvK over the simulation cell, and therefore features periodic replicas as well. A plot of such an electronic distribution, in a schematic one-dimensional analogue, is given in Fig. 1.
One of the main properties investigated in Ref. [23] was the electronic diffusion, where the thermal ionic motion is the driving agent (within the adiabatic approximation). In order to perform this study, one has to identify first of all where the "center" of the electronic distribution is. Intuitively, the distribution in Fig. 1 appears to have a "center", which however is defined only modulo the replica periodicity, and furthermore cannot be evaluated simply as in Eq. (1), i.e. as ⟨x⟩ = ∫ dx x |ψ(x)|², precisely because of BvK. Selloni et al. solved the problem by means of a very elegant and far-reaching formula, presented below. The work of Ref. [20] can be regarded as the many-body generalization of it. According to Refs. [20,23,24,25], the key quantity for dealing with the position operator within BvK is the dimensionless complex number z, defined as

z = ⟨ψ| e^{i(2π/L)x} |ψ⟩ = ∫₀^L dx e^{i(2π/L)x} |ψ(x)|²,   (2)

whose modulus is no larger than 1. The most general electron density, such as the one depicted in Fig. 1, can always be written as a superposition of a function n_loc(x), normalized over (−∞, ∞), and of its periodic replicas:

|ψ(x)|² = Σ_{m=−∞}^{∞} n_loc(x − x₀ − mL).   (3)

Both x₀ and n_loc(x) have a large arbitrariness: we restrict it a little bit by imposing that x₀ is the center of the distribution, in the sense that ∫_{−∞}^{∞} dx x n_loc(x) = 0. Using Eq. (3), z can be expressed in terms of the Fourier transform of n_loc as

z = e^{i(2π/L)x₀} ∫_{−∞}^{∞} dx e^{i(2π/L)x} n_loc(x).   (4)

If the electron is localized in a region of space much smaller than L, its Fourier transform is smooth over reciprocal distances of the order of L⁻¹ and can be expanded as

∫_{−∞}^{∞} dx e^{i(2π/L)x} n_loc(x) ≃ 1 + i(2π/L) ∫_{−∞}^{∞} dx x n_loc(x) + O(L⁻²) = 1 + O(L⁻²).   (5)

A very natural definition of the center of a localized periodic distribution |ψ(x)|² is therefore provided by the phase of z as

⟨x⟩ = (L/2π) Im ln z,   (6)

which is in fact the formula first proposed by Selloni et al. [23,24]. The expectation value ⟨x⟩ is defined modulo L, as expected since |ψ(x)|² is BvK periodic.
The above expressions imply ⟨x⟩ ≃ x₀ mod L; in the special case where n_loc(x) can be taken as an even (centrosymmetric) function, its Fourier transform is real and Eq. (4) yields indeed ⟨x⟩ ≡ x₀ mod L. In the case of extreme delocalization we have instead |ψ(x)|² = 1/L and z = 0: hence the center of the distribution ⟨x⟩, according to Eq. (6), is ill-defined. For a more general delocalized state, we expect that z goes to zero at large L [25].
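As a purely illustrative numerical check (not taken from the paper), the short script below builds a localized, BvK-periodic density on a grid, evaluates z as in Eq. (2), and recovers the center from the phase of z as in Eq. (6); the supercell length, grid size, and Gaussian parameters are arbitrary choices.

```python
# Recover the center of a localized periodic density from the phase of z.
import numpy as np

L = 10.0                                   # supercell length
x = np.linspace(0.0, L, 2000, endpoint=False)
dx = x[1] - x[0]

x0, sigma = 7.3, 0.4                       # Gaussian "electron" centered at x0, width << L
density = np.exp(-0.5 * ((x - x0) / sigma) ** 2)
density /= density.sum() * dx              # normalize |psi|^2 over one period

z = np.sum(np.exp(1j * 2 * np.pi * x / L) * density) * dx    # Eq. (2)
center = (L / (2 * np.pi)) * np.angle(z) % L                 # Eq. (6), defined mod L

print(abs(z), center)   # |z| close to 1 for a well-localized state; center ~ 7.3 (= x0)
```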
We have therefore arrived at a definition of ⟨x⟩ within BvK which has many of the desirable features we were looking for: nonetheless, there is a property that is even more important, and which we are going to demonstrate now. Suppose the potential which the electron moves in has a slow time dependence-as was the case in Ref. [23]-and we wish to follow the adiabatic evolution of the electronic state |ψ⟩. If we call |ϕ_j⟩ the instantaneous eigenstates at time t, the lowest order adiabatic evolution of the ground-state density matrix is [26]

|ψ⟩⟨ψ| ≃ |ϕ₀⟩⟨ϕ₀| + i Σ_{j≠0} [ |ϕ_j⟩⟨ϕ₀| ⟨ϕ_j|φ̇₀⟩/(ε_j − ε₀) − h.c. ],   (7)

where the phases have been chosen in order to make |ϕ₀⟩ orthogonal to its time derivative |φ̇₀⟩. The macroscopic electrical current flowing through the system at time t is therefore

j(t) = −(1/L) Tr[ |ψ⟩⟨ψ| p̂ ].   (8)

It is then rather straightforward to prove that j to lowest order in 1/L equals −(1/L) d⟨x⟩/dt, where ⟨x⟩ is evaluated using in Eq. (6) the instantaneous ground eigenstate:

j(t) ≃ −(1/L) (d/dt) ⟨x⟩ = −(1/2π) (d/dt) Im ln ⟨ϕ₀| e^{i(2π/L)x} |ϕ₀⟩.   (9)

This finding proves the value of the "electron-in-broth" formula, Eqs. (2) and (6), in studying electron transport [23,24].
The main formula: Many electrons
So much about the one-electron problem: we are now going to consider a finite density of electrons in the periodic box. To start with, irrelevant spin variables will be neglected: for the sake of notation simplicity, I will first illustrate the main concepts on a system of "spinless electrons". Even for a system of independent electrons, our approach takes a simple and compact form if a many-body formulation is adopted. BvK then imposes periodicity in each electronic variable separately:

Ψ(x₁, ..., x_i + L, ..., x_N) = Ψ(x₁, ..., x_i, ..., x_N),   i = 1, ..., N.   (10)

Our interest is indeed in studying a bulk system: N electrons in a segment of length L, where eventually the thermodynamic limit is taken: L → ∞, N → ∞, and N/L = n₀ constant. We also assume the ground state nondegenerate, and we deal with insulating systems only: this means that the gap between the ground eigenvalue and the excited ones remains finite for L → ∞.
We start by defining the one-dimensional analogue of R̂, namely the multiplicative operator X̂ = Σ_{i=1}^N x_i, and the complex number

z_N = ⟨Ψ| e^{i(2π/L) X̂} |Ψ⟩.   (11)

It is obvious that the operator X̂ is ill-defined in our Hilbert space, while its complex exponential appearing in Eq. (11) is well defined. The main result of Ref. [20] is that the ground-state expectation value of the position operator is given by the analogue of Eq. (6), namely

⟨X⟩ = (L/2π) Im ln z_N,   (12)

a quantity defined modulo L as above.
The right-hand side of Eq. (12) is not simply the expectation value of an operator: the given form, as the imaginary part of a logarithm, is indeed essential. Furthermore, its main ingredient is the expectation value of the multiplicative operator e^{i(2π/L)X̂}: it is important to realize that this is a genuine many-body operator. In general, one defines an operator to be one-body whenever it is the sum of N identical operators, acting on each electronic coordinate separately: for instance, the X̂ operator is such. In order to express the expectation value of a one-body operator the full many-body wavefunction is not needed: knowledge of the one-body reduced density matrix ρ is enough. I stress that, instead, the expectation value of e^{i(2π/L)X̂} over a correlated wavefunction cannot be expressed in terms of ρ, and knowledge of the N-electron wavefunction is explicitly needed. In the special case of a single determinant, the N-particle wavefunction is uniquely determined by the one-body reduced density matrix ρ (which is the projector over the set of the occupied single-particle orbitals): therefore the expectation value ⟨X⟩, Eq. (12), is uniquely determined by ρ. But this is peculiar to uncorrelated wavefunctions only: this case is discussed in detail below.
As in the one-body case, whenever the many-body Hamiltonian is slowly varying in time, the macroscopic electrical current flowing through the system is given by

J(t) = −(1/L) (d/dt) ⟨X⟩,   (13)

where ⟨X⟩ is evaluated using, in Eq. (11), the instantaneous ground eigenstate of the Hamiltonian at time t: this result is proved in Ref. [20]. Considering now the limit of a large system, ⟨X⟩ is an extensive quantity: the macroscopic current J, Eq. (13), goes therefore to a well defined thermodynamic limit. We stress that nowhere in our presentation have we assumed crystalline periodicity. Therefore our definition of ⟨X⟩ is very general: it applies to any condensed system, either ordered or disordered, either independent-electron or correlated.
Macroscopic polarization
In the Introduction, we have discussed what polarization is not, by outlining some incorrect definitions [2,3] and their problems [4]. We have not stated yet what polarization really is: to this aim, a few experimental facts are worth recalling. The absolute polarization of a crystal in a given state has never been measured as a bulk property, independent of sample termination. Instead, well known bulk properties are derivatives of the polarization with respect to suitable perturbations: permittivity, pyroelectricity, piezoelectricity, dynamical charges, and so on. In one important case-namely, ferroelectricity-the relevant bulk property is inferred from the measurement of a finite difference (polarization reversal). In all cases, the derivative or the difference in the polarization is typically accessed via the measurement of a macroscopic current. For instance, to measure the piezoelectric effect, the sample is typically strained along the piezoelectric axis while being shorted out with a capacitor (see Fig. 2).
The theory discussed here only concerns phenomena where the macroscopic polarization is induced by a source other than an electric field. Even in this case, the polarization may (or may not) be accompanied by a field, depending on the boundary conditions chosen for the macroscopic sample. The theory addresses polarization differences in zero field: this concerns therefore lattice dynamics, piezoelectricity (as in the ideal experiment sketched in Fig. 2), and ferroelectricity. Notably, the theory reported here does not address the problem of evaluating the dielectric constant: this can be done using alternative approaches, such as the well-established linear-response theory [13,27], or other more innovative theories [28].
The bulk quantity of interest, to be compared with experimental measurements, is the polarization difference between two states of the given solid, connected by an adiabatic transformation of the Hamiltonian. The electronic term in this difference is

ΔP_el = ∫₀^{Δt} dt J(t),   (14)

where J(t) is the current flowing through the sample while the potential is adiabatically varied, i.e. precisely the quantity discussed in the previous Section, Eq. (13). Notice that in the adiabatic limit Δt goes to infinity and J(t) goes to zero, while Eq. (14) yields a finite value, which only depends on the initial and final states. We may therefore write

P_el = −(1/L) ⟨X⟩ = −(1/2π) Im ln ⟨Ψ| e^{i(2π/L) X̂} |Ψ⟩,   (15)

where it is understood that Eq. (15) is to be used twice, with the final and with the initial ground states, in order to evaluate the quantity of interest ΔP. Notice that L → ∞ in Eq. (15) is a rather unconventional limit, since the exponential operator goes formally to the identity, but the size of the system and the number of electrons in the wavefunction increase with L.
The case of independent electrons
We now specialize to an uncorrelated system of independent electrons, whose N-electron wavefunction |Ψ⟩ is a Slater determinant. As discussed above, in this case the expectation value ⟨X⟩, Eq. (12), is uniquely determined by the one-body density matrix. However, the formulation is simpler when expressing ⟨X⟩ and the resulting polarization P directly in terms of the orbitals. We restore explicit spin variables from now on. Suppose N is even, and |Ψ⟩ is a singlet. The Slater determinant has thus the form

|Ψ⟩ = |ϕ₁ ϕ̄₁ ϕ₂ ϕ̄₂ ⋯ ϕ_{N/2} ϕ̄_{N/2}|,   (16)

where the ϕ_i are the single-particle orbitals, each doubly occupied, and the bar denotes the spin-down spin-orbital. It is then expedient to define

|Ψ̃⟩ = e^{i(2π/L) X̂} |Ψ⟩:   (17)

even |Ψ̃⟩ is indeed a Slater determinant, where each orbital ϕ_i(x) of |Ψ⟩ is multiplied by the plane wave e^{i(2π/L)x}. According to a well known theorem, the overlap amongst two determinants is equal to the determinant of the overlap matrix amongst the orbitals. We therefore define the matrix (of size N/2 × N/2)

S_{ij} = ⟨ϕ_i| e^{i(2π/L)x} |ϕ_j⟩ = ∫₀^L dx ϕ_i*(x) e^{i(2π/L)x} ϕ_j(x),   (18)

in terms of which we easily get

⟨X⟩ = (2L/2π) Im ln det S = (L/π) Im ln det S,   (19)

where the factor of 2 accounts for double spin occupancy, and the expression becomes accurate in the limit of a large system. The expression of Eq. (19) goes under the name of "single-point Berry phase" (almost an oxymoron!), and was first proposed by the present author in a volume of lecture notes [11]. Since then, its three-dimensional generalization has been used in a series of DFT calculations for noncrystalline systems [22], and has been scrutinized in some detail in Ref. [21].
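The following toy script (ours, not from the paper) illustrates the single-point Berry phase numerically for the simpler spinless case, i.e. ⟨X⟩ = (L/2π) Im ln det S with one electron per orbital: for well-separated localized orbitals the result reduces, modulo L, to the sum of the orbital centers, as one would expect for point-like charges.

```python
# Toy check of the single-point Berry phase for N spinless, localized orbitals.
import numpy as np

L, n_grid = 20.0, 4000
x = np.linspace(0.0, L, n_grid, endpoint=False)
dx = x[1] - x[0]

centers = np.array([3.0, 8.5, 14.2])                 # well-separated orbital centers
orbitals = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / 0.5) ** 2)
orbitals /= np.sqrt((orbitals ** 2).sum(axis=1, keepdims=True) * dx)   # normalize (real orbitals)

phase = np.exp(1j * 2 * np.pi * x / L)
S = (orbitals * phase[None, :]) @ orbitals.T * dx    # S_ij = <phi_i| e^{i 2pi x/L} |phi_j>
X = (L / (2 * np.pi)) * np.angle(np.linalg.det(S))   # spinless analogue of Eq. (19)

print(X % L, centers.sum() % L)                      # both ~5.7: sum of the centers, modulo L
```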
The case of a crystalline system of independent electrons is the one which historically has been solved first [5,6,8], though along a very different logical path [7,13,14,15] than adopted here. I am going to outline how the present formalism leads to the earlier results.
For the sake of simplifying notations, I am going to consider the case of an insulator having only one completely occupied band. The single-particle orbitals may be chosen in the Bloch form: indeed, the canonical ones must have the Bloch form. The many-body wavefunction, Eq. (16), becomes in the crystalline case

|Ψ⟩ = |ψ_{q₁} ψ̄_{q₁} ψ_{q₂} ψ̄_{q₂} ⋯ ψ_{q_{N/2}} ψ̄_{q_{N/2}}|.   (20)

The lattice constant is a = 2L/N, and the Bloch vectors entering Eq. (20) are equally spaced in the reciprocal cell (0, 2π/a]:

q_s = (4π/Na) s = (2π/L) s,   s = 1, 2, ..., N/2.   (21)

Owing to the orthogonality properties of the Bloch functions, the overlap matrix elements in Eq. (18) vanish except when q_{s′} = q_s − 2π/L, that is, s′ = s − 1: therefore the determinant in Eq. (19) factors as

det S = Π_{s=1}^{N/2} ⟨ψ_{q_s}| e^{i(2π/L)x} |ψ_{q_{s−1}}⟩,   (22)

where ψ_{q₀}(x) ≡ ψ_{q_{N/2}}(x) is implicitly understood (so-called periodic gauge). The expression in Eq. (22) is precisely the one first proposed by King-Smith and Vanderbilt [8] as a discretized form for the Berry phase: in fact, the multi-band three-dimensional generalization of Eq. (22) is the standard formula [7,13] implemented in first-principle calculations [16] of macroscopic polarization in crystalline dielectrics, either within DFT or HF.
Critical rethinking of DFT
The modern viewpoint about macroscopic polarization has even spawned a critical rethinking of density-functional theory in extended systems. The debate started in 1995 with a paper by Gonze, Ghosez, and Godby [29], and continues these days [30]. The treatment of polarization provided in the present work allows discussing the issue in a very simple way.
The celebrated Hohenberg-Kohn (HK) theorem, upon which DFT is founded [31], states that there exists a universal functional F [n], which determines the exact ground-state energy and all other ground-state properties of the system. The main hypotheses are that the magnetic field is vanishing, the ground state is non degenerate, and-most important to the present purposes-the system is finite, with a square-integrable ground wavefunction. This is precisely the key point when dealing with an extended system. Ideally, it is possible to refer to a macroscopic but finite system: polarization is then by definition the dipole divided by the volume, and Eq. (1) safely applies. As a consequence, the polarization of the real interacting system is identical to the one of the fictitious noninteracting Kohn-Sham (KS) system. Unfortunately, such polarization depends on the charge distribution both in the bulk and at the surface: according to Ref. [29], this fact implies a possible "ultranonlocality" in the KS potential: two systems having the same density in the bulk region may have qualitatively different KS potentials in the same region, and different polarizations as well. The issue can be formulated in a more transparent way by recasting it in the language of the present work.
As discussed in the Introduction, condensed matter theory invariably works in a different way: one adopts BvK since the very beginning, and the system has no surface by construction. The original HK theorem was formulated for the case where the Schrödinger's equation is solved imposing squareintegrable boundary conditions, but the same theorem holds within BvK, with an identical proof, for a finite N-electron system. We can then define, even within BvK, the fictitious KS system of noninteracting electrons, having the same density as the interacting one: if the system is crystalline, the KS orbitals have the Bloch form. Now the question becomes: do the interacting system and the corresponding KS noninteracting one have the same polarization? The answer is actually "no": in fact, within BvK, polarization is not a function of the density, not even of the one-body density matrix, as stressed above. Numerical evidence of the fact that the two polarizations are not equal has been given [30].
Other important implications concern the occurrence of macroscopic electric fields within DFT: we refer to the original literature [29,30] about this issue, while here we limit ourselves to just remarking an important point. The potential (both one-body and two-body) within the Schrödinger's equation must be BvK periodic, otherwise the Hamiltonian is an ill-defined operator in the Hilbert space. The periodicity of the potential is tantamount to enforcing a vanishing macroscopic electric field: therefore some ad-hoc strategies must be devised in order to cope with nonzero electric fields, as is indeed done in Refs. [29,30].
Finding the Evidence: Localization-aware Answer Prediction for Text Visual Question Answering
Image text carries essential information to understand the scene and perform reasoning. Text-based visual question answering (text VQA) task focuses on visual questions that require reading text in images. Existing text VQA systems generate an answer by selecting from optical character recognition (OCR) texts or a fixed vocabulary. Positional information of text is underused and there is a lack of evidence for the generated answer. As such, this paper proposes a localization-aware answer prediction network (LaAP-Net) to address this challenge. Our LaAP-Net not only generates the answer to the question but also predicts a bounding box as evidence of the generated answer. Moreover, a context-enriched OCR representation (COR) for multimodal fusion is proposed to facilitate the localization task. Our proposed LaAP-Net outperforms existing approaches on three benchmark datasets for the text VQA task by a noticeable margin.
Introduction
Visual Question Answering (VQA) has attracted much interest from the communities and witnessed tremendous progress. However, the lack of an ability to generate answers based on text in the image limits its applications. Recently, many new datasets (Biten et al., 2019a) and new methods (Hu et al., 2020) have been proposed to tackle this challenge, which is referred to as text VQA.
The earliest method for text VQA is LoRRA, which provides an optical character recognition (OCR) module for the VQA input and proposes a dynamic copy mechanism to select the answer from both a fixed vocabulary and OCR words. The following work, M4C (Hu et al., 2020), inspired by LoRRA, uses rich representations of OCR as input and utilizes a dynamic pointer network to deal with out-of-vocabulary answers, leading to state-of-the-art performance. However, M4C simply concatenates all modalities as transformer input and does not consider the high-level interaction among the modalities of text VQA. Moreover, it is unable to provide evidence for the answer since the text is not localized in the image. Another recent work (Wang et al., 2020) proposes a new dataset for evidence-based text VQA, which suggests an Intersection over Union (IoU) based evaluation metric to measure the evidence. Our work follows the spirit of evidence-based text VQA. More specifically, we generate the answer text bounding box during the answer prediction process as supplementary evidence for our answer. We propose a localization-aware answer prediction module (LaAP) that integrates the predicted bounding box with our semantic representation for the final answer. Besides, we propose a multimodal fusion module with a context-enriched OCR representation, which uses a novel position-guided attention to integrate context object features into the OCR representation.
The contributions of this paper are summarized as follows: 1) We propose a LaAP module, which predicts the OCR position and integrates it with the generated answer embedding for final answer prediction. 2) We propose a context-enriched OCR representation (COR), which enhances the OCR modality and simplifies the multimodal input. 3) We show that the predicted bounding box can provide evidence for analyzing network behavior in addition to improving the performance. 4) Our proposed LaAP-Net outperforms state-of-the-art approaches on three benchmark text VQA datasets, TextVQA, ST-VQA (Biten et al., 2019b), and OCR-VQA (Mishra et al., 2019), by a noticeable margin.
Text Visual Question Answering
Text VQA has attracted much attention from the communities. The predominant method is LoRRA, which takes image features, OCR features and questions to generate the answer. LoRRA mimics the human answering process by providing an image-looking module, a text-reading module and an answer-reasoning module. The generated answer can be selected from a fixed answer vocabulary or from one of the OCR tokens by the copy module. The copy module is further improved by M4C (Hu et al., 2020) using a dynamic pointer network. M4C also proposes a transformer-based network with three multimodal inputs (question, image object features and OCR features). We share the same spirit as M4C but split the network into a clear encoder-decoder structure. We further propose a context-enriched OCR representation to extract OCR-related image features.
Evidence-based VQA and Multitask Learning
Evidence-based VQA has been proposed in recent work (Wang et al., 2020), which suggests using intersection over union (IoU) to indicate the evidence. Many existing works (Selvaraju et al., 2017; Goyal et al., 2016; Gao et al., 2019) compute attention scores and build spatial maps on the image to highlight the regions on which the model focuses. These spatial maps serve as evidence and visual explanations of a VQA architecture. Our method extends this further by designing a location predictor that generates a bounding box on the image to explain the generated answer. The bounding box indicates that a correct answer is generated based on underlying reasoning instead of exploiting the statistics of the dataset. As such, the bounding box becomes evidence for the VQA answer. To achieve this, we design a multitask learning process, which not only generates the answer based on the image and question but also provides the bounding box for the answer. The proposed method improves the interpretability of VQA results and leads to better performance.
LaAP Network Architecture
To better utilize the positional information of image texts and to encourage the network to better exploit visual features, we propose a localization-aware answer prediction network (LaAP-Net). Our LaAP-Net is built from a multimodal transformer encoder, a transformer decoder and a localization-aware prediction network, as shown in Figure 1. The transformer encoder takes the question embedding and the OCR embedding as input. The question embedding is generated by passing the question through a pretrained BERT-based model, whereas the OCR embedding is generated by our proposed context-enriched OCR representation module. As highlighted in dark yellow in Figure 1, the decoding process starts with the <begin> signal. For each decoded output, we first generate a bounding box. This bounding box is then embedded and added to the current answer decoder output, which is referred to as the localization-aware answer representation. Finally, it is fed to the vocabulary score module and the OCR score module. The scores are concatenated and the element with the maximum score is selected as the final answer. In the following sections, we present the three components of LaAP-Net: the context-enriched OCR representation, the localization-aware predictor and the transformer with simplified decoder.
Context-enriched OCR Representation
Existing work (Hu et al., 2020) builds a common embedding space for all modalities. However, this common embedding space has difficulty utilizing the image object features. We observe this by training the M4C (Hu et al., 2020) network without the image object modality: the accuracy is almost unaffected. To better exploit the image object modality, we propose the context-enriched OCR representation (COR) module (shown in Figure 2). Ideally, the answer for text VQA should be found among the OCR tokens; we therefore integrate the geometric context objects of an OCR token into its representation to improve its discriminative power. We embed the given question into a set of word embeddings $x^{ques}_k$ (where $k = 1, \ldots, K$ and $K$ is the number of words) through a pretrained BERT language model (Devlin et al., 2019). All embeddings are then linearly projected to a $d$-dimensional space.

Figure 1: An overview of the LaAP model. We perform context-enriched OCR representation to extract object features. Then question words and enriched OCR tokens are input to the transformer encoder and the transformer decoder. Based on the transformer decoder outputs, we first predict the answer localization, and then integrate this localization into the OCR embedding. The decoder output is also equipped with the OCR position embedding. The OCR scores and vocabulary scores are calculated accordingly to find the answer from an OCR token or a word from the fixed answer vocabulary.
The detailed computation process for COR is as follows. First, the position-guided attention score vector $att_m$ between the $m$-th OCR token and the image objects is computed as a softmax over the dot products of the projected queries and keys, where $W_Q$ and $W_K$ are the query projection matrix and key projection matrix respectively. Then the $m$-th image-attended OCR representation is calculated as the weighted sum of the $N$ object feature vectors, $\tilde{x}^{ocr}_m = \sum_{n=1}^{N} att_{m,n}\, x^{obj}_n$. Note that we omit the multi-head attention mechanism (Vaswani et al., 2017) for simplicity. Finally, each OCR token is represented by aggregating its OCR feature embedding, the image-attended OCR representation and a position embedding, yielding $\hat{x}^{ocr}_m$, where $W_{ocr}$ is a matrix that linearly projects the bounding box coordinate vector to $d$ dimensions. With the proposed attention, the image object modality is merged into the OCR modality. We then feed $\hat{x}^{ocr}_1, \ldots, \hat{x}^{ocr}_M$ and $x^{ques}_1, \ldots, x^{ques}_K$ into the transformer encoder as input. The strengthened OCR representation $\hat{x}^{ocr}_m$ empowers the network to better learn the semantic correlation between OCR tokens and the question. Meanwhile, it simplifies the multimodal feature input, which improves the localization-aware answer prediction.
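A minimal sketch of the COR module is given below, assuming a PyTorch setting. The module structure, tensor shapes, and the exact way the bounding-box embeddings enter the attention are illustrative assumptions rather than the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextEnrichedOCR(nn.Module):
    def __init__(self, d_ocr, d_obj, d_model=768):
        super().__init__()
        self.proj_ocr = nn.Linear(d_ocr, d_model)    # OCR features -> d
        self.proj_obj = nn.Linear(d_obj, d_model)    # object features -> d
        self.pos_emb = nn.Linear(4, d_model)         # bbox (x1, y1, x2, y2) -> d
        self.W_Q = nn.Linear(d_model, d_model)       # query projection
        self.W_K = nn.Linear(d_model, d_model)       # key projection

    def forward(self, ocr_feat, ocr_bbox, obj_feat, obj_bbox):
        # ocr_feat: (B, M, d_ocr), ocr_bbox: (B, M, 4)
        # obj_feat: (B, N, d_obj), obj_bbox: (B, N, 4)
        x_ocr = self.proj_ocr(ocr_feat)
        x_obj = self.proj_obj(obj_feat)
        # position-guided attention of each OCR token over the N image objects
        q = self.W_Q(x_ocr + self.pos_emb(ocr_bbox))
        k = self.W_K(x_obj + self.pos_emb(obj_bbox))
        att = F.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
        ctx = att @ x_obj                             # image-attended OCR representation
        # aggregate OCR embedding, attended object context and position embedding
        return x_ocr + ctx + self.pos_emb(ocr_bbox)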
Localization-aware Predictor
To exploit the positional information of image features and texts, we design a localization-aware predictor to perform the bounding box prediction. The bounding box is embedded and added to the decoder output to generate the localization-aware answer representation. More specifically, given the answer embedding $y^{dec}$ output by the decoder, we calculate the localization-aware answer representation $z^{ans}$ by fusing $y^{dec}$ with the gated bounding box projection as $z^{ans} = y^{dec} + g^{loc} \circ (W^{loc} b^{pred} + bias^{loc})$, where $W^{loc}$ and $bias^{loc}$ are the weights of a linear layer that projects the location bounding box to the same dimension as $y^{dec}$ and $\circ$ represents element-wise multiplication. $g^{loc}$ is the localization gate. Note that our network updates the gate weights automatically through training, so that they implicitly reveal the statistical importance of the localization information. Similarly, we calculate the high-level localization-aware representation $z^{ocr}_m$ (where $m = 1, \ldots, M$) of each OCR token as $z^{ocr}_m = y^{ocr}_m + g^{loc} \circ (W^{loc} b^{ocr}_m + bias^{loc})$, where $y^{ocr}_m$ denotes the $m$-th OCR encoding from the last encoder layer and $b^{ocr}_m$ is the corresponding bounding box coordinates. $b^{ocr}_m$ goes through the same linear projection layer and localization gate as $b^{pred}$ so that they are projected to the same high-dimensional space. Then, similar to (Hu et al., 2020), we obtain the similarity score $s^{ocr}_m$ between each OCR representation and the answer representation as $s^{ocr}_m = (W^{ans} z^{ans} + bias^{ans})(W^{ocr} z^{ocr}_m + bias^{ocr})$, $m = 1, \ldots, M$, where $W^{ans}$, $bias^{ans}$, $W^{ocr}$ and $bias^{ocr}$ are parameters of linear projection layers. The localization-aware answer representation $z^{ans}$ is also fed into a classifier to output $V$ scores $s^{voc}_v$ ($v = 1, \ldots, V$), where $V$ is the vocabulary size. The final prediction is selected as the element with the maximum score over the concatenation of $s^{voc}$ and $s^{ocr}$.
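A minimal sketch of the localization-aware predictor follows, again in a PyTorch style; the parameterization of the gate g_loc and the layer names are illustrative assumptions.

import torch
import torch.nn as nn

class LocalizationAwarePredictor(nn.Module):
    def __init__(self, d_model, vocab_size):
        super().__init__()
        self.bbox_head = nn.Sequential(                 # MLP predicting b_pred from y_dec
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 4))
        self.W_loc = nn.Linear(4, d_model)              # shared bbox projection (W_loc, bias_loc)
        self.g_loc = nn.Parameter(torch.ones(d_model))  # learned localization gate
        self.ans_proj = nn.Linear(d_model, d_model)     # (W_ans, bias_ans)
        self.ocr_proj = nn.Linear(d_model, d_model)     # (W_ocr, bias_ocr)
        self.vocab_cls = nn.Linear(d_model, vocab_size) # fixed-vocabulary classifier

    def forward(self, y_dec, y_ocr, b_ocr):
        # y_dec: (B, d) decoder output, y_ocr: (B, M, d) OCR encodings, b_ocr: (B, M, 4)
        b_pred = self.bbox_head(y_dec)                                  # predicted answer box
        z_ans = y_dec + self.g_loc * self.W_loc(b_pred)                 # localization-aware answer
        z_ocr = y_ocr + self.g_loc * self.W_loc(b_ocr)                  # localization-aware OCR
        s_ocr = (self.ocr_proj(z_ocr) * self.ans_proj(z_ans).unsqueeze(1)).sum(-1)  # (B, M)
        s_voc = self.vocab_cls(z_ans)                                   # (B, V)
        scores = torch.cat([s_voc, s_ocr], dim=-1)      # argmax over this gives the answer
        return scores, b_pred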
Figure 3: An overview of the transformer with simplified decoder (TSD). The TSD output is used to generate the bounding box, which is then used for answer prediction.

Note that the predicted bounding box is not explicitly used in generating the answer. However, localization prediction is a vision task, so it can enforce the network to exploit visual features. As a result, it serves as a good complement to the classical vocabulary classification task, which mainly focuses on linguistic semantics. The localization-aware predictor strengthens the learned answer embedding to attend to the correct OCR token, which in turn facilitates the classifier in finding the correct word. Moreover, this localization information improves the performance on position-related questions as shown in Figures 4(a) and 4(c), which will be further discussed in Section 4.2.
Loss Design to Incorporate the Evidence Scores
We use the IoU scores as the evidence for the generated answer. Therefore, we propose a multi-task loss, which encourages the answer embedding to learn both the semantics and the localization information provided by the OCR tokens. The proposed multi-task loss consists of three individual loss functions: the localization loss $L_l$, the semantic loss $L_s$ and the fusion loss $L_f$.
The answer embedding output from the decoder is fed into a multilayer perceptron (MLP) to directly predict the bounding box location $b^{pred}$ of the answer OCR token. Inspired by (Carion et al., 2020), the localization loss $L_l$ combines an IoU-based term and an $L_1$ term between the predicted and ground-truth bounding boxes, multiplied by an indicator $I$. Here $b^{gt}$ denotes the ground-truth bounding box, which is obtained by matching the OCR token text to the ground-truth answer text; IoU and $L_1$ calculate the intersection over union and the L1 norm respectively between the predicted and ground-truth bounding boxes; and $I = 1$ if the answer word matches one of the recognized OCR texts and 0 otherwise. To accurately answer a question, OCR localization and semantic information are both critical. Thus, we propose a fusion loss $L_f$ to couple the localization prediction and the semantic representation of the answer. The two kinds of information complement each other in the process of decision making. Formally, given the target scores $t^{ocr}_m \in \{0, 1\}$ ($m = 1, \ldots, M$), we formulate our fusion loss $L_f$ as the cross entropy between the predicted OCR scores $s^{ocr}_m$ and the targets $t^{ocr}_m$. In order to exploit the linguistic meaning of the answer embedding, we collect a fixed vocabulary of frequently used words. We feed the localization-aware answer representation $z^{ans}$ into a linear classifier to classify the answer embedding of each decoding step into one of the vocabulary words. Our semantic loss $L_s$ is computed as the cross entropy between the classification score vector and the one-hot encoding of the ground-truth word. The overall multi-task loss of the network is calculated as $L = L_f + \lambda_l L_l + \lambda_s L_s$, where $\lambda_l$ and $\lambda_s$ are regulation coefficients that determine the importance of the localization loss and the semantic loss. The values of $\lambda_l$ and $\lambda_s$ are selected experimentally.
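A hedged sketch of the multi-task loss is given below (PyTorch style). The exact IoU-based term, the use of binary cross entropy for the fusion loss, and the default coefficients are assumptions chosen for illustration.

import torch
import torch.nn.functional as F

def box_iou(b1, b2):
    # IoU of two (x1, y1, x2, y2) boxes given as 1-D tensors
    x1, y1 = torch.max(b1[0], b2[0]), torch.max(b1[1], b2[1])
    x2, y2 = torch.min(b1[2], b2[2]), torch.min(b1[3], b2[3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    area2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (area1 + area2 - inter + 1e-6)

def laap_loss(b_pred, b_gt, has_box, s_ocr, t_ocr, s_voc, voc_target,
              lam_l=1.0, lam_s=1.0):
    # localization loss: IoU-based term plus L1 term, masked by the indicator has_box
    loss_l = has_box * ((1.0 - box_iou(b_pred, b_gt)) + F.l1_loss(b_pred, b_gt))
    # fusion loss: cross entropy between the OCR scores and the 0/1 target scores
    loss_f = F.binary_cross_entropy_with_logits(s_ocr, t_ocr)
    # semantic loss: cross entropy over the fixed answer vocabulary (voc_target is a class index)
    loss_s = F.cross_entropy(s_voc.unsqueeze(0), voc_target.view(1))
    return loss_f + lam_l * loss_l + lam_s * loss_s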
Transformer with Simplified Decoder
Existing works (Hu et al., 2020; Gao et al., 2020) use a BERT-like transformer architecture in which each decoder layer attends to the encoder layer at the same depth. However, a deeper encoder layer extracts a broader view of the input than a shallow layer (Clark et al., 2019). We therefore adopt the standard transformer encoder-decoder structure as shown in Figure 3. Here, we use the transformer with simplified decoder (TSD), obtained by removing the decoder self-attention to save computational cost. We experimentally find that using only the encoder-decoder attention maintains the same performance. The multimodal inputs are encoded by $L$ stacked standard transformer encoder layers. The embedding of the last encoder layer is fed into each of the $L$ decoder layers. The answer words are generated in an auto-regressive manner, i.e., for each decoding step, we take the predicted answer embedding from the previous step as the decoder input and obtain the answer embedding as the decoder output. The decoding process is performed by the proposed localization-aware prediction module as shown in Figure 1 and discussed in Section 3.3.
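The simplified decoder layer can be sketched as follows (PyTorch style); keeping only the encoder-decoder cross-attention and the feed-forward block reflects the description above, while the layer sizes and normalization placement are illustrative assumptions.

import torch.nn as nn

class TSDDecoderLayer(nn.Module):
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, dec_in, enc_out):
        # dec_in: previously decoded answer embeddings; enc_out: last encoder layer output
        attn, _ = self.cross_attn(dec_in, enc_out, enc_out)   # no decoder self-attention
        x = self.norm1(dec_in + attn)
        return self.norm2(x + self.ffn(x))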
Experiments
We evaluate our LaAP-Net on three challenging benchmark datasets: TextVQA, ST-VQA (Biten et al., 2019a) and OCR-VQA (Mishra et al., 2019). We show that the proposed LaAP-Net outperforms state-of-the-art works on these datasets. We further perform an ablation study of the proposed context-enriched OCR representation (COR) and the localization-aware answer prediction (LaAP) on the TextVQA dataset.
Implementation Details
For a fair comparison with the state-of-the-art methods, we follow the same multimodal input as M4C (Hu et al., 2020). More specifically, we use a pretrained BERT (Devlin et al., 2019) model for question encoding, the Rosetta-en OCR system (Borisyuk et al., 2018) for OCR extraction and a Faster R-CNN (Ren et al., 2015) based image feature extractor. The OCR tokens are represented by a concatenation of the appearance features from Faster R-CNN, FastText embeddings (Bojanowski et al., 2017), PHOC features (Almazán et al., 2014) and bounding box (bbox) embeddings. We set the common dimensionality d = 768 and the number of transformer layers L = 4. More details of the training configuration are summarized in the supplementary material.
Evaluation on TextVQA Dataset
The TextVQA dataset contains 28408 images with 34602 training, 5000 validation and 5734 testing question-answer pairs. We compare our results on TextVQA with the most recent state-of-the-art method SMA (Gao et al., 2020) and other existing works such as LoRRA, MSFT VTI (MSFT-VTI, 2019) and M4C (Hu et al., 2020). The proposed LaAP-Net achieves a 40.68% validation accuracy and a 40.54% test accuracy, which improves the SOTA by 1.10% (absolute) and 0.25% (absolute), respectively.
Table: Methods, Val Acc., Test Acc.

Note that we only compare with the SMA results using the same set of features to show the advantage of the network structure itself. We also train our network with additional data from the ST-VQA dataset following M4C and boost the test accuracy by 0.95% (absolute).
Ablation Study on Network Components. The context-enriched OCR representation (COR) and the localization-aware predictor (LaP) are the two key features of our network. We investigate the importance of both components by progressively adding them to our transformer with simplified decoder (TSD) backbone. First, we remove COR and LaP from our network and feed the image object features directly into the encoder as in M4C. The answer prediction part also strictly follows M4C. This configuration is denoted as TSD in Table 1. Then we add COR to TSD, which is denoted as TSD+COR. The third ablation adds only LaP to TSD (TSD+LaP). Each component demonstrates a contribution to the performance improvement, as shown in Table 1. To further prove the effectiveness of COR and LaP, we add them to our baseline network M4C. COR and LaP individually lead to accuracy improvements of 0.38 and 0.95 respectively, and together they boost the accuracy by 1.33. Note that our network without COR, i.e. TSD+LaP, suffers from a performance regression. The rationale is that the flat multimodal feature used in place of COR contains both objects and OCR tokens, and the objects' position embeddings introduce considerable noise for the localization task. COR absorbs context object features into the OCR representation and improves its discriminative power. Meanwhile, the encoder multimodal input is simplified.
We restrict the answer generation source to study the effect of our method on word semantic learning and OCR selection. As shown in Table 2, our model significantly improves the accuracy when we only predict the answer from the vocabulary. This implies that our localization prediction module enhances the network's capacity for learning the semantics of OCR tokens, which coincides with our qualitative analysis.
Evidence-based Qualitative Analysis on the TextVQA Dataset. One challenge for existing VQA systems is that, even when the generated answer is correct, it is hard to tell whether the answer is based on underlying reasoning or on exploiting the statistics of the dataset. As such, Intersection-over-Union (IoU) (Wang et al., 2020) is recommended to measure the evidence for the generated answer. The IoU results of our bounding boxes are shown in Figure 4. For example, in Figure 4(b), the two IoU results (0.84, 0.68) explain the reason for the answer "startling stories"; a higher IoU indicates better evidence. Furthermore, these IoU scores show the answer is generated by exploiting the image features instead of the statistics of the dataset, i.e. a coincidental correlation in the data. We also observe that most text VQA errors come from inaccurate OCR results; e.g., in Figure 4(d), the OCR token "intel)" is recognized wrongly, which results in the false answer of M4C. Due to the localization prediction, our method generates the correct answer even in such a case (Figure 4(d)). Since localization tends to use visual features of OCR tokens rather than their text embeddings, it can better determine the attended OCR token in spite of the text recognition result. With the predicted OCR bounding box, the answer generation problem is converted into a conditioned classification process P(text | predicted box) to recognize the text from the vocabulary. More examples supporting our analysis can be found in Figure 4.
Our localization predictor also shows the capability of understanding position and direction, as shown in Figures 4(a) and 4(c). Our network learns to understand position during training because the ground-truth position is provided directly to guide the localization prediction, whereas in previous works positional information is passed through several layers of the encoder and decoder without explicit guidance.
Evaluation on ST-VQA Dataset.
We evaluate the proposed model on the open vocabulary task of ST-VQA (Biten et al., 2019b), which contains 18921 training-validation images and 2971 test images. Following previous works (Hu et al., 2020; Gao et al., 2020), we split the images into a training and a validation set of size 17028 and 1893, respectively.
We report both accuracy and the ANLS score (the default metric of ST-VQA) in Table 4. Our LaAP-Net surpasses the SOTA method by a large margin on both metrics. Note that SMA improves its baseline method M4C by only 0.004 in testing ANLS score, while we boost the result by 0.019. Evidence-based Qualitative Analysis on ST-VQA. Figure 5 shows IoU scores, our predicted bounding boxes and answers. In these examples, our proposed localization-aware answer predictor not only generates the correct answer, but also predicts the exact bounding box (drawn in blue) of the corresponding OCR token. A similar conclusion can be drawn from these results as discussed for the TextVQA dataset. In Figure 5(a), our network correctly attends to the middle sign designated by the question, where our reference method M4C fails. In Figure 5(c), our network manages to predict the word 'river' even though it is not recognized by the OCR system. More qualitative examples can be found in the supplementary material.
Evaluation on OCR-VQA Dataset
Unlike TextVQA and ST-VQA, which contain "in the wild" images, the OCR-VQA dataset consists of 207572 images of book covers only. Thus, the image object modality is less important in OCR-VQA. Moreover, since the questions are about the title or author of a book, it is relatively difficult to determine the answer location. Even so, our model still achieves the state-of-the-art result of 64.1% accuracy, as shown in Table 5.
Failure Analysis
Two failure cases are shown in Figure 6. As discussed in Section 4.2, our model is sensitive to positional instructions in a question. However, in Figure 6(a), the question asks about a relative position, which our network has not learned to handle. In Figure 6(b), the position "right" is indicated by an arrow, but our network locates the road sign on the right of the image. In this case, question answering requires reasoning in addition to text reading, which we will investigate in our future work.
Conclusion
This paper proposes a localization-aware answer prediction network (LaAP-Net) for text VQA. Our LaAP-Net not only generates the answer to the question, but also provides a bounding box as evidence for the generated answer. Moreover, a context-enriched OCR representation (COR) is proposed to integrate object-related features. The proposed LaAP-Net outperforms existing approaches on three benchmark datasets for the text VQA task by a noticeable margin, with new state-of-the-art performance: TextVQA 41.41%, ST-VQA 0.485 (ANLS) and OCR-VQA 64.1%.

Figure 1: Qualitative examples from the TextVQA dataset. We display predicted answers (yellow for words generated from OCR and blue for vocabulary) of our LaAP-Net and the ground truth (GT). Our predicted bounding box (blue box) is also depicted in the images to compare with the GT box (red box). Note that some images do not contain a GT bounding box while some images contain more than one GT bounding box.

Figure 2: Qualitative examples from the ST-VQA dataset. We display predicted answers (yellow for words generated from OCR and blue for vocabulary) of our LaAP-Net and the ground truth (GT). Our predicted bounding box (blue box) is also depicted in the images to compare with the GT box (red box).
Drawing Trees with Perfect Angular Resolution and Polynomial Area
We study methods for drawing trees with perfect angular resolution, i.e., with the angles at each vertex v equal to 2π/d(v). We show: 1. Any unordered tree has a crossing-free straight-line drawing with perfect angular resolution and polynomial area. 2. There are ordered trees that require exponential area for any crossing-free straight-line drawing having perfect angular resolution. 3. Any ordered tree has a crossing-free Lombardi-style drawing (where each edge is represented by a circular arc) with perfect angular resolution and polynomial area. Thus, our results explore what is achievable with straight-line drawings and what more is achievable with Lombardi-style drawings, with respect to drawings of trees with perfect angular resolution.
Introduction
Most methods for visualizing trees aim to produce drawings that meet as many of the following aesthetic constraints as possible:
1. straight-line edges,
2. crossing-free edges,
3. polynomial area, and
4. perfect angular resolution around each vertex.
These constraints are all well-motivated, in that we desire edges that are easy to follow, do not confuse viewers with edge crossings, are drawable using limited real estate, and avoid congested incidences at vertices. Nevertheless, previous tree drawing algorithms have made various compromises with respect to this set of constraints; we are not aware of any previous tree-drawing algorithm that can achieve all these goals simultaneously. Our goal in this paper is to show what is actually possible with respect to this set of constraints and to expand it further with a richer notion of edges that are easy to follow. In particular, we desire tree-drawing algorithms that satisfy all of these constraints simultaneously. If this is provably not possible, we desire an augmentation that avoids compromise and instead meets the spirit of all of these goals in a new way, which, in the case of this paper, is inspired by the work of artist Mark Lombardi [17].
Problem Statement. The art of Mark Lombardi involves drawings of social networks, typically using circular arcs and good angular resolution. Figure 1 shows such a work of Lombardi that is crossing-free and almost a tree. Note that it makes use of both circular arcs and straight-line edges. Inspired by this work, let us define a set of problems that explore what is achievable for drawings of trees with respect to the constraints listed above but that, like Lombardi's drawings, also allow curved as well as straight edges.
Given a graph G = (V, E), let d(u) denote the degree of a vertex u, i.e., the number of edges incident to u in G. For any drawing of G, the angular resolution at a vertex u is the minimum angle between two edges incident to u. A vertex has perfect angular resolution if its minimum angle is 2π/d(u), and a drawing has perfect angular resolution if every vertex does. Drawings with perfect angular resolution cannot be placed on an integer grid unless the degrees of the vertices are constrained, so we do not require vertices to have integer coordinates. We define the area of a drawing to be the ratio of the area of a smallest enclosing circle around the drawing to the square of the distance between its two closest vertices.
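As a concrete illustration of these definitions, the following small sketch (Python; the function names and the representation of edge directions as tangent angles at the vertex are illustrative assumptions) computes the angular resolution at a single vertex and checks whether it is perfect.

import math

def angular_resolution(edge_angles):
    # edge_angles: tangent directions (radians) of the edges incident to a vertex
    a = sorted(t % (2 * math.pi) for t in edge_angles)
    gaps = [a[i + 1] - a[i] for i in range(len(a) - 1)]
    gaps.append(2 * math.pi - a[-1] + a[0])      # wrap-around gap
    return min(gaps)

def has_perfect_angular_resolution(edge_angles, tol=1e-9):
    d = len(edge_angles)                          # degree d(v)
    return abs(angular_resolution(edge_angles) - 2 * math.pi / d) <= tol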
Suppose that our input graph G is a rooted tree T. We say that T is ordered if an ordering of the edges incident upon each vertex in T is specified; otherwise, T is unordered. If all the edges of a drawing of T are straight-line segments, then the drawing of T is a straight-line drawing. We define a Lombardi drawing of a graph G as a drawing of G with perfect angular resolution such that each edge is drawn as a circular arc. When measuring the angle formed by two circular arcs incident to a vertex v, we use the angle formed by the tangents of the two arcs at v. Circular arcs are strictly more general than straight-line segments, since straight-line segments can be viewed as circular arcs with infinite radius. Figure 2 shows an example of a straight-line drawing and a Lombardi drawing for the same tree. Thus, we can define our problems as follows:
1. Is it always possible to produce a straight-line drawing of an unordered tree with perfect angular resolution and polynomial area?
2. Is it always possible to produce a straight-line drawing of an ordered tree with perfect angular resolution and polynomial area?
3. Is it always possible to produce a Lombardi drawing of an ordered tree with perfect angular resolution and polynomial area?
Fig. 2: Two drawings of a tree T with perfect angular resolution and polynomial area as produced by our algorithms: (a) a straight-line drawing for an unordered tree; (b) a Lombardi drawing for an ordered tree. Bold edges are heavy edges, gray disks are heavy nodes, and white disks are light children. The root of T is in the center of the leftmost disk.
Related Work. Tree drawings have interested researchers for many decades: e.g., hierarchical drawings of binary trees date to the 1970's [23]. Many improvements have been proposed since this early work, using space efficiently and generalizing to non-binary trees [2,5,12-14,20-22]. These drawings do not achieve all the constraints mentioned above, however, especially the constraint on angular resolution. Alternatively, several methods strive to optimize the angular resolution of trees. Radial drawings of trees place nodes at the same distance from the root on a circle around the root node [10]. Circular tree drawings are made of recursive radial-type layouts [19]. Bubble drawings [15] draw trees recursively with each subtree contained within a circle disjoint from its siblings but within the circle of its parent. Balloon drawings [18] take a similar approach and heuristically attempt to optimize space utilization and the ratio between the longest and shortest edges in the tree. Convex drawings [4] partition the plane into unbounded convex polygons with their boundaries formed by tree edges. Although these methods provide several benefits, none of them guarantees that all of the aforementioned constraints are satisfied.
The notion of drawing graphs with edges that are circular arcs or other nonlinear curves is certainly not new to graph drawing. For instance, Cheng et al. [6] used circular arcs to draw planar graphs in an O(n) × O(n) grid while maintaining bounded (but not perfect) angular resolution. Similarly, Dickerson et al. [7] use circular-arc polylines to produce planar confluent drawings of non-planar graphs, Duncan et al. [8] draw graphs with fat edges that include circular arcs, and Cappos et al. [3] study simultaneous embeddings of planar graphs using circular arcs. Finkel and Tamassia [11] use a force-directed method for producing curvilinear drawings, and Brandes and Wagner [1] use energy minimization methods to place Bézier splines that represent express connections in a train network. In a separate paper [9] we study Lombardi drawings for classes of graphs other than trees.
Our Contributions. In this paper we present the first algorithm for producing straight-line, crossing-free drawings of unordered trees that ensures perfect angular resolution and polynomial area. In addition we show, in Section 3, that if the tree is ordered (i.e., given with a fixed combinatorial embedding) then it is not always possible to maintain perfect angular resolution and polynomial drawing area when using straight lines for edges. Nevertheless, in Section 4, we show that crossing-free polynomial-area Lombardi drawings of ordered trees are possible. That is, we show that the answers to the questions posed above are "yes," "no," and "yes," respectively.
Straight-line drawings for unordered trees
Let T be an unordered tree with n nodes. We wish to construct a straight-line drawing of T with perfect angular resolution and polynomial area.
The main idea of our algorithm is, similarly to the common bubble and balloon tree constructions [15,18], to draw the children of each node of the given tree in a disk centered at that node; however, our algorithm differs in several key respects in order to achieve the desired area bounds and perfect angular resolution.
Heavy Path Decomposition
The initial step before drawing the tree T is to create a heavy path decomposition [16] of T . To make the analysis simpler, we assume T is rooted at some arbitrary node r. We let T u represent the subtree of T rooted at u, and |T u | the number of nodes in T u . A node c is the heavy child of u if |T c | ≥ |T v | for all children v of u. In the case of a tie, we arbitrarily designate one node as the heavy child. We refer to the non-heavy children as light and let L(u) denote the set of all light children of u. The light subtrees of u are the subtrees of all light children of u. We define l(u) = 1 + ∑ v∈L(u) |T v | to be the light size of u. An edge is called a heavy edge if it connects a heavy child to its parent; otherwise it is a light edge. The set of all heavy edges creates the heavy-path decomposition of T , a disjoint set of (heavy) paths where every node in T belongs to exactly one path, see Figure 3. The heavy path decomposition has the following interesting property. If we treat each heavy path as a node, and each light edge as connecting two heavy-path nodes, we obtain a tree H(T ). This tree has height h(T ) ≤ log 2 n since the size of each light child is less than half the size of its parent. We refer to the level of a heavy path as the depth of the corresponding node in the decomposition tree, where the root has depth 0. We extend this notion to nodes, i.e., the level of a node v is the level of the heavy path to which v belongs.
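The following small sketch (Python; the tree is assumed to be given as a dictionary mapping each node to the list of its children, and all names are illustrative) computes the heavy child and the light size l(u) of every node, from which the heavy paths can be read off by repeatedly following heavy children.

def heavy_path_decomposition(children, root):
    size, heavy, light_size = {}, {}, {}

    def dfs(u):
        size[u] = 1
        heavy[u] = None
        for v in children.get(u, []):
            dfs(v)
            size[u] += size[v]
            if heavy[u] is None or size[v] > size[heavy[u]]:
                heavy[u] = v                     # child with the largest subtree
        light = [v for v in children.get(u, []) if v != heavy[u]]
        light_size[u] = 1 + sum(size[v] for v in light)   # l(u)
        return size[u]

    dfs(root)
    return heavy, light_size

Following heavy[u] from any node until it becomes None traces the heavy path containing that node; the light edges then connect these paths into the decomposition tree H(T).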
Drawing Algorithm
Our algorithm draws T incrementally in the order of a depth-first traversal of the corresponding heavy-path decomposition tree H(T ), i.e., given drawings of the lower-level heavy paths (the light children and their descendents) connected to a heavy path P in H(T ) we construct a drawing of P and its subtrees. Let P = (v 1 , . . . , v k ) be a heavy path. Then we draw each node v i of P in the center of a disk D i and place smaller disks containing the drawings of the light children of v i and their descendents around v i in two concentric annuli of D i . We guarantee perfect angular resolution at v i by connecting the centers of the child disks with appropriately spaced straight-line edges to v i . Next, we create the drawing of P and its descendents within a disk D by placing D 1 in the center of D and D 2 , . . . , D k on concentric circles around D 1 . We show that the radius of D is linear in the number n(P) of nodes descending from P and exponential in the level of P. In this way, at each step downwards in the heavy path decomposition, the total radius of the disks at that level shrinks by a constant factor, allowing room for disks at lower levels to be placed within the higher-level disks. Figure 2a shows a drawing of an unordered tree according to our method. Before we can describe the details of our construction we require the following simple geometric property. We define an (R, δ )-wedge, δ ≤ π as a sector of angle δ of a radius-R disk, see Figure 4.
Lemma 2.1. The largest disk that fits inside an (R, δ)-wedge, δ ≤ π, has radius $r = \frac{R \sin(\delta/2)}{1 + \sin(\delta/2)}$.

Proof. The largest disk inside the (R, δ)-wedge touches the circular arc and both radii of the wedge. Thus we immediately obtain a right triangle formed by the apex of the wedge, the center of the disk we want to fit, and one of its tangency points with the two radii of the wedge, see Figure 4. This triangle has one side of length r and a hypotenuse of length R − r. From $\sin(\delta/2) = \frac{r}{R-r}$ we obtain $r = \frac{R\sin(\delta/2)}{1+\sin(\delta/2)}$.

In the next lemma we show how to draw a single node v of a heavy path P given drawings of all its light subtrees.
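Before doing so, the following tiny sketch (Python; names are illustrative) evaluates the radius bound of Lemma 2.1 and the small/large-child test derived from it that is used in that construction.

import math

def inscribed_disk_radius(R, delta):
    # largest disk that fits in an (R, delta)-wedge, delta <= pi (Lemma 2.1)
    s = math.sin(delta / 2.0)
    return R * s / (1.0 + s)

def is_small_child(r_child, R, degree):
    # a child disk is "small" if it fits inside an (R, 2*pi/degree)-wedge
    return r_child <= inscribed_disk_radius(R, 2.0 * math.pi / degree)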
Lemma 2.2.
Let v be a node of T at level j of H(T) with degree d(v) ≥ 2 and two incident heavy edges. For each light child u ∈ L(v) assume there is a disk D_u of radius r_u = 2 · 8^{h(T)-j-1} |T_u| that contains a fixed drawing of T_u with perfect angular resolution and such that u is in the center of D_u. Then we can construct a drawing of v and its light subtrees inside a disk D such that the following properties hold:
1. the edge between v and any light child u ∈ L(v) is a straight-line segment that does not intersect any disk other than D_u;
2. the heavy edges do not intersect any disk D_u;
3. any two disks D_u and D_{u'} for u ≠ u' are disjoint;
4. the angular resolution of v is 2π/d(v);
5. the angle between the two heavy edges is at least 2π/3 and at most 4π/3;
6. the disk D has radius r_v = 8^{h(T)-j} l(v).

Proof. We assume that the heavy edge to the parent of v is directed horizontally to the left. We draw a disk D with radius r_v centered at v and create d(v) spokes, i.e., rays extending from v, including the fixed heavy edge, equally spaced by an angle of 2π/d(v). Obviously, every neighbor of v must be placed on a distinct spoke in order to satisfy properties 3 and 4. The main difficulty is that there can be child disks that are too large to place without overlap on adjacent spokes inside D.
Let D_max be the largest disk D_u over all u ∈ L(v) and let r_max be its radius. We split D into an outer annulus A and an inner disk B by a concentric circle of radius R = r_v − 2r_max, see Figure 5. We define a child u ∈ L(v) to be a small child if its radius $r_u \le \frac{R \sin(\pi/d(v))}{1+\sin(\pi/d(v))}$, and to be a large child otherwise. We further say D_u is a small (large) disk if u is a small (large) child. We denote the number of small children by n_s and the number of large children by n_l. By Lemma 2.1 we know that any small disk D_u can be placed inside an (R, 2π/d(v))-wedge. This means that we can place all n_s small disks on any subset of n_s spokes inside B without violating property 3. So once we have placed all large disks correctly, we can always distribute the small children on the unused spokes.
We place all large disks in the outer annulus A. Observe that $\sum_{u \in L(v)} r_u = 2 \cdot 8^{h(T)-j-1} (l(v) - 1) < r_v/4$, i.e., we can place all light children side by side on the diameter of a disk of radius at most r_v/4. If we order all light children along that diameter by their size we can split them into one disk containing the large disks and one containing the small disks, see Figure 5a. Assume that the large disks are arranged on the horizontal diameter of their disk and that this disk is placed vertically above v and tangent to D as shown in Figure 5a. Since that disk has radius at most r_v/4 we can use Lemma 2.1 to show that it always fits inside an (r_v, π/4)-wedge. If we now translate the large disks vertically upward onto a circle centered at v with radius r_v − r_max then they are still disjoint and they all lie in the intersection of A and the (r_v, π/4)-wedge. We now rotate them counterclockwise around v until the leftmost disk D_max touches the horizontal heavy edge. Thus all large disks are placed disjointly inside a π/4-sector of A. However, they are not centered on the spokes yet. Beginning from the leftmost large disk, we rotate each large disk D_u and all its right neighbors clockwise around v until D_u snaps to the next available spoke. Clearly, in each of the n_l steps we rotate by at most 2π/d(v) in order to reach the next spoke.
We now bound the number n_l of large children. By definition, a child u is large if $r_u > \frac{R \sin(\pi/d(v))}{1+\sin(\pi/d(v))}$, which translates into a lower bound on the number of nodes |T_u| in its subtree.
From this we obtain that for d(v) ≥ 5 we have n l < 3d(v)/8. So for d(v) ≥ 5 we can always place all large disks correctly on spokes inside at most half of the outer annulus A since we initially place all large disks in a π/4-wedge and then enlarge that wedge by at most 3d(v)/8 · 2π/d(v) = 3π/4 radians. For d(v) = 2 there are no light children, for d(v) = 3 we immediately place the single light child on its spoke without intersecting the two heavy edges, and for d(v) = 4 we place the two light children on opposite vertical spokes separated by the two heavy edges, which does not produce any intersections either.
Since we require at most half of A to place all large children, we can assign the remaining heavy edge to the spoke exactly opposite of the first heavy edge if d(v) is even. If d(v) is odd, we choose one of the two spokes whose angle with the fixed heavy edge is closest to π. Finally, we arbitrarily assign the n s small children to the remaining free spokes inside the inner disk B.
By construction, the drawing for v and its light subtrees obtained in this way satisfies all of the stated properties.

Fig. 6: Constructing the heavy path drawing by appending drawings of its heavy nodes. (b) Final transformation of the drawing.
Lemma 2.2 shows how to draw a single heavy node v and its light subtrees. It also applies to the root of T if we ignore the incoming heavy edge, and to the root node v_1 of a heavy path P = (v_1, . . . , v_k) at level l ≥ 1 if we consider the light edge uv_1 to its parent u as a heavy edge for v_1. We note that the last node v_k of P is always a leaf that is trivial to draw. For drawing an entire heavy path P = (v_1, . . . , v_k) we need to link the drawings of the heavy nodes into a path.

Lemma 2.3. Given a heavy path P = (v_1, . . . , v_k) and a drawing for each v_i and its light subtrees inside a disk D_i of radius r_i, we can draw P and all its descendents inside a disk D such that the following properties hold:
1. the heavy edge v_i v_{i+1} is a straight-line segment that does not intersect any disk other than D_i and D_{i+1};
2. the light edge connecting v_1 and its parent does not intersect the drawing of P;
3. any two disks D_i and D_j for i ≠ j are disjoint;
4. the drawing has perfect angular resolution;
5. the radius r of D is $r = 2 \sum_{i=1}^{k} r_i$.
Proof. Let v 1 be the root of P and let u be the parent of v 1 (unless P is the heavy path at level 0). We place the disk D 1 at the center of D and assume that the edge uv 1 extends horizontally to the left. We create k − 1 vertical strips S 2 , . . . , S k to the right of D 1 , each S i of width 2r i , see Figure 6a. Each disk D i will be placed inside its strip S i . So from v 1 we extend the ray induced by the stub reserved for the heavy edge v 1 v 2 until it intersects the vertical line bisecting S 2 . We place v 2 at this intersection point. By property 5 of Lemma 2.2 we know that the angle between the two heavy edges incident to a heavy node is between 2π/3 and 4π/3. Thus v 2 is inside a right-open 2π/3-wedge W that is symmetric to the x-axis. Now for i = 2, . . . , k we extend from v i the stub of the heavy edge v i v i+1 into a ray and place v i+1 at the intersection of that ray and the bisector of S i+1 . Since at each v i we can either place D i or its mirror image we know that one of the two possible rays stays within W . Since each disk D i is placed in its own strip S i no two disks intersect (property 3) and since heavy edges are straight-line segments within two adjacent strips they do not intersect any non-incident disks (property 1). The light edge uv 1 is completely to the left of all strips and thus does not intersect the drawing of P (property 2). Since we were using the existing drawings (or their mirror images) of all heavy nodes, their perfect angular resolution is preserved (property 4).
The current drawing has a width equal to the sum of the diameters of the disks D_1, . . . , D_k. However, it does not yet necessarily fit into a disk D centered at v_1 whose radius equals that sum of diameters. To achieve this we create k − 1 annuli A_2, . . . , A_k centered around v_1, each A_i of width 2r_i. Then, for i = 2, . . . , k, we either shorten or extend the edge v_{i-1} v_i until D_i is contained in its annulus A_i, see Figure 6b. At each step i we treat the remaining path (v_i, . . . , v_k) and its disks D_i, . . . , D_k as a rigid structure that is translated as a whole, see the translation vectors indicated in Figure 6b. In the end, each disk D_i is contained in its own annulus A_i and thus all disks are still pairwise disjoint. Since we only stretch or shrink edges of an x-monotone path but do not change any edge directions, the whole transformation preserves the previous properties of the drawing. Clearly, all disks now lie inside a disk D of radius $r = r_1 + 2\sum_{i=2}^{k} r_i \le 2\sum_{i=1}^{k} r_i$ (property 5).
Combining Lemmas 2.2 and 2.3 we now obtain the following theorem:

Theorem 2.4. Given an unordered tree T with n nodes, we can find a crossing-free straight-line drawing of T with perfect angular resolution that fits inside a disk D of radius 2 · 8^{h(T)} n, where h(T) is the height of the heavy-path decomposition of T. Since h(T) ≤ log_2 n, the radius of D is no more than 2n^4.
Proof. From Lemma 2.2 we know that for each node v of a heavy path P at level j the radius of the disk D containing v and all its light subtrees is r_v = 8^{h(T)-j} l(v). So if P = (v_1, . . . , v_k), Lemma 2.3 yields that P and all its descendents can be drawn in a disk of radius $r = 2\sum_{i=1}^{k} r_{v_i} = 2 \cdot 8^{h(T)-j} \sum_{i=1}^{k} l(v_i) = 2 \cdot 8^{h(T)-j}\, n(P)$, where n(P) is the number of nodes of P and its descendents. This holds, in particular, for the heavy path $\hat{P}$ at the root of H(T), which proves the theorem.
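To spell out the final bound, the following short calculation (using only h(T) ≤ log_2 n and the identity 8^{log_2 n} = n^3) shows how the radius of Theorem 2.4 follows for the root path, denoted $\hat{P}$.

\[
  r \;=\; 2 \cdot 8^{h(T)}\, n(\hat{P})
    \;\le\; 2 \cdot 8^{\log_2 n}\, n
    \;=\; 2\, n^{\log_2 8}\, n
    \;=\; 2 n^3 \cdot n
    \;=\; 2 n^4 .
\]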
Straight-line drawings for ordered trees
In many cases, the ordering of the children around each vertex of a tree is given; that is, the tree is ordered (or has a fixed combinatorial embedding). In the previous section we rely on the freedom to order subtrees as needed to achieve a polynomial area bound. Hence that algorithm cannot be applied to ordered trees with fixed embeddings. As we now show, there are ordered trees that have no straight-line crossing-free drawings with polynomial area and perfect angular resolution.
Specifically, we present a class of ordered trees for which any straight-line crossing-free drawing with perfect angular resolution requires exponential area. Figure 7a shows a caterpillar tree, which we call the Fibonacci caterpillar because of its simple behavior when required to have perfect angular resolution. This tree has as its spine a k-vertex path, each vertex of which has 3 additional leaf nodes embedded on the same side of the spine. When drawn with straight-line edges, no crossings, and perfect angular resolution, the caterpillar is forced to spiral (a single or a double spiral). The best drawing area, exponential in the number of vertices in the caterpillar, is achieved when the caterpillar forms a symmetric double spiral; see Figure 7c. The Fibonacci caterpillar shows that we cannot maintain all constraints (straight-line edges, crossing-free, perfect angular resolution, polynomial area) for ordered trees. However, as we show next, using circular arcs instead of straight-line edges allows us to respect the remaining three constraints. See, for example, Figure 7b.
Lombardi drawings for ordered trees
In this section, let T be an ordered tree with n nodes. As we have seen in Section 3, we cannot find polynomial area drawings for all ordered trees using straight-line edges. An augmentation of the straight-line edge requirement is the use of circular arcs as edges. Circular arcs are not only still easy to follow visually, but they also let us achieve the remaining three constraints, i.e., we can find crossing-free circular arc drawings with perfect angular resolution and polynomial area. We call a drawing with circular arcs and perfect angular resolution a Lombardi drawing, so in other words we aim for crossing-free Lombardi drawings with polynomial area.
The flavor of the algorithm for Lombardi tree drawings is similar to our straight-line tree drawing algorithm of Section 2: We first compute a heavy-path decomposition H(T) for T. Then we recursively draw all heavy paths within disks of polynomial area. Unlike before, we need to construct the drawing in a top-down fashion since the placement of the light children of a node v now depends on the curvature of the two heavy edges incident to v.
Our construction in this section uses the invariant that a heavy path P at level j is drawn inside a disk D of radius 2 · 4^{h(T)-j} n(P), where n(P) = |T_v| for the root v of P.
Drawing heavy paths
Let P = (v 1 , . . . , v k ) be a heavy path at level j of the heavy-path decomposition that is rooted at the last node v k . We denote each edge v i v i+1 by e i . Recall that the angle in an intersection point of two circular arcs is measured as the angle between the tangents to the arcs at that point. We define the angle α(v i ) for 2 ≤ i ≤ k − 1 to be the angle between e i−1 and e i in node v i (measured counter-clockwise). The angle α(v k ) is defined as the angle in v k between e k−1 and the light edge e = v k u connecting the root v k of P to its parent u. Due to the perfect angular resolution requirement for each node v i , the angle α(v i ) is obtained directly from the number of edges between e i−1 and e i and the degree d(v i ).
Lemma 4.1. Given a heavy path P = (v_1, . . . , v_k) and a disk D_i of radius r_i for the drawing of each v_i and its light subtrees, we can draw P with each v_i in the center of its disk D_i inside a large disk D such that the following properties hold:
1. each heavy edge e_i is a circular arc that does not intersect any disk other than D_i and D_{i+1};
2. there is a stub edge incident to v_k that is reserved for the light edge connecting v_k and its parent;
3. any two disks D_i and D_j for i ≠ j are disjoint;
4. the angle between any two consecutive heavy edges e_{i-1} and e_i is α(v_i);
5. the radius r of D is $r = 2\sum_{i=1}^{k} r_i$.

Proof. We draw P incrementally starting from the leaf v_1 by placing D_1 in the center M of the disk D of radius $r = 2\sum_{i=1}^{k} r_i$. We may assume that D_1 is rotated such that the edge e_1 is tangent to a horizontal line at v_1 and that it leaves v_1 to the right. All disks D_2, . . . , D_k will be placed with their centers v_2, . . . , v_k on concentric circles C_2, . . . , C_k around M. The radius of C_i is $r_1 + 2\sum_{j=2}^{i-1} r_j + r_i$, so that D_{i-1} and D_i are placed in disjoint annuli and hence, by construction, no two disks intersect (property 3). Each disk D_i will be rotated around its center such that the tangent to C_i at v_i bisects the angle α(v_i).
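For concreteness, the following small sketch (Python; names are illustrative) computes the radii of the concentric circles C_2, . . . , C_k exactly as in the proof above, so that consecutive disks end up in disjoint annuli.

def heavy_path_circle_radii(disk_radii):
    # disk_radii = [r_1, ..., r_k]; returns the radii of C_2, ..., C_k
    radii = []
    for i in range(2, len(disk_radii) + 1):          # i is the 1-based path index
        r_i = disk_radii[0] + 2 * sum(disk_radii[1:i - 1]) + disk_radii[i - 1]
        radii.append(r_i)
    return radii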
We now describe one step in the iterative drawing procedure that draws edge e i and disk D i+1 given a drawing of D 1 , . . . , D i . Disk D i is placed such that C i bisects the angle α(v i ) and hence we know the tangent of e i at v i . This defines a family F i of circular arcs emitted from v i that intersect the circle C i+1 , see Figure 8. We consider all arcs from v i until their first intersection point with C i+1 . Observe that the intersection angles of F i and C i+1 bijectively cover the full interval [0, π], i.e., for any angle α ∈ [0, π] there is a unique arc in F i that has intersection angle α with C i+1 . Hence we choose for e i the unique circular arc that realizes the angle α(v i+1 )/2 and place the center v i+1 of D i+1 at the endpoint of e i . We continue this process until the last disk D k is placed. This drawing of P realizes the angle α(v i ) between any two heavy edges e i−1 and e i (property 4). Note that for the edge from v k to its parent we can only reserve a stub whose tangent at v k has a fixed slope (property 2). Figure 10 shows an example.
Note that each edge e_i is contained in the annulus between C_i and C_{i+1} and thus does not intersect any other edge of the heavy path or any disk other than D_i and D_{i+1} (property 1). Furthermore, the disk D with radius $r = 2\sum_{i=1}^{k} r_i$ indeed contains all the disks D_1, . . . , D_k (property 5). Lemma 4.1 shows how to draw a heavy path P with prescribed angles between the heavy edges and an edge stub to connect it to its parent. Since each heavy path P (except the path at the root of H(T)) is the light child of a node on the previous level of H(T), that light edge is actually drawn when placing the light children of a node, which we describe next.

Fig. 8: Any angle α ∈ [0, π] can be realized.
Fig. 9: Placing a single disk D in the extended small zone of D_i (shaded gray).
Fig. 10: Drawing a heavy path P on concentric circles with circular-arc edges. The angles α(v_i) are marked in gray; the edge stub to connect v_7 to its parent is dotted.
Drawing light children
Once the heavy path P is drawn as described above, it remains to place the light children of each node v i of P. For each node v i the two heavy edges incident to it partition the disk D i into two regions. We call the region that contains the larger conjugate angle the large zone of v i and the region that contains the smaller conjugate angle the small zone. If both angles equal π, then we can consider both regions small zones.
For a node v_i at level j of H(T) we define the radius r_i of D_i as r_i = 4^{h(T)-j} l(v_i). All light children of v_i are at level j + 1 of H(T) and thus, by our invariant, every light child u of v_i is drawn in a disk of radius r_u = 2 · 4^{h(T)-j-1} |T_u|. Thus we know that r_u ≤ r_i/2; in fact, we even have $\sum_{u \in L(v_i)} r_u \le r_i/2$.
Light children in the small zone. Depending on the angle α(v i ), the small zone of a disk D i might actually be too narrow to directly place the light children in it. Fortunately, we can always place another disk D of radius at most r i /2 in an extension of the small zone along the annulus of D i in the drawing of P such that D touches e i−1 and e i and does not intersect any other previously placed disk, see Figure 9. If there is a single child u in the small zone then D = D u and we are done. The next lemma shows how to place more than one child. Proof. The idea of the algorithm for placing the l disks is to first place the disk D in the small zone as before. The disks D 1 , . . . , D l will then be placed within D so that no additional space is required.
In the first step of the recursive placement algorithm we either place D_1 or D_l (whichever has the smaller radius) together with a disk D' containing the remaining sequence of disks D_2, . . . , D_l or D_1, . . . , D_{l-1}, respectively. Without loss of generality, let r_1 ≤ r_l and thus in particular r_1 ≤ r/2, where r is the radius of D. In order to fit inside D, the disks D_1 and D' must be placed with their centers on a diameter of D, see Figure 11a. The degree of freedom that we have is the rotation of that diameter around the center of D. Then the locus of the tangent point of D_1 and D' is a circle Ĉ of radius r − 2r_1 around the center of D, see Figure 11b. There are exactly two circular arcs a_1 and a_2 tangent to Ĉ that also pass through v_i with the slope required for the edge to D_1. Let the two points of tangency on Ĉ be p_1 and p_2. Now we rotate D_1 and D' such that their point of tangency coincides with either p_1 or p_2, depending on which of them yields the correct embedding order of D_1 and D' around v_i. Clearly, a_1 or a_2 is then also tangent to D_1 and D'. Assume we choose p_1 and the corresponding arc a_1 as in Figure 11b. Then we can connect any point in D_1 to v_i with the unique circular arc of the required slope in v_i; we will describe the exact placement of that arc later. Any such edge clearly stays inside the horn-shaped region that encloses D_1 and is formed by a boundary arc of the small zone and a_1. Since a_1 separates D_1 from D', neither the new edge nor D_1 can interfere with any of the disks D_2, . . . , D_l and their respective edges, as long as they stay inside D' or connect to points in D'.

For placing D_2, . . . , D_l we recursively apply the same procedure again, now using D' as the disk D and a_1 as one of the boundary arcs. After l steps, we have disjointly placed all disks D_1, . . . , D_l inside the disk D such that their order respects the given tree order and no two edges intersect. Figure 11c gives an example.
Note that we require the edges e_{i-1} and e_i to be tangent to D, which is possible only for an opening angle α of the small zone of at most π. For any angle α ≤ π the arcs a_1 and a_2 always stay within the extended small zone and form at most a semi-circle. This does not hold for α > π.
Light children in the large zone. Placing the light children of a vertex v_i in the large zone of D_i must be done slightly differently from the algorithm for the small zone, since Lemma 4.2 holds only for opening angles of at most π. On the other hand, the large zone does not become too narrow and there is no need to extend it beyond D_i. Our approach splits the large zone into two parts that again have an opening angle of at most π, so that we can apply Lemma 4.2 and draw all children accordingly.
Let l be the number of light children in the large zone of D_i. We first place a disk D of radius at most r_i/2 such that it touches v_i and such that its center lies on the line bisecting the opening angle of the large zone. The disk D is large enough to contain the disjoint disks D_1, . . . , D_l for the light children of v_i along its diameter. We need to distinguish whether l is even or odd. For even l we create a container disk D'_1 for the disks D_1, . . . , D_{l/2} and a container disk D'_2 for D_{l/2+1}, . . . , D_l. Now D'_1 and D'_2 can be tightly packed on the diameter of D. Using a similar argument as in Lemma 4.2 we separate the two disks by a circular arc through v_i that is tangent to the bisector of α(v_i) at v_i. Since D is centered on the bisector this is possible even though the actual opening angle of the large zone is larger than π. If l is odd, we create a container disk D'_1 for the disks D_1, . . . , D_{⌊l/2⌋} and a container disk D'_2 for D_{⌈l/2⌉+1}, . . . , D_l. The median disk D_{⌈l/2⌉} is not included in any container. Then we apply Lemma 4.2 to D and the three disks D'_1, D_{⌈l/2⌉}, D'_2 along the diameter of D, see Figure 12a. The separating circular arcs in v_i are again tangent to the bisector of α(v_i), which is, since l is odd, also the correct slope for the circular arc connecting v_i to the median disk D_{⌈l/2⌉}.
In both cases we split the large zone and the sequence of light children to be placed into two parts that each have an opening angle at v_i of at most π between a separating circular arc and the edge e_{i-1} or e_i, respectively. Next, we move D'_1 and D'_2 along the separating circular arcs, keeping their tangencies, until they also touch the edge e_{i-1} or e_i, respectively. Then we can apply Lemma 4.2 to both container disks and thus place all light children in the large zone, see Figure 12b.
Drawing light edges. The final missing step is how to actually connect a heavy node v_i to its light children given a position of v_i and positions of all disks containing its light subtrees. Let u be a light child of v_i and let D_u be the disk containing the drawing of T_u. When placing the disk D_u in the small or large zone of v_i we made sure that a circular arc from v_i with the tangent required for perfect angular resolution at v_i can reach any point inside D_u without intersecting any other edge or disk.
At the other end of the edge, we know by Lemma 4.1 that u is placed in the outermost annulus of D_u and that it has a stub for the edge e = uv_i. This stub is the required tangent for e in order to obtain perfect angular resolution at u. Let C_u be the circle that is the locus of u if we rotate D_u and the drawing of T_u around the center of D_u.
There is again a family F of circular arcs with the correct tangent at v_i that lead towards D_u and intersect the circle C_u. As observed in Lemma 4.1, the intersection angles formed between F and C_u bijectively cover the full interval [0, π], i.e., for any angle α ∈ [0, π] there is a unique circular arc in F that has an intersection angle of α with C_u. In order to correctly attach u to v_i we first choose the arc a in F that realizes an intersection angle of α(u)/2 with C_u, where α(u) is the angle between e and the heavy edge from u to its heavy child that is required for perfect angular resolution at u. Let p be the intersection point of that arc with C_u. Then we rotate D_u and the drawing of T_u around the center of D_u until u is placed at p, see node v_7 in Figure 10. Since the stub of u for e also has an angle of α(u)/2 with C_u, the arc a indeed realizes the edge e with the angles in both u and v_i required for perfect angular resolution. Furthermore, a does not enter the disk bounded by C_u and hence it does not intersect any part of the drawing of T_u other than u.
We can summarize our results for drawing the light children of a node as follows (Lemma 4.3): Let v be a node of T at level j of H(T) with two incident heavy edges. For every light child u ∈ L(v) assume there is a disk D_u of radius r_u = 2 · 4^{h(T)-j-1} |T_u| that contains a fixed drawing of T_u with perfect angular resolution and such that u is exposed in the outer annulus of D_u. Then we can construct a drawing of v and its light subtrees inside a disk D, potentially with an extended small zone, such that the following properties hold:
1. the edge between v and any light child u ∈ L(v) is a circular arc that does not intersect any disk other than D_u;
2. the heavy edges do not intersect any disk D_u;
3. any two disks D_u and D_{u'} for u ≠ u' are disjoint;
4. the angular resolution of v is 2π/d(v);
5. the disk D has radius r_v = 4^{h(T)-j} l(v).
By combining Lemmas 4.1 and 4.3 we obtain the following theorem: Theorem 4.4. Given an ordered tree T with n nodes we can find a crossing-free Lombardi drawing of T that preserves the embedding of T and fits inside a disk D of radius 2 · 4^{h(T)} n, where h(T) is the height of the heavy-path decomposition of T. Since h(T) ≤ log_2 n the radius of D is no more than 2n^3. Figure 2b shows a drawing of an ordered tree according to our method. We note that instead of asking for perfect angular resolution, the same algorithm can be used to construct a circular-arc drawing of an ordered tree with any assignment of angles between consecutive edges around each node that add up to 2π. The drawing remains crossing-free and fits inside a disk of radius O(n^3).
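The area bound depends only on the height of the heavy-path decomposition. The sketch below is a hypothetical helper, not part of the algorithm; it assumes the convention that h(T) counts the maximum number of light edges on a root-to-leaf path, which is consistent with the bound h(T) ≤ log_2 n used above, and checks the resulting radius bound on a small example.

```python
import math

def heavy_path_light_edges(children, root=0):
    """Maximum number of light edges on any root-to-leaf path (one convention for h(T))."""
    size = {}

    def subtree_size(v):
        size[v] = 1 + sum(subtree_size(c) for c in children.get(v, []))
        return size[v]

    subtree_size(root)

    def light_depth(v):
        kids = children.get(v, [])
        if not kids:
            return 0
        heavy = max(kids, key=lambda c: size[c])          # child with the largest subtree
        return max(light_depth(c) + (c != heavy) for c in kids)

    return light_depth(root)

# Complete binary tree on 7 nodes: at most two light edges on any root-to-leaf path.
children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
n = 7
h = heavy_path_light_edges(children)
print(h, h <= math.log2(n))          # 2 True
print(2 * 4**h * n, "<=", 2 * n**3)  # radius bound of Theorem 4.4: 224 <= 686
```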
Conclusion and Closing Remarks
We have shown that straight-line drawings of trees can be constructed with perfect angular resolution and polynomial area, by carefully ordering the children of each vertex and by using a style similar to balloon drawings in which the children of any vertex are placed on two concentric circles rather than on a single circle. However, using our Fibonacci caterpillar example we showed that this combination of straight lines, perfect angular resolution, and polynomial area can no longer be achieved if the children of each vertex may not be reordered. For trees with a fixed embedding, Lombardi drawings in which edges are drawn as circular arcs allow us to retain the other desirable qualities of polynomial area and perfect angular resolution. In the appendix we report on a basic implementation and some practical improvements of the straight-line drawing algorithm.
Our work opens up new problems in the study of Lombardi drawings of trees, but much remains to be done in this direction. In particular, our polynomial area bounds seem unlikely to be tight, and our method is impractically complex. It would be of interest to find simpler Lombardi drawing algorithms that achieve perfect angular resolution for more limited classes of trees, such as binary trees, with better area bounds.
(a) The unmodified straight-line tree drawing algorithm. Though the area is bounded, the space is not used efficiently.
(b) A space-optimized drawing that still maintains the stated guarantees.
A Implementation Details
Although theoretically interesting, tree drawings with perfect angular resolution are also of practical importance. To that end, we have implemented a basic version of our straight-line drawing algorithm. The algorithm, though polynomially bounded, is still far from desirable from a practical viewpoint. In particular, as Figure 13a illustrates, significant space is left between sibling nodes, as our algorithm essentially focuses on providing a guaranteed bound. As Figure 13b demonstrates, however, far better use of space can be achieved with some simple heuristic refinements.
We highlight three key improvements that we made to the algorithm. They do not affect the overall layout, so the same guaranteed bound as in the regular algorithm still holds, yet they yield quite simple gains in space efficiency.
- In the construction, only large nodes are placed in the outer region; the remaining small nodes are placed inside the inner annulus. There is, however, no reason not to place further small nodes in the outer region as well. We therefore continue with the greedy approach and repeatedly insert the next largest child in the outer region, skipping the spoke associated with the heavy edge, until no more nodes fit; the remaining spokes are filled with the smaller children (a minimal sketch of this greedy strategy is given at the end of this appendix). We also note that in many cases all children fit inside the outer region, as the largest light children are often small enough to fit in one wedge region. Figure 14 illustrates the improvement.
- The radii of the light subtrees are increased to allow each disk to fit maximally within its wedge region. To ensure that the subtrees are still constrained initially to the primary layout algorithm, we defer the scaling of the nodes until after the layout has completed. This has the effect of using considerably more of the allocated space, as demonstrated again by Figure 14.
(a) A portion of an unmodified straight-line tree drawing algorithm that only placed large nodes on the outside annulus.
(b) The same tree but with space-filling optimization in place.
Fig. 14: A partial snapshot of a tree drawing demonstrating filling the disk associated with the light subtree.
- The heavy path does not completely fill the disk associated with its head node. As a result, we also increase this radius by a constant factor after having laid out the main drawing, see Figure 15.
- Other improvements are possible as well. One notable intentional omission was to use the heavy-path breakdown for a subtree only if the entire subtree could not fit within the node's light-children radius. In many cases, the heavy path is small enough to still fit within this radius. We kept the path present to highlight the key feature of our algorithm that allows for the bounded-area construction. The path itself could also be designed to use more of its underlying region; however, we do not see any easy way to do this effectively while avoiding intersections with the path at a later point. Nonetheless, it is a promising area for space improvement.
(a) A portion of an unmodified straight-line tree drawing of a caterpillar-like tree.
(b) The same tree but with space-filling optimization in place.
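To make the first heuristic above concrete, here is a minimal, abstracted sketch of the greedy assignment of children to the outer region. The size values and the single scalar capacity are placeholders for the angular space a child's disk occupies around its parent; they are our simplification, not the implementation's actual geometry.

```python
def outer_region_assignment(child_sizes, outer_capacity):
    """Greedily keep placing the largest remaining child in the outer region while it fits;
    everything left over goes to the inner annulus."""
    remaining = sorted(child_sizes, reverse=True)
    outer, used = [], 0.0
    for s in list(remaining):
        if used + s <= outer_capacity:
            outer.append(s)
            used += s
            remaining.remove(s)
    return outer, remaining  # (outer region, inner annulus)

# With capacity 1.0 the three largest children fit outside, the two smallest go inside.
print(outer_region_assignment([0.5, 0.3, 0.2, 0.15, 0.1], outer_capacity=1.0))
```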
A Neural-Network-defined Gaussian Mixture Model for particle identification applied to the LHCb fixed-target programme
Particle identification in large high-energy physics experiments typically relies on classifiers obtained by combining many experimental observables. Predicting the probability density function (pdf) of such classifiers in the multivariate space covering the relevant experimental features is usually challenging. The detailed simulation of the detector response from first principles cannot provide the reliability needed for the most precise physics measurements. Data-driven modelling is usually preferred, though sometimes limited by the available data size and different coverage of the feature space by the control channels. In this paper, we discuss a novel approach to the modelling of particle identification classifiers using machine-learning techniques. The marginal pdf of the classifiers is described with a Gaussian Mixture Model, whose parameters are predicted by Multi Layer Perceptrons trained on calibration data. As a proof of principle, the method is applied to the data acquired by the LHCb experiment in its fixed-target configuration. The model is trained on a data sample of proton-neon collisions and applied to smaller data samples of proton-helium and proton-argon collisions collected at different centre-of-mass energies. The method is shown to perform better than a detailed simulation-based approach, to be fast and suitable to be applied to a large variety of use cases.
Introduction
In large high-energy physics experiments, particles deposit energy in several detectors, some of which are specialized in charged or neutral particle identification (PID). Usually a combination of techniques among Cherenkov and transition radiations, ionization loss, time-of-flight measurements and calorimetry is simultaneously employed to guarantee redundancy and a wide kinematic coverage. To exploit all the available information, multivariate classification techniques are typically used, combining the response of all involved detectors into variables, called PID classifiers in the following, optimizing the separation among different particle species.
Once a PID classifier has been built, predicting its performance is crucial to physics analysis. In general, this implies the knowledge of its probability density function (pdf) for each particle species in the multivariate space spanning all the features that can affect the detector response. These include the kinematic properties of the particle under scrutiny, but also the underlying event, i.e. the signals due to other particles in the same event. As the number of these features increases, the problem quickly becomes intractable with simple methods. The detailed simulation of the detector response for the physical process of interest provides a way to predict the pdf from first principles. However, the amount of simulated data needed to accurately cover the multivariate feature space is often prohibitive and the imperfections of the simulation need to be determined through a comparison to real events. Data-driven modelling is thus preferred: calibration channels are used, in which the particle species is determined by its kinematics, independently of the response of the detectors used for PID. An example is the Λ → pπ− decay, where a high-purity sample of events can be selected thanks to the isolated decay vertex and the narrow mass peak, by which the proton and the pion can be distinguished by their kinematic properties. For a given decay mode, the distribution of the selected particles in the feature space will differ from that of the calibration channels. Ideally, one would like to determine from the control sample, for each particle species, the marginal pdf of the PID classifier(s) as a function of the relevant experimental features, in order to predict it for the physics channel under study.
In this paper, we propose a method to approach this problem based on machine-learning techniques. Control channels in data are used to identify the experimental features affecting the PID classifier and to derive a statistical model of its pdf. The method is conceived to be generic, not relying on a specific set of experimental feature variables, to be trainable in a relatively short time, allowing the analyst to easily modify the feature space and tune the hyperparameters for best performance, and to use a state-of-the-art neural network (NN) architecture. After discussing the general idea of this approach, a concrete implementation is discussed as a benchmark case, namely the modelling of charged-particle identification in the fixed-target data collected by the LHCb experiment. We conclude by summarizing the performance and general applicability of the proposed method.
The method
The basic idea of the proposed approach is to empirically describe the marginal pdf of a PID classifier x, depending on a set of features θ, through a Gaussian Mixture Model (GMM) as

x_p ∼ Σ_{j=1}^{N_{g,p}} α_{j,p}(θ) G(x; µ_{j,p}(θ), σ_{j,p}(θ)),   (1)

where the index p refers to the particle species, G is a Gaussian distribution with mean µ and standard deviation σ, and N_g is the number of Gaussian functions in the model. The parameters α, µ, σ, whose values can have a non-linear dependence on θ, are estimated from a maximum likelihood fit to a set of n_p training events for each particle species. The estimation is performed using a set of Multi Layer Perceptron (MLP) neural networks, where the loss function is the negative logarithm of the likelihood function. This definition is part of the novelty of the presented approach and, in contrast to other generative models used in similar applications such as GANs and VAEs, aims to explicitly model a probability density function. Training events from the calibration channels can be contaminated by unwanted background, which can be subtracted using the sPlot method [1], evaluating a weight w for each event to quantify its probability to belong to the considered signal. Weighting the training events can also be useful to modify the distribution in feature space of the training sample, when there are large differences with respect to the target physics sample. In this way the model will be more accurate in the feature-space regions where it is more useful to the application. When weights are used, the loss function becomes the weighted sum of the per-event negative log-likelihoods [2][3][4]. The model can be readily extended, as in the case of our benchmark example, to a set x of (approximately) linearly correlated target variables, by replacing the one-dimensional Gaussian functions with multivariate normal distributions. With N_g large enough, the model can reproduce any smooth function with exponential-like tails, which is the typical behaviour of PID classifiers. The main advantage of the chosen functional form is the speed of normalization when performing the fit. This offers the possibility to use a simple NN design with relatively short training time and to tune the hyperparameters of the model in a reasonable time. We make use of state-of-the-art open-source machine-learning libraries to implement the model, namely scikit-learn [5] and Keras [6] with TensorFlow [7] backend. These offer support for the NN backward error propagation and auto-differentiation and can exploit GPU acceleration. Input variables are preprocessed to ease the training phase. A linear transformation is applied to the target variables x, to map the range of their values to the interval [0, 1), using the MinMaxScaler algorithm in scikit-learn. For the other features, whose pdf is not aimed to be reproduced by the model, a transformation is applied converting their pdf to a Gaussian distribution with µ = 0 and σ = 1, implemented by the QuantileTransformer algorithm in scikit-learn. Equalizing the range and, for the features, the functional form of the pdf allows a standardized initialization of the NN layers and speeds up the numerical estimation of the derivatives in the loss-function minimization procedure. The chosen NN architecture, implemented using Keras/TensorFlow, consists of three tanh-activated layers with a constant number of nodes and an output layer matched to the shape of the predicted parameters.
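A minimal sketch of this construction is given below, for a one-dimensional classifier with sPlot weights passed alongside the target. The layer sizes, the number of components and the softmax/softplus constraints on the mixture parameters are illustrative choices of ours (the benchmark model described later leaves the component weights unconstrained), so this is a sketch of the idea, not the analysis code.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

N_GAUSS = 4      # number of mixture components (illustrative)
N_FEATURES = 8   # number of conditioning features theta (illustrative)

# MLP mapping the features theta to the 3 * N_GAUSS raw mixture parameters.
inputs = keras.Input(shape=(N_FEATURES,))
h = keras.layers.Dense(64, activation="tanh")(inputs)
h = keras.layers.Dense(64, activation="tanh")(h)
h = keras.layers.Dense(64, activation="tanh")(h)
outputs = keras.layers.Dense(3 * N_GAUSS)(h)  # raw (alpha, mu, sigma) per component
model = keras.Model(inputs, outputs)

def weighted_gmm_nll(y_true, y_pred):
    """Weighted negative log-likelihood of a 1D Gaussian mixture.

    y_true[:, 0] holds the PID classifier x, y_true[:, 1] the sPlot weight w.
    """
    x = y_true[:, 0:1]
    w = y_true[:, 1]
    raw_alpha, mu, raw_sigma = tf.split(y_pred, 3, axis=-1)
    alpha = tf.nn.softmax(raw_alpha, axis=-1)      # mixture weights summing to one
    sigma = tf.nn.softplus(raw_sigma) + 1e-4       # strictly positive widths
    gauss = tf.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    pdf = tf.reduce_sum(alpha * gauss, axis=-1)
    return -tf.reduce_sum(w * tf.math.log(pdf + 1e-12))

model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
              loss=weighted_gmm_nll)
# model.fit(theta_train, np.column_stack([x_train, w_train]),
#           epochs=200, batch_size=512)
```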
Each training requires a sensible choice for the following hyperparameters:
• the number of nodes in the NN hidden layers;
• the number of epochs, namely the iterations in the training process;
• the batch size, namely the number of training events to be considered for each iteration of the minimization procedure;
• the learning rate, namely the initial size of the step on the parameter values to be applied on each epoch (which is then linearly decreased with the epoch to ease the convergence).
To speed up the training, it is performed in two steps: a crude estimation of the parameters is first obtained by considering them independent of the features θ. This first fit is performed with a reduced number of epochs, and its result is used to initialize the second one. During this second step, the parameters are monitored along the iterations to check that their values converge smoothly. This is a first cross-check against a possible inappropriate value of the learning rate, which would lead to an oscillating behaviour, or against overtraining effects, which would lead to abrupt variations of the parameters. The training dataset is then split in ranges of the features with similar population and the corresponding x histograms are compared to the fitted pdf, as obtained separately for each feature interval. A good agreement between the data and predicted distributions for all the feature values validates the trained model. A concrete implementation of the proposed approach on data collected by the LHCb experiment is presented in the following Section and available on GitLab [8].
Charged PID in the LHCb fixed-target data
The LHCb detector, described in detail in Refs. [9,10], is a single-arm forward spectrometer conceived for heavy-flavour physics in pp collisions at the CERN LHC. Charged particles reconstructed by the spectrometer are dominated by three hadron species: pions, kaons and protons. Charged particle identification is provided by two Ring Imaging Cherenkov (RICH) detectors [11] for momenta ranging from 2 to 100 GeV/c. The RICH1 detector is optimized for the lower momentum region, covering the angular acceptance of [25, 300] mrad, while the RICH2 was designed for the forward particles ([15, 120] mrad) with a higher momentum. The momentum thresholds for Cherenkov light emission are 2.5, 9.3 and 17.7 GeV/c in RICH1 and 4.4, 15.6 and 29.6 GeV/c in RICH2 for pions, kaons and protons, respectively. For each track, the yields and positions of the recorded photons, whose emission angle depends on the incident particle velocity and thus on the particle mass for a given momentum, are checked against the three particle-species hypotheses. PID classifiers are built as logarithms of the likelihood ratio between the proton and pion (DLL_p,π), the proton and kaon (DLL_p,K) or the kaon and pion (DLL_K,π) hypotheses. A finely segmented scintillating detector (SPD) is positioned downstream of the RICH2 to provide a measurement of the detector occupancy already at the earlier stages of the reconstruction procedure. Since 2015, the LHCb experiment also operates in fixed-target mode, recording collisions between the LHC beams and gas targets injected into the LHC beam pipe [12]. Fixed-target runs of limited duration with different beam energy and target gas species have been collected, as summarized in Fig. 1.
Figure 2: Two-dimensional template fit to the PID distribution of negatively charged tracks for a particular bin (21.4 < p < 24.4 GeV/c, 1.2 < p_T < 1.5 GeV/c). The (DLL_p,K, DLL_p,π) distribution, shown in the left plot, is fitted to determine the relative contribution of π−, K− and p̄ particles, using simulation to determine the template distributions and the fraction of fake tracks (which are barely visible). In the right plot, the result of the fit is projected onto the variable arg(DLL_p,K + i DLL_p,π) [14].
The first physics result obtained in fixed-target configuration was the measurement of antiproton production from a proton-helium (pHe) sample acquired in 2016 [14]. The antiprotons were distinguished from the negatively charged pions and kaons through a template fit to the two-dimensional space (DLL_p,π, DLL_p,K), as illustrated in Fig. 2. The templates were obtained from a detailed simulation of the detector and from the Λ̄ → p̄π+, K_S^0 → π−π+ and φ → K−K+ calibration channels in data for antiprotons, pions and kaons, respectively. The available data statistics was limited, while the abundant samples recorded in pp collisions, which are routinely used to determine the PID performance in LHCb results [15], only presented a small overlap in feature space with the pHe data. Indeed, the response of the RICH detectors is strongly affected by the detector occupancy, which is on average much larger in pp than in pHe collisions, and by the particle kinematics, which also differ between the two samples. Moreover, while the origin of pp collisions is defined within a few centimetres, the recorded beam-gas collisions spread over about one metre along the beam line.
The limited accuracy of the PID response modelling turned out to be one of the dominant uncertainties in the antiproton production measurement [14]. A larger sample of proton-neon (pNe) collisions with a nucleon-nucleon centre-of-mass energy of √s_NN = 68 GeV was recorded in 2017, providing a source of calibration events acquired in conditions more similar to the other fixed-target samples. We apply here the method proposed in Section 2 to model the PID response using the pNe dataset and to predict the classifier pdf for track selections performed on smaller pHe and pAr data samples, taking into account the different feature distributions due to the different particle selection criteria and experimental conditions such as the gas target and beam energy. The underlying hypothesis is that the response of the RICH detectors is stable across the different data takings.
Calibration channels
Three decays of light hadrons abundantly produced in the recorded collisions are used to model the PID response to particles of known species: Λ̄ → p̄π+ for antiprotons, K_S^0 → π−π+ for pions, φ → K−K+ for kaons. Antiprotons in Λ̄ decays are identified solely from kinematics, as they always have higher momentum than the pions. In all cases, the control decays are distinguished from the background using a set of requirements on the quality and the kinematics of the reconstructed tracks and requiring that the reconstructed invariant mass of the two-track combination is compatible with the decaying particle mass. The association of the final-state particle with signals in the RICH subdetectors is also imposed. For φ → K−K+ decays, where background contamination is more abundant, one of the two candidate final-state particles is required to be positively identified as a kaon, and only the other one is used in the control sample. The invariant mass distributions for the three channels after selection are shown in Fig. 3. The purity of the Λ̄ and K_S^0 decay samples is measured to be larger than 99%, while a significant residual background is present in the φ sample. The sPlot method is used to compute a weight for each event, representing the probability that it belongs to the signal category, based on a fit describing the invariant mass distribution as the sum of a signal and a background component, as illustrated in the left plot of Fig. 4. The signal component is parametrized as a convolution of a Breit-Wigner and a Gaussian distribution, while a first-order polynomial function is used to model the background. The weights obtained with the sPlot technique can then be applied to all variables uncorrelated with the invariant mass, as is the case, to an excellent approximation, of the PID classifiers, in order to remove the background. To verify the procedure, the variable arg(DLL_p,π + i DLL_p,K), which, as shown in the right plot of Fig. 4, exhibits three distinct peaks corresponding to the particle species, is plotted before and after the sPlot weighting. The pion contamination from background events is clearly suppressed.
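The φ signal-plus-background mass model described above can be sketched compactly as a Voigt profile (a Breit-Wigner convolved with a Gaussian) plus a linear term. The function and parameter names below are our own illustrative choices, not the fit code used in the analysis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def phi_mass_model(m, n_sig, m0, sigma, gamma, a0, a1):
    """Breit-Wigner (width gamma) convolved with a Gaussian (resolution sigma),
    i.e. a Voigt profile, plus a first-order polynomial background."""
    return n_sig * voigt_profile(m - m0, sigma, gamma) + a0 + a1 * m

# Hypothetical usage on a binned K+K- invariant-mass spectrum (values in MeV/c^2):
# popt, pcov = curve_fit(phi_mass_model, bin_centres, bin_counts,
#                        p0=[1e4, 1019.5, 1.5, 4.3, 10.0, 0.0])
```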
Feature selection
The RICH performance strongly depends on the particle kinematics, with fast variations when crossing the Cherenkov threshold values or the angular acceptance boundaries of the two detectors. The number and topology of the other tracks radiating in the detectors are also critical to the PID performance. The track reconstruction quality is relevant as well, as it affects the determination of the centre of the track's Cherenkov ring.
Figure 4: Application of the sPlot technique to the φ → K−K+ decay. The left plot shows the fit of the φ candidates' mass distribution; the right one the validation of the evaluated weights, comparing a combination of the PID variables in which the three particle species can be easily distinguished before (in blue) and after (in red) the application of the weights.
The choice of the features to be considered is a key point in the application of the method: enough features should be used to describe all the variance in the detector response, while their number should be limited to achieve a reasonable training time. For the purpose of this study, we consider those describing:
• the particle momentum p, its longitudinal (p_z) and transverse (p_T) components and the pseudorapidity η;
• the track position inside the PID detectors: the three coordinates of the track position closest to the beam and the two slopes with respect to the z axis;
• the occupancy in the detector: the number of all reconstructed tracks through the spectrometer (nTracks) and the number of energy deposits (hits) in the two RICH (nRICH1Hits and nRICH2Hits) and in the SPD detectors (nSPDHits);
• the quality achieved in the reconstruction of the track: the fit χ² per degree of freedom (track χ²/ndf) or the number of hits in the tracking detectors consistent with the track (track ndf).
As a first assessment of the relevance of these features to the description of the PID response, the distribution of the classifier is studied for pions of the pHe dataset in different ranges of each feature. After applying the requirement arg(DLL_K,π + i DLL_p,π) < −1 to reject kaons and protons, the dataset is split, for each considered feature, into subsets of similar population. The Kolmogorov-Smirnov (KS) distance between the x distributions is computed for each pair of subsets, taking the maximal distance as a measure of the dependence of the x variables on the considered feature. An example is shown in Fig. 5, where the dependence on the pion pseudorapidity (KS = 0.54) is found to be clearly more important than that on the y position of the pHe collision (KS = 0.01). The most relevant features, with KS larger than 0.20, are listed in Tab. 1. Some of these features are known to be redundant or highly correlated with each other. The longitudinal and transverse momenta p_z and p_T are not considered further, as these features are determined from η and p, which exhibit a larger KS. The SPD occupancy is also dropped, as it is highly correlated with the occupancy in the RICH detectors, which is causally connected with the RICH response. On the other hand, variables with a low KS value can still be important for explaining differences between the training and validation samples. As an example, the distribution of the longitudinal position of the tracks' origin in the training sample, where tracks originate from decays, strongly differs from that of tracks promptly produced at the collision vertex. For this reason, all coordinates of the track position of closest approach to the beam (denoted with "poca" in the following text and figures) are included, despite KS values of 0.06, 0.01 and 0.10 for the x, y and z coordinates, respectively.
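A minimal sketch of this KS-based ranking is given below; the dataframe and column names are placeholders for the calibration ntuple, not the actual analysis variables.

```python
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

def ks_relevance(df, classifier="DLL_p_pi", feature="eta", n_slices=5):
    """Maximum pairwise KS distance of the classifier across slices of one feature.

    df is assumed to hold one row per selected calibration track.
    """
    # Split the sample into slices of similar population along the feature.
    slices = pd.qcut(df[feature], q=n_slices, duplicates="drop")
    groups = [g[classifier].to_numpy() for _, g in df.groupby(slices, observed=True)]
    # Take the largest two-sample KS distance over all pairs of slices.
    return max(
        ks_2samp(groups[i], groups[j]).statistic
        for i in range(len(groups))
        for j in range(i + 1, len(groups))
    )

# Features with ks_relevance(...) above ~0.20 would be retained, as in the text.
```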
Model training
The two PID classifiers DLL_p,π and DLL_p,K in the training samples show, to a good extent, a linear correlation. Therefore, we model their distribution according to Eq. 1, replacing the one-dimensional Gaussian with a two-dimensional multivariate normal distribution G(x; µ, Σ), where the vector x represents the two PID classifiers (DLL_p,π, DLL_p,K) and Σ is their covariance matrix, Σ = [[σ₁², ρσ₁σ₂], [ρσ₁σ₂, σ₂²]]. For each component in the GMM, the free parameters are thus the weight α, the central values µ, the standard deviations (σ₁, σ₂) and the correlation coefficient ρ. The model is trained separately for each of the three particle species following the procedure outlined in Section 2 and with the hyperparameters listed in Tab. 2. The choice of the input parameters reflects a higher complexity for the K_S^0 → π−π+ and the φ → K−K+ calibration channels. Indeed, as a consequence of the many threshold effects involved in the process, the distributions of these classifiers may turn out significantly different from a multivariate normal distribution. For example, Fig. 6 shows the distribution of the DLL_p,π classifier computed on the calibration pions. Comparing the distributions in four different intervals of the pion momentum, a second minor peak, arising at lower momentum, can be clearly seen. For the φ line, where the weight for the signal hypothesis obtained with the sPlot technique is applied, only a fraction of events gives a large variation of the loss function, resulting in a training more prone to statistical fluctuations. To compensate for this effect, a higher batch size is chosen. To differentiate the components in the GMM, the parameters are first randomly initialized from uniform distributions in the following ranges:
• an interval of half-width √(⟨x²⟩ − ⟨x⟩²) centred on the average ⟨x⟩ for the mean values µ, where ⟨·⟩ denotes the sample average (for the φ calibration channel, the background-subtracted distributions are considered);
• [0, 2π] for the correlation angle;
• up to √(⟨x²⟩ − ⟨x⟩²) for the widths σ_{1,2}.
The training is performed using a mini-batch gradient descent optimized with the RMSProp algorithm. Gaussian weights are left free to fluctuate towards negative values in order to enhance the stability of the training procedure. The evolution of the loss function with the number of epochs is monitored and Fig. 7 shows the resulting curves for the three calibration channels, all presenting a first steep decrease followed by a slow one and finally gentle oscillations near the minimum. The trained models present O(10^5) parameters, depending on the chosen complexity of the NN and the GMM. For the K_S^0 calibration line and the parameters listed in Tab. 2, this corresponds to a measured training time of approximately 10 hours on a NVIDIA K80 GPU. The dependence of the Gaussian parameters on the features is verified at the end of each training procedure to check for the presence of possible overtraining effects, which would manifest as rapid oscillations adapting to statistical fluctuations in the training sample. Fig. 8 shows an example of this for the Λ̄ → p̄π+ calibration channel, where the expected smooth and monotonic behaviour can be observed. The generalization from the training to the application samples described in Sec. 3.5 definitively excludes overtraining effects.
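For reference, each mixture component used above is the standard correlated two-dimensional normal density. The helper below is a small sketch with our own illustrative names, not the analysis implementation.

```python
import numpy as np

def bivariate_gauss_pdf(x1, x2, mu1, mu2, s1, s2, rho):
    """Density of a correlated two-dimensional normal distribution,
    standing in for one GMM component in (DLL_p,pi, DLL_p,K)."""
    z = (((x1 - mu1) / s1) ** 2
         - 2.0 * rho * (x1 - mu1) * (x2 - mu2) / (s1 * s2)
         + ((x2 - mu2) / s2) ** 2)
    norm = 2.0 * np.pi * s1 * s2 * np.sqrt(1.0 - rho ** 2)
    return np.exp(-z / (2.0 * (1.0 - rho ** 2))) / norm
```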
Validation
To verify that the model properly learns the non-trivial correlations between the PID classifiers and the considered features, its predictions are compared to the actual distributions of the control data in two-dimensional intervals of all possible pairs of features θ. The interval boundaries for each feature are set to roughly contain the same number of calibration entries. For each interval, a prediction is obtained by randomly choosing calibration entries in the selected interval and generating values of the target variables according to the fitted model for the corresponding values of θ. Fig. 9 shows an example of this validation for the K_S^0 → π−π+ calibration channel in intervals of the π− particle momentum and transverse momentum, and Figs. 10 and 11 show the proton and kaon channels for other feature pairs, respectively. The curves illustrate the non-trivial correlations of the target variables with the features, and the excellent agreement between the data and the predictions demonstrates that the correlations are correctly learned by the model. For the pion calibration channel, the validation is also repeated considering an interval with low pNe statistics, and Fig. 12 shows that, also in this bin, the trained model is able to generate a smooth template based on the parametric interpolation of the available data.
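The template generation used for these comparisons amounts to sampling the fitted mixture at the feature values of the selected calibration entries. The sketch below reuses the parameter transformations of the earlier one-dimensional example; names and details are ours and purely illustrative.

```python
import numpy as np

def sample_template(model, theta_interval, n_gauss, seed=0):
    """Generate predicted classifier values for calibration entries in one feature interval.

    `model` and the parameter ordering follow the 1D sketch given earlier.
    """
    rng = np.random.default_rng(seed)
    params = model.predict(theta_interval)
    raw_alpha, mu, raw_sigma = np.split(params, 3, axis=-1)
    alpha = np.exp(raw_alpha) / np.exp(raw_alpha).sum(axis=-1, keepdims=True)  # softmax
    sigma = np.log1p(np.exp(raw_sigma)) + 1e-4                                 # softplus
    samples = []
    for a, m, s in zip(alpha, mu, sigma):
        j = rng.choice(n_gauss, p=a)            # pick one mixture component
        samples.append(rng.normal(m[j], s[j]))  # draw x from that component
    return np.array(samples)
```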
Application
The PID model, trained separately for each calibration channel selected from the pNe data sample, is applied to two independent lower-statistics samples of pHe and proton-argon (pAr) collisions. A prediction for the classifiers' pdf is produced with the same method adopted for the validation in Sec. 3.4, but considering the differently-distributed pHe or pAr features as input. To evaluate the model performance and the possible improvement over the PID model adopted so far in antiproton production studies in fixed-target data [14], the data-driven templates for the DLL_p,π and DLL_p,K variables are compared with those obtained from the detailed simulation of the detector. For the pAr sample, the application is performed on the DLL_p,π and DLL_K,π variables. Simulated data samples are generated for fixed-target collisions with EPOS-LHC [16]; the interaction of the generated particles with the detector and its response are implemented using the Geant4 toolkit [17], as described in Ref. [18]. The simulation provides an overall good description of the input features, and the small observed residual disagreements with respect to data can be corrected with reweighting techniques. For the pHe sample, the same approach as in Ref. [14] is followed and simulated events are reweighted in two steps, according to the detector occupancy (using nSPDHits) and the track transverse momentum.
Figure 11: Comparison for the φ → K−K+ calibration channel in the track fit χ²/ndf ∈ [1.5, 4), η ∈ [1.8, 3.7) bin between the bidimensional x distributions in (red) the pNe data and (blue) those predicted with the trained model (top plot), their projections onto the two axes (second row) and onto the DLL_p,K axis in intervals of the DLL_p,π variable (third row).
To verify that the simulation-based PID model is not limited by this simple reweighting technique, a more sophisticated approach is adopted for the pAr sample, where a boosted decision tree algorithm [19] is used to determine a single event weight depending on p_T and the number of hits in the SPD and RICH1 subdetectors. After the reweighting, simulation-based binned templates for the three particle species are obtained. A fourth particle category, called ghost in the following, is considered for tracks reconstructed from hits belonging to different simulated particles, accounting for a few per cent of the candidate tracks. The corresponding template can only be obtained from simulation and is used also in the data-driven model. Template fits are finally performed on the pHe and pAr data, using either the simulation-based or the data-driven templates. Binned maximum likelihood fits to the bidimensional target variable distribution are done in kinematic intervals, leaving the relative abundances of the particle species as free parameters of the fit. Examples of the fit projections to the pHe data are shown in the same kinematic intervals in Fig. 13 for the simulation-based templates and in Fig. 14 for the data-driven templates, and directly compared to data in Fig. 15. Similar plots of fit projections in five other kinematic intervals are shown for the pAr sample in Figs. 16-18. In general, the data-based method is found to provide a more accurate prediction of the PID classifiers' distribution than the full-simulation one. As expected from the larger size of the training dataset and from the smooth functional form assumed in the model, it is less affected by statistical fluctuations and also appears to be less biased. To better quantify the comparison between the two sets of templates, the fit quality is measured from the two-dimensional KS distance between the fitted and the actual data distribution, in kinematic bins. The difference between the values obtained with the simulation-based and the data-driven templates is shown in Fig. 19 for the pHe (top) and pAr (bottom) data samples, respectively. In both cases, the observation of a positive difference in most of the bins demonstrates that the templates produced with the method presented in this work describe the data better than those based on a detailed simulation.
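A binned maximum likelihood fit of the species fractions can be sketched with standard tools. The Poisson-likelihood implementation below is a simplified illustration with placeholder template shapes and yields; the actual fit software used in the analysis is not specified here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def fit_species_yields(data_hist, templates):
    """Binned maximum-likelihood (Poisson) fit of species yields to a 2D PID histogram.

    `templates` is a list of 2D histograms (e.g. pi, K, p, ghost); shapes are illustrative.
    """
    data = data_hist.ravel()
    tmpl = np.array([t.ravel() / t.sum() for t in templates])  # normalized template shapes

    def nll(yields):
        mu = np.clip(yields @ tmpl, 1e-9, None)                # expected counts per bin
        return np.sum(mu - data * np.log(mu) + gammaln(data + 1.0))

    x0 = np.full(len(templates), data.sum() / len(templates))
    res = minimize(nll, x0, method="L-BFGS-B",
                   bounds=[(0.0, None)] * len(templates))
    return res.x  # fitted yield for each species
```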
Systematic uncertainties
Data-driven methods for modelling the detector response, such as the one proposed in this paper, while unaffected by biases related to the unavoidable imperfections of the simulation, depend on the selection and peculiarities of the calibration samples employed. We show in this section how the proposed model can be used to investigate these systematic effects. In our benchmark example, the PID response to kaons is the one expected to be the most difficult to model reliably using the φ → K−K+ decay, because of the sizeable background contamination and the presence of the tag kaon, which is typically produced within a small angle with respect to the probe track and can thus bias the PID response. Indeed, for pp LHCb data, kaon PID calibration typically relies on the D*+ → D0(→ K−π+)π+ channel [15], whose statistics is too limited to be useful in fixed-target data. To evaluate the possible bias due to the correlation between the tag and probe kaons in the φ channel, a comparison between the PID responses of kaons from the two calibration channels in 2017 pp data is performed in momentum and pseudorapidity bins. A clear difference, notably in the DLL_K,π variable, is observed in some kinematic bins. We then train our model for the kaon response using the D*+ → D0(→ K−π+)π+ decay and use the model to predict the templates for kaons in the φ channel. The result, shown in Fig. 20 for the kinematic bin featuring the largest difference between the two samples, demonstrates that the difference is mostly explained by our model, which takes into account the residual difference in track and event topology between the two decays within the kinematic bin. As the model trained on the D*+ channel ignores the presence of the companion kaon and is not affected by the background of the φ channel, the systematic effects that have been investigated are found to be minor. This exercise shows that the proposed model can accurately take into account the different distributions of all the relevant experimental features among different data samples, well beyond a simple equalization in kinematic bins. In general, training the model on different calibration channels, also resorting to data recorded at different energies and with different collision systems, provides a way to evaluate the systematic effects related to the choice of a particular calibration dataset.
Conclusions
In summary, we presented a novel approach based on machine-learning techniques to model particle identification classifiers in high-energy physics experiments. The GMM, fitted to calibration channels using a set of MLP NNs, can be used to identify the relevant experimental features to be considered and to provide a smooth multivariate parametric representation of the PID response, making optimal use of the available training data. State-of-the-art machine-learning software libraries and computing resources provide the possibility to train models with O(10^5) parameters in a relatively short time, depending on the chosen complexity of the NN and the GMM. The method was demonstrated on a concrete case, namely the modelling of the particle identification response in the LHCb experiment, where it was shown to be able to predict non-trivial correlations among experimental features and to describe data more accurately than a detailed first-principle simulation of the detector. The method is expected to be employable in a larger variety of use cases dealing with experimental observables, not necessarily related to particle identification, depending on a sizeable number of experimental features.
Figure 19: Difference of the Kolmogorov-Smirnov distance for the modelling of the bidimensional (top) pHe or (bottom) pAr target variable distributions, comparing templates produced with a full-simulation approach and with the machine-learning method employing the pNe calibration data discussed in this document. In all bins presenting a positive value, a more precise description of the data is achieved with the latter approach.
Cross-cultural adaptation and content validity of the Activity Card Sort for Brazilian Portuguese
Introduction: Occupations are the core domain of occupational therapy. They give value and meaning to one's life, and they are influenced by individual characteristics and by culture. The occupations people are engaged in at a specific point in time compose their occupational repertoire. Currently, no measure in Brazil can capture the breadth of occupations in which older adults participate. Objective: To describe the process of cross-cultural adaptation and face validity of the Activity Card Sort (ACS) for Brazilian Portuguese. Method: In the first stage, three bilingual independent researchers translated the ACS; the back-translation was performed by two Americans fluent in Portuguese and synthesized, creating the Brazilian version. Four experts and 20 older adults analyzed the relevance of the activities and the clarity of the ACS photographs. Descriptive analyses were performed, Fleiss' kappa assessed agreement between the older adults, and the assessment data were organized and analyzed. Results: The ACS-Brazil is composed of 83 activities. As in the original ACS, it is divided into the areas of instrumental, social, high-demand leisure, and low-demand leisure activities. Cultural and semantic adjustments were made to the photographs to reflect the Brazilian environment. Conclusion: The ACS-Brazil is a tool that can assess Brazilian older adults' occupational repertoire. It can support occupational therapists' interventions, guiding them through an occupation-based, client-centered approach.
Introduction
The occupations in which people engage in daily life give meaning and purpose to their lives (Fox et al., 2017), reflecting the values and influences of a social group and a culture, which favors the construction or affirmation of their identity (Polatajko et al., 2013a).Occupations also have the potential to influence health and well-being, with engagement and performance in occupations considered the central objective of occupational therapy (Polatajko et al., 2013b).
It is important to emphasize that this construction and modification of the occupational repertoire throughout life is influenced by personal and environmental factors and by the characteristics and functions of the occupations (Baum et al., 2005).Thus, when considering the dynamic nature of this triad (person-environment-occupation), it is important to analyze all dimensions that can predict the ability to perform activities, engage in occupations, and participate in social life (Engel-Yeger & Rosenblum, 2017).For that, occupational therapists choose to use different assessment instruments to guide their analysis of the elements involved in human occupation.
Among the existing measures, the Activity Card Sort (ACS) stands out. It is an instrument based on occupation (here understood as participation in a set of activities that make up the individual's occupational repertoire) and centered on the client. Carolyn Baum and Dorothy Edwards originally developed this tool in the United States to measure engagement in daily activities by elderly people with cognitive deficits (Baum & Edwards, 2001). The instrument is in its second edition (Baum & Edwards, 2008) and has been used not only with the population with dementia but also with healthy individuals (Hamed et al., 2011), people with Parkinson's disease (Poerbodipoero et al., 2016), with sensory limitations (Roets-Merken et al., 2013; Engel-Yeger & Rosenblum, 2017), multiple sclerosis (Orellano et al., 2012), post-stroke (Spitzer et al., 2011), among others.
In this evaluation based on self-report, we can identify data related to the participation -current and previous -of the elderly people in instrumental activities of daily living, leisure (high and low demand), and social activities (Chan et al., 2006).
Unlike the existing assessment instruments, the ACS uses photographs that represent the elderly people in different activities (Baum & Edwards, 2008).There are three versions of the ACS that vary with the context, that is, the evaluation formats directed to those who are institutionalized (form A), in rehabilitation (form B), or to those who live in the community (form C).The evaluated individuals inform, as response categories, if they have never participated in the activity, if they participate less, if they remain engaged, or if they have stopped doing them today (compared to a past or after an event) (Kniepmann & Cupler, 2014).In this way, we can determine the level of participation through the percentage of activities that are preserved, compared to a previous situation (before the disease, hospitalization, or a certain time, for example) (Orellano et al., 2014).
Clinically, the ACS has been used during the initial assessment as a basis for developing an appropriate treatment plan for the individual and/or as an outcome measure of occupational therapy intervention programs (Sabari et al., 2015). This assessment tool has good acceptability, usability, reliability, and validity in the different countries that use it (Engel-Yeger & Rosenblum, 2017), which justifies the importance of developing an ACS version for Brazilian culture. The instrument also covers most (Baum & Edwards, 2008) of the domains of activity and participation provided for in the International Classification of Functioning, Disability and Health (ICF) of the World Health Organization (Organização Mundial de Saúde, 2003).
Considering the great relevance of using an instrument capable of evaluating participation and engagement in activities and the scarcity of specific instruments of occupational therapy to measure the participation of the elderly population in occupations in Brazil, this study aimed to describe the process of cross-cultural adaptation and content validation of the Activity Card Sort to Brazilian Portuguese.
Study design and ethical considerations
This is a cross-sectional methodological study, in which the cross-cultural adaptation of the Activity Card Sort (ACS) was carried out.The development of the Brazilian version of the ACS was authorized by the Occupational Therapist, Professor Carolyn M. Baum, holder of the copyright.
The Research Ethics Committee approved research by Resolution 466, of December 12, 2012 (Brasil, 2012), under opinion 2,773,267.Participation was voluntary and the elderly participants signed an informed consent form.To ensure anonymity, each research participant received a numeric code.
Procedures
The process of cross-cultural adaptation followed the recommendations of Beaton et al. (2000) and occurred in five stages.In the first stage, the ACS was translated into Brazilian Portuguese, and three bilingual translators participated independently.In the second stage, the researchers compared the translated versions and synthesized the translation, producing a single version of the Brazilian ACS translation.
In the third stage, a back-translation was carried out independently, with two professionals whose native language was English and who were fluent in Brazilian Portuguese.They had no contact with the original assessment instrument.In this stage, the back-translation was compared with the original version of the ACS and the synthesis of the back-translation was produced.With these data, the researchers created the first version of the ACS instrument for the Brazilian context.
The fourth stage was the analysis of the instrument by the expert committee, which assessed the clarity, relevance, and equivalence between the translated version (1st version) and the original version. The committee was composed of four occupational therapists, each holding a doctorate and more than five years of experience in gerontological occupational therapy. Each expert received the manual with instructions on how to apply the instrument, the first Brazilian version of the ACS, and a form for assessing equivalences. Equivalent items were those that obtained 75% agreement among the experts. In these cases, the items were modified, deleted, adapted, or maintained, according to the suggestions proposed by the committee. The results are shown here. This process resulted in the Brazilian version of the ACS, which was submitted to the elderly population.
The evaluation produced by the experts was applied to 20 elderly people who lived in the community (fifth stage -pre-test) to analyze the understanding of the instrument and eliminate items not understood.The choice for this number of elderly people was based on the reliability test of the original ACS (Baum & Edwards, 2001).The elderly sample was selected for convenience, as they attended projects aimed at longevity at the educational institution where this research was developed.All the elderly participants lived in their communities, in the West Zone of Rio de Janeiro.To characterize the sample, we collected data related to age, gender, and education.In a complementary way, a cognitive function was tracked, through the Mini-Mental State Examination -MEEM, 2 nd edition (Spedo et al., 2018).
For the analysis of content validity, the ACS translated into Brazilian Portuguese was presented to the elderly participants and they answered a 04-point Likert scale to inform about the representativeness of the images and the relevance of the activities described in the instrument.The participants opted for a minimum value of 1 point when informing that "the representation of the image with the description of the action is not clear" or "this activity is not relevant for the Brazilian context"; the value of 2 points, for the answers that informed that it is "unclear or relevant"; 3 points for "partially clear or relevant; up to a maximum value of 4 for those cards where "the representation of the image was very clear" or because "the activity is very relevant".The researchers and two members of the expert committee analyzed the results of the elderly participants and adopted the same recommendation by the expert committee for the analysis of equivalences.Activities were maintained in which at least 50% of participants agreed to be relevant, according to the study carried out in the United Kingdom (Laver-Fawcett & Mallinson, 2013).
After all the phases of the process of cross-cultural adaptation and content validity, we produced a Brazilian version of the ACS and sent it to the author of the instrument who approved the ACS -Brazil.
Data analysis
The data were stored in a Microsoft Excel® spreadsheet and later transported and analyzed in a database in SPSS (Statistical Package for Social Sciences) for Windows, version 21.0.
We used descriptive statistics to characterize the elderly participants, including indexes of central tendency (mean) and dispersion (standard deviation) for the participants' age. To calculate the Z score and percentile of the cognitive screening test, we used the normative data from the 2nd edition of the Mini-Mental by Spedo et al. (2018). Thus, participants whose scores were no more than 1.5 standard deviations below the norm were not considered to show a reduction suggestive of cognitive decline.
The means and standard deviations were calculated for all items of the assessment instrument to determine the activities commonly performed by the elderly participants in the pre-test. Fleiss' kappa measured the agreement among the elderly participants regarding the relevance of the items.
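For reference, this agreement statistic can be computed with standard tools. The sketch below uses statsmodels on a toy ratings matrix; the study's actual data are not reproduced here.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per ACS card, one column per rater (20 older adults),
# values are the 1-4 relevance categories; random toy data for illustration only.
rng = np.random.default_rng(1)
ratings = rng.integers(1, 5, size=(83, 20))

counts, _ = aggregate_raters(ratings)   # cards x categories table of votes
print(f"Fleiss kappa = {fleiss_kappa(counts):.3f}")
```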
In addition to the structured questions, in the questionnaire, there was a field in which the participants could suggest modifications to the photographs, insertions, and exclusions of activities.These data were compiled and analyzed by the first, third, fourth, and fifth authors.
Results
Table 1 shows the areas and examples of some activities in the first Brazilian version of the assessment instrument. The ACS was translated by three Brazilian professionals fluent in English. All were occupational therapists with more than five years of experience. The back-translation was carried out by two American women fluent in Portuguese, one of whom had a background in health and the other in the humanities.
In the translation process, eight items from the original ACS underwent language adjustments to improve the understanding of the actions described on the cards.Thus, verbal expressions (playing or doing, for example) were inserted in the translations.The back-translation synthesis was compared to the original version of the ACS and 14 items showed differences in English expressions, but with no change in the meaning or content of the actions described since the words in English were synonymous with the words that were on the original cards.After analysis, the first Brazilian version of the ACS was formed.
In the construction of the first Brazilian version, in the cards in which the activities were separated by "slash -/", the researchers chose to insert the expression "or" to clarify that the participant could perform one of the tasks or one of the examples inserted in each card.This version was submitted to the expert committee and to the pre-test with the elderly participants to perform the semantic, idiomatic, and conceptual equivalences.
The elderly who participated in the 5 th stage -pre-test -were 60 years old or over and lived in the city of Rio de Janeiro.The elderly participants were active and attended activities aimed at active aging that were offered at the educational institution to which the first author is linked.For these participants, we applied Form C of the first Brazilian version of the Activity Card Sort, that is, the version of the ACS made for elderly people living in the community.
Regarding the characterization of the elderly participants in the pre-test stage, 20 people participated, 14 of whom were female (70%) and 6 male (30%), with a mean age of 68.1 years (SD = ± 7.14). The educational level of the participants varied between eight and 12 years, with 8 years of study (n = 1; 5%), 11 years of study (n = 8; 40%), or above 12 years of study (n = 11; 55%).
By the analysis of the MMSE, no participant had scores that suggested cognitive decline.Even for those who scored below the expected lower limit for age and education, this value was not significant.Thus, it was not suggestive of a decline in the cognitive functions assessed by the instrument.These data ensure that the participants would understand the questions asked in the pre-test (Table 2).
In the expert committee's analysis of the 89 images representing the ACS activities, the occupational therapists agreed (100%) on the relevance of the activities on 46 (51.68%) cards, and on one activity (1.1%) all agreed that it was not relevant (elderly Brazilians going to the casino). On 18 further (20.22%) cards, the agreement was 75% among the judges. For the remaining 24 activities, the agreement between experts was between 25 and 75%. In line with the recommendations of ACS validation studies carried out in the United Kingdom, activities were maintained when at least 50% of the participants agreed they were relevant (Laver-Fawcett & Mallinson, 2013).
The elderly who participated in the semantic and cultural equivalence assessment considered that all the activities on the cards were well represented by the images, with only two elderly people indicating that the activity of taking care of children was not well represented. Fleiss' kappa compared the agreement of the sample of 20 elderly people regarding the relevance of the ACS-Brazil items. The classification suggested for the coefficient value ranges followed the interpretation of Altman (1999). The test indicated reasonable agreement among the elderly participants' assessments, k = 0.317 (95% CI, 0.302 to 0.332), p < 0.001. After the 4th and 5th stages, during the analysis of the qualitative data from the interviews, the participants suggested removing cards, adapting environments and objects in the photographs, modifying terms, separating cards, and inserting new activities. Table 3 describes the suggestions for insertion and removal of cards by the experts and the elderly participants.
In instrumental activities, most experts suggested replacing the card "cooking dinner" (card nº 07) with "cooking", because many people may cook at lunch or at dinner time. For this evaluation instrument, it is important to verify engagement in this occupation regardless of the period in which the activity is carried out; therefore, the card was changed. Experts and the elderly participants suggested removing the card "to fuel the car" (card No. 11) because, in Brazil, this activity is carried out by third parties. To complement the list of instrumental activities, the experts suggested the insertion of the cards "medication care" and "use public or private transportation (taxi or mobile transport application)", activities commonly performed in Brazilian culture, which were included in the final version of the ACS-Brazil.
In low-demand leisure activities, the elderly participants and experts did not consider the cards "to write letters" (card No. 43) and "to go to the casino" (card No. 48) to be useful, as they were associated with activities currently unusual or prohibited in the country, respectively.On the other hand, in the cards "manual works" (card nº 25) and "playing cards" (card nº 31), they suggested to remove the examples "quilting" and "bridge" described in the cards, respectively because they are activities usually not performed by Brazilians."Knitting or embroidery" remained to exemplify handicrafts with needles, as well as "patience or poker" for the card that represents card games.
In the "board games" card (card nº 27) the expression "chess" was inserted next to the already existing example "checkers", as well as the expression "computer games" (card nº 29) was expanded to "computer, cellphone or tablet games", to contemplate those who engage in games, but use other technological devices.
For high-demand leisure, most experts and elderly participants pointed out that "playing golf" (card nº 60), "garden games" (card nº 68), "going camping" (card nº 69), and "canoeing/boating/sailing" (card nº 70) were not representative of the Brazilian context. Although these activities exist in Brazil (except garden games), they were associated either with a very privileged economic situation (playing golf and sailing, for example) or with activities rarely carried out by elderly people in Brazil (camping, for example). Also, most experts (75%) assessed that "playing tennis or another sport with a racket" (card nº 65) is not commonly practiced in the country, due to the high cost of this sport. Thus, the suggestions to remove these items from the assessment instrument were accepted.
On the other hand, all items in social activities were considered useful. However, the elderly participants suggested separating the card "dating/spending time with friends" (card nº 88) into "spending time with friends", with the activity "dating" being included in the card "to be with the spouse or partner" (card nº 87).
Experts and the elderly participants also suggested modifications to the photographic images to contemplate objects that are most recently used and adapt the environments to the Brazilian context.For example, in the card about the practice of team sports, the suggestion was to remove the image of the elderly playing baseball to insert an image of a soccer game.In other images, they suggested more modern objects such as current models of television or the use of cell phones to take photographs, for example.Thus, all the photographs were redone by a professional photographer and, subsequently, approved by the Occupational Therapist, Professor Carolyn Baum, author of the original version of the ACS.
The expert committee also suggested adapting the response categories of the ACS instrument, replacing the term "illness/injury" with "health problem/illness" and the term "admission" with "hospitalization/start of treatment". In version C of ACS-Brazil, they also suggested inserting the response category "new activity", to capture engagement in occupations that were not previously part of the elderly person's repertoire but are part of the current repertoire. After applying the pre-test, the researchers decided to expand the category "does not do it since the age of 60" to "does not do it since the age of 60 or never done", because this avoided doubt about which response category should be chosen when the person had never engaged in the activity. The process of cross-cultural adaptation was then concluded, and the assessment instrument was named ACS-Brazil. The version was sent to the author of the instrument, who approved the entire cross-cultural adaptation process. The resulting tool consists of a list of 83 activities, divided into four distinct areas: instrumental activities, low-demand leisure, high-demand leisure, and social activities, shown in Table 4. Finally, Table 5 shows the response categories investigated according to Forms A (for institutionalization), B (for people recovering from a health problem/illness), or C (for people living in the community) that can be used in the ACS-Brazil instrument.
Form A – Institutional Version:
He did it before the health problem/illness or hospitalization/start of treatment.
He didn't do it before the health problem/illness or hospitalization/start of treatment.

Form B – Recovery Version:
He didn't do it before the current health problem/illness.
He continued to do it during the health problem/illness.
He has been doing less since the health problem/illness.
He gave up doing it due to the health/illness problem.
New activity since the health problem/illness.

Form C – Version for people living in the community:
He hasn't been doing it since he turned 60 or has never done it.
He does it now (at the same level as before).
He does it less.
He gave up the activity.
New activity.
Discussion
In the occupational therapy process, assessments are the legitimate resources that assist in decision-making during the development of therapeutic plans, and measure the effects of interventions (Katz & Baum, 2012;Sabari et al., 2015), demonstrating part of the professional identity and, consequently, providing data for the professional category to inform about the need to create and organize the offer of its services to the population (Law et al., 2005;Fox et al., 2017).
In the assessment, opting for the use of standardized instruments seems to increase the reliability and rigor for the identification of the variables that these tools propose to analyze.Besides having the standard for their administration and scoring, these measures provide information on reliability and validity, data that are fundamental for the interpretation of results (Unsworth, 2000).
The Activity Card Sort, an instrument translated and adapted into Portuguese in this study, was also translated, validated, and adapted in different countries such as Spain, Puerto Rico, the United Kingdom, Israel, Hong Kong, Jordan, Singapore, the Netherlands, South Korea and Australia (Alegre-Muelas et al., 2019; Gustafsson et al., 2017). It is still in the process of cross-cultural adaptation and validation in Germany, Belgium, and Austria. Studies point to excellent validity, internal consistency, and reliability (Katz et al., 2003; Poerbodipoero et al., 2016; Gustafsson et al., 2017).
Considering that the instrument has good psychometric properties and knowing that its validity and usefulness are culturally dependent (Engel-Yeger & Rosenblum, 2017), research on cross-cultural adaptation of the ACS for the Brazilian context was of singular importance.
The implementation of the Brazilian version of the Activity Card Sort (ACS -Brazil) in clinical practice can facilitate the relationship between an occupational therapist and his patient, as the assessment instrument will contribute to a vast knowledge of the individual's occupational repertoire, his level of engagement (current and previous) and participation in instrumental, leisure and social activities (McNamara et al., 2016;Fox et al., 2017;Tsé et al., 2017).Also, the instrument considers the activities that are most significant for that individual, which favors the implementation of an intervention that values the collaborative process among the elderly, their caregivers/family members, and occupational therapists (Nielsen et al., 2019).
In the world scenario, participation has been a concept widely discussed by the ICF as fundamental to support health (Organização Mundial de Saúde, 2003) and that is fully based on engagement in occupations.Thus, the ACS can be a tool that occupational therapists can use to understand the person-occupation-environment triad and understand how health can be maintained, promoted, or improved through engaging in an occupation.
The translation and adaptation process followed the framework of Beaton et al. (2000) and the procedure recommended by the authors of the original version of the ACS (Baum & Edwards, 2008). Of the 89 activities of the original version of the assessment, eight were removed following suggestions from the experts and the elderly participants. Most items were removed with cultural relevance as the central reason. Activities such as playing golf or canoeing, although performed in Brazil, are infrequent. A study carried out in Brazil with the pediatric version of the ACS (PACS) also showed that the practice of these sports is not relevant for the Brazilian population (Pontes et al., 2016).
During the translation and adaptation process, the removal of items was common to other versions, as well as the Brazilian version.In the Israeli version of the ACS, two activities were removed, hunting and bowling (Katz et al., 2003).In the Hong Kong version, activities such as writing letters, riding bicycles, and sewing a quilt were removed (Chan et al., 2006).In the version of the ACS developed in Puerto Rico (Orellano et al., 2012), six activities were removed (among them, hunting, garden games, bird watching, and hiking), while in the Arabic version religious aspects influenced the removal of some items such as betting or going to casinos (Hamed et al., 2011).
On the other hand, during the translation/creation process, the inclusion of new items was carried out in all versions of the evaluation.In the Australian ACS, 12 items were included, such as bingo, surfing the internet, playing bocce, or taking a day trip (Gustafsson et al., 2017).In the Arabic version of the ACS, 19 items were included, six related to religious activities, such as going to the mosque (Hamed et al., 2011).The UK version of ACS included seven new items, which were not present in any previously developed version, such as voting, meditating, and attending evening classes, for example (Laver-Fawcett & Mallinson, 2013).The Spanish version also had items not included in other versions, such as going out for drinks, napping, and looking for a job (Alegre-Muelas et al., 2019).The findings are consistent with the ACS-Brazil, which included two activities, one of which is unprecedented compared to the other versions (medication care).
Regarding the number of items in the evaluation, the Brazilian version has 83 items, a number similar to the original version (89 items), the Australian version (82 items), the Israeli version (88 items), Puerto Rican version (82 items) and the Spanish version (79 items).The highest number of activities was found in the UK ACS (91 items), while the lowest number was found in the Hong Kong ACS, with only 65 items.Again, the cultural factor influenced the number of activities.Since the United Kingdom is composed of four countries, a greater number of activities was necessary to list the occupational repertoire of the population to be assessed.
For the adaptation and validation process, we need further studies to analyze the psychometric characteristics of the ACS-Brazil, verifying its convergent, divergent, and discriminative validity.However, the steps taken have shown that the evaluation can be configured as a good instrument to measure the repertoire of occupations of elderly Brazilians.
Final Considerations
The cross-cultural adaptation process of the Activity Card Sort was adjusted so that the evaluation represented the activities in which the elderly participate in the Brazilian culture.This tool can contribute to the improvement of the provision of occupational therapy services in the country, as this instrument proposes to understand the central domain of this profession: the engagement and participation of individuals in occupations.Also, the ACS-Brazil will compare the occupational repertoire of the elderly people within the national territory and between international studies, as well as it can be useful to outline the treatment plan or to evaluate the intervention programs, over time.
Table 2 .
Characterization of the cognitive function of the pre-test participants.
Table 3 .
Suggestions for Activity Card Sort -Brazil.
Table 4 .
Example of activities in the Final version of Activity Card Sort -Brazil.
Table 5 .
Response categories according to ACS-Brazil Forms A, B, or C.
Properties of aged GFRP reinforcement grids related to fatigue life and alkaline environment
In recent years, even though Fiber Reinforced Polymer (FRP) composites have been widely used for the strengthening of civil buildings, a new generation of materials has been studied and proposed for historical masonry construction. These buildings, mainly made of stone work, are common in many areas of Europe and Asia, and recent earthquakes have been the cause of many catastrophic failures. The brittleness of unreinforced historic masonry can be considerably reduced using new, lighter-weight retrofitting materials such as FRP, even though limitations have emerged due to material and mechanical compatibility with poor substrates. Thus, fibrous reinforcements have been used as long fibres incorporated into a cement or lime matrix, which better matches the properties of ancient masonry. The use of low-strength fibers such as glass and basalt, with respect to carbon, in the presence of an alkaline matrix brought out durability issues, due to the chemical vulnerability of common glass and basalt fibres. The objective of this research is to explore the effects of selected aqueous environments and fatigue loading on the mechanical and physical properties of composite grids, made of E-CR (Electrical/Chemical Resistance) glass fibers and epoxy-vinylester resin, used as tensile reinforcement in new composite reinforced mortar systems. Glass-fiber-reinforced polymer (GFRP) coupons were subjected to tensile testing and a severe protocol of durability tests, including alkaline environment and fatigue tensile loads. Accelerated ageing tests were used to simulate long-term degradation in terms of chemical attack and the consequent reduction of tensile strength. The ageing protocol consisted of immersion for 30 days at 40 °C in an alkaline bath made of deionized water and a Ca(OH)2 solution, 0.16% by weight. Aged and unaged GFRP specimens were also tested under tensile fatigue cycles up to 1,000,000 cycles at a nominal frequency of 7.5 Hz. After this severe conditioning, the tests indicate good tensile strength retention of the GFRP in the absence of fatigue loads, while a significant loss in fatigue life was experienced when both alkaline exposure and fatigue loads were applied.
Introduction and Research Significance
Masonry buildings are characteristic of the architectural heritage throughout most of Europe, North Africa and Asia.However, masonry constructions are notoriously vulnerable to earthquakes due to the almost total lack of tensile strength of the masonry material.Traditional methods for retrofitting and restoration of masonry constructions have sometimes demonstrated to be inappropriate: Reinforced Concrete (RC) roofs and floors, concrete jacketing of wall panels can highly enhance the structural behavior of masonry buildings, but they also produce an increment in dead loads, increase the structural stiffness reducing ductility, present problems of oxidation of steel reinforcement.
From the 1990s, there has been some movement away from traditional construction materials for retrofitting applications, toward lighter-weight solutions. FRP (Fiber Reinforced Polymer) materials are particularly suited to strengthening pre-existing masonry constructions and can be used to extend the fatigue life or provide an increase in strength of a structure [1][2][3][4].
However, there is limited guidance available on the use of FRPs in the strengthening of masonry structures. The long-term behavior of FRP materials used in masonry strengthening needs to be investigated, especially when the composite materials are in the form of thin sheets or grids. FRPs are usually bonded to the external surface of an existing masonry structure: bonding is often achieved using epoxy adhesives; recently, however, FRP materials have also been employed in combination with inorganic matrices such as lime mortars and cement, to fulfill new needs in the retrofit of historical constructions [5][6][7][8][9][10][11].
In such situation, the analysis of the vulnerability of FRPs to environmental damage is critical and should be proved.The life cycle of FRPs must be competitive with traditional materials, because of the limited available resources to maintain and protect architectural heritage structures.
There are many areas where knowledge of the behavior of FRPs for masonry applications is currently lacking.Perhaps one of the more important areas is the performance of these materials under combined effects of fatigue and ageing.In this paper, a durability study has been conducted in order to investigate the residual mechanical propertied of GFRP grids subjected to exposure in alkaline bath and fatigue cycles.
The effects of fatigue have been extensively studied for many composite products [12], mainly in areas outside of civil engineering structures (automotive and aerospace engineering).However there is little information about the products used in retrofitting of masonry structures.Notoriously, fatigue failure occurs due to the application of fluctuating stresses that are lower than the stress needed to produce failure during a monotonous loading application.The study reported in [13] considered the fatigue effects in bonded regions of FRP plates used for small bridge applications, but only few information are found in terms of residual strength of the fibers [14][15][16] and FRP-reinforced elements [17,18].In these studies the fatigue effects caused a reduction of the static strength as expected.An aggressive chemical environment such as high alkaline pore solution is another threat for the glass fibers, when Alkali Resistant (AR) solutions are not used.A significant degradation in modulus and strength can be produced by the chemical sensitivity of E-glass (alumino-borosilicate glass), ordinary basalt fibers and their organic matrices.According to the authors of [19,20] this can be addressed to three different mechanisms.They include the effect of free hydroxyl OH − ions and water molecules which cause corrosion of the fibers; a second effect due to the precipitation of hydration products, which may reduce the flexibility of the fibers and change the behavior at the interface with the inorganic matrix; the presence of chemical products which cause a densification of the matrix at the interface level which may produce a bending effect in the fibers.
Several studies demonstrated the vulnerability of glass fibers (mainly E-glass) by showing a reduced tensile strength after ageing [21][22][23][24][25].In these studies, it has been noted that the diffusion of the OH − ions out of the glass structures (leaching) is the most common dissolution reaction of the glass material.Few studies have investigated the combination of alkaline attack with fatigue cycles [26][27][28][29].According to the state of the art in the field, a need to expand the knowledge in the field of the long-term behavior of composite reinforcement used in civil engineering, is diffusely felt.The use of fibers which may meet harsh environment and potentially detrimental chemical agents should be investigated.The possible combination of mechanical and chemical agents meets a strong interest in the research community for the provision of design guidelines.According to this needs and aiming to expand the frontiers of the knowledge in the field, this study presents durability evaluations to provide new information in the field.The results of an experimental program are presented and discussed, showing the residual tensile properties of GFRP materials subjected to alkaline exposure and fatigue loading.
GFRP Specimens and Ageing Protocol
The GFRP reinforcement investigated in this experimental research is industrially produced in the form of 0°/90° grids, starting from glass fiber yarns that are impregnated at a temperature of 120 °C. The spacing of the grid in both directions is 66 mm (see Figure 1). Each strand of fibers used as raw material is made of four filaments having a TEX of 2400 each. From a chemical point of view, the fibers used are E-CR glass (Electrical/Chemical Resistance), which are basically E-glass fibers modified by the presence of alumino-lime silicate, which reduces their chemical vulnerability in an alkaline environment.
The matrix used for the impregnation was an epoxy-vinylester resin. Due to the production technology, the amount of fibers is not the same in the two directions. The GFRP net presents a different geometry in the two directions, since the fibers are cured in the form of a flat plate in the 0° direction and in the form of a twisted cord in the 90° direction (see Figure 2). The effective amount of fibrous reinforcement in the cross section was quantified by applying two different methods: (1) the technique recommended by the technical guidelines CNR-DT 203 [30], ACI 440.3R [31] and ISO 10406-1 [32], which consists of a measurement using a graduated cylinder; (2) hydrostatic weighing, i.e., measuring the volume of the specimens, an alternative method with higher accuracy. The first technique may be affected by an uncertainty ranging from 20 to 40%, expressed as a percentage of the average value of the measured sample, while the second technique reduces the uncertainty to the range of 1-2%. Using the first method, the equivalent cross sections were 8.9 mm² (flat) and 6.6 mm² (twisted); using the second method, they were 9.2 mm² (flat) and 6.7 mm² (twisted).
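Although the paper does not spell out the arithmetic, both estimates reduce to dividing a measured specimen volume by its length; the Python sketch below illustrates this, with all masses, immersed-volume readings and lengths as hypothetical placeholders rather than measured data.

```python
# Generic sketch of the two cross-section estimates described above; the
# specimen masses and lengths are hypothetical placeholders, not measured data.

RHO_WATER = 0.998e3  # kg/m^3 at ~20 degC

def area_from_graduated_cylinder(volume_rise_ml: float, length_mm: float) -> float:
    """Method (1): immersed-volume reading from a graduated cylinder."""
    volume_mm3 = volume_rise_ml * 1000.0          # 1 mL = 1000 mm^3
    return volume_mm3 / length_mm                 # equivalent area in mm^2

def area_from_hydrostatic_weighing(mass_air_g: float, mass_sub_g: float,
                                   length_mm: float) -> float:
    """Method (2): Archimedes' principle, buoyancy gives the specimen volume."""
    volume_m3 = (mass_air_g - mass_sub_g) / 1000.0 / RHO_WATER
    return volume_m3 * 1e9 / length_mm            # mm^3 / mm = mm^2

# Hypothetical flat-bar specimen, 190 mm long:
print(area_from_graduated_cylinder(volume_rise_ml=1.75, length_mm=190))   # ~9.2 mm^2
print(area_from_hydrostatic_weighing(3.25, 1.506, length_mm=190))         # ~9.2 mm^2
```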
In order to detect the effects of alkaline pore solutions, which may arise within lime mortars in the presence of high humidity, an aggressive aqueous solution was prepared as the ageing bath. The specimens were immersed for 718 h in an alkaline aqueous solution made with 0.16% (by weight) of Ca(OH)2, simulating a lime-based environment. The target pH of the solution was 12.6, which was monitored during the exposure period as shown in Figure 3. A controlled temperature of 40 ± 2 °C was maintained in the thermostatic bath during the exposure period in order to accelerate the diffusion of the aggressive solution inside the GFRP.

The constant temperature of the solution was produced and ensured by using two containers (identified as "tank 1" and "tank 2", with capacities of 120 L and 100 L respectively), which were placed inside a closed box whose heated bottom acted as the heating unit. The uniformity of the temperature in the cabinet was optimized using a ventilation system (internal recirculation), also below the bottom of the containers. After an initial period of 300 h (during which the temperatures were always within the expected range of 40 ± 2 °C), a constant temperature was assured by implementing a solution recirculation between the two containers, with a flow of 0.5%/min (recirculation of 0.5% of the total solution volume in 1 min).
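As a plausibility check on the stated target, the following back-of-the-envelope Python sketch estimates the pH of a 0.16 wt% Ca(OH)2 solution under the simplifying assumptions noted in the comments (full dissolution and dissociation, a solution density of about 1 kg/L, and Kw at 25 °C); it reproduces a value close to the monitored pH of 12.6.

```python
# Back-of-the-envelope check, assuming full dissolution/dissociation of
# Ca(OH)2, a solution density of ~1 kg/L and Kw at 25 degC (pKw = 14).
import math

M_CAOH2 = 74.09            # g/mol
conc_wt = 0.16 / 100       # mass fraction of Ca(OH)2
grams_per_litre = conc_wt * 1000.0          # ~1.6 g/L
molarity = grams_per_litre / M_CAOH2        # mol/L of Ca(OH)2
oh_molarity = 2.0 * molarity                # each formula unit gives 2 OH-

pH = 14.0 - (-math.log10(oh_molarity))
print(f"[OH-] = {oh_molarity:.4f} M  ->  pH = {pH:.1f}")   # ~12.6
```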
The temperature was monitored using a thermometer at different locations and depths in the two containers, recording the maximum and minimum values detected. Additionally, the temperature was also monitored at the surface using another thermometer.

A correlation between the real service life and the period of accelerated ageing in the laboratory (days) was provided for cementitious environments in [33]; as a first hypothesis, this correlation may be considered valid also for the presented case (Equation (1)), where: N = age in natural days; T = conditioning temperature expressed in °F; C = number of days of accelerated exposure at temperature T.

The typical aspect of the GFRP specimens after alkaline exposure is illustrated in Figure 4. After the artificial exposure in the alkaline environment, corresponding to 2.66 years in real conditions according to Equation (1), the aged specimens were washed with water and dried at room temperature (21 °C). The drying was performed in a forced-air oven without increasing the temperature, in order to avoid post-cure effects which could alter the results. A white coating is visible on the surface of the GFRP grids after exposure, due to the precipitated products produced by the presence of the alkaline ions.
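The explicit form of Equation (1) is not legible in this copy of the text; nevertheless, the figures quoted above (718 h of exposure at 40 °C taken as equivalent to 2.66 years of natural ageing) let one back out the implied acceleration factor, as in the short sketch below.

```python
# The explicit form of Equation (1) is not reproduced here; the figures quoted
# in the text (718 h at 40 degC ~ 2.66 years of natural exposure) can still be
# used to back out the implied acceleration factor.
hours_accelerated = 718.0
days_accelerated = hours_accelerated / 24.0            # ~29.9 days at 40 degC
years_natural = 2.66
days_natural = years_natural * 365.25                  # ~971.6 days

acceleration_factor = days_natural / days_accelerated
print(f"implied acceleration factor at 40 degC ~ {acceleration_factor:.1f}")  # ~32
```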
Figure 4. Typical aspect of GFRP specimens before and after the alkaline ageing.
Testing Program
The lack of a validated and comprehensive data base for the durability of GFRP grids as related to reinforcement applications for masonry structures has been recognized as a critical barrier to the use of these composite materials.In this study environments of interest are alkalinity and fatigue effects.For many masonry structures subject to cyclic loading (for example bridge structures, smokestacks and slender towers), ageing and fatigue are an important limit state that must be considered in the design process.When embedded into a lime-or cement-based mortar, GFRP grids come in contact with alkaline media.To study the potential effects (hydrolysis, pitting, hydroxylation, etc.) of degradation due to mortar pore water solution, GFRP specimens were first aged in alkaline solution and subsequently subjected to tension-tension axial fatigue tests.In this research the fatigue protocol resulted much more severe than those situations detected in real service conditions, but this allowed to better highlight the sensitivity respect to this type of ageing.
In this study, the specimens investigated with respect to long-term properties were cut from the GFRP grids along the 0° direction, meaning that the tensile properties were measured for the flat GFRP bars (see Figure 2a). In the following, all descriptions and results refer to the flat bars.

First, mechanical tests were performed to measure the tensile properties of the GFRP specimens, namely tensile strength, elastic modulus and ultimate strain. Tensile tests were also performed after the ageing/fatigue protocols. Tensile tests were first run on twenty unconditioned specimens, which provided the characterization of the GFRP material without any mechanical or chemical ageing.

The preliminary tensile tests were run using a Zwick Roell universal testing machine with a capacity of 100 kN. Tensile tests were run under displacement control at a rate of 0.2 mm/min, at a room temperature of 23 °C and a relative humidity of 50%. An electric extensometer was used to measure the strain. The ASTM D3039 [34] standard was used as the reference for the static tests. Furthermore, ten specimens were cut from the same grid, subjected only to the alkaline ageing protocol, and tested under a uniaxial monotonic tensile force in the same way as the reference specimens, without any fatigue treatment. This was done in order to detect the chemical sensitivity in the presence of alkaline ions, in the absence of any other agent or harsh environment.
Then a set of sixteen specimens was cut from the same GFRP grid as the unconditioned specimens, to be subjected to fatigue loads with and without previous alkaline exposure. In total, eight specimens were subjected to alkaline exposure before the fatigue cycles, while the other eight were fatigued without any chemical ageing.

Fatigue tests were performed on a servo-hydraulic E3000 Instron machine (Instron, Norwood, MA, USA) with a 3 kN capacity load cell. The surface temperature was monitored using a voltmeter with a sensitivity of 0.15 °C. The load amplitude and test frequency of the fatigue treatment were constant (1.0 kN and 7.5 Hz, respectively). Tests were conducted at a room temperature of 21 °C in stress control, using a normalized maximum tensile stress (σmax/σult) of approximately 0.25, 0.35, 0.45 and 0.55 (corresponding to maximum tensile loads of 1, 1.5, 2, and 2.5 kN). Table 1 summarizes the experimental program of static tensile tests, while Table 2 illustrates the fatigue test protocol, including the number of specimens tested, the maximum fatigue load and the number of scheduled cycles. Figures 5 and 6 show the fatigue machine and the loading histories used in this study.
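A small Python sketch of the resulting load levels is given below; the equivalent flat-bar cross section of 9.2 mm² is the hydrostatic value quoted earlier, while the constant 1.0 kN is read here as the tension-tension load range, which is an interpretation rather than a statement made in the text.

```python
# Stress levels of the fatigue protocol. Assumptions: equivalent flat-bar
# section of 9.2 mm^2 (hydrostatic method) and the constant 1.0 kN read as the
# tension-tension load range (max - min), which is an interpretation.
AREA_MM2 = 9.2
LOAD_RANGE_KN = 1.0

for peak_kn in (1.0, 1.5, 2.0, 2.5):
    min_kn = peak_kn - LOAD_RANGE_KN
    sigma_max = peak_kn * 1000.0 / AREA_MM2      # MPa, since N / mm^2
    r_ratio = min_kn / peak_kn                   # stress ratio R = min/max
    print(f"peak {peak_kn:.1f} kN: sigma_max ~ {sigma_max:5.1f} MPa, R = {r_ratio:.2f}")
```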
For the fatigue tests, GFRP specimens were dry cut from the grid with final dimensions of approx. 190 mm using a diamond saw. A clear distance between grips of approximately 90 mm was used.
The specimen shape was bar form and, within the length of the GFRP specimens, two grid joints were included as the grid spacing was 66 mm.
Strength and modulus degradation was studied by performing monotonic tensile static tests at the end of the ageing and fatigue treatments, for those specimens which did not fail before the scheduled cycles number.In total, thirty specimens were tested under uniaxial tension without fatigue loads, while totally sixteen specimens were subjected to fatigue loads; four specimens that did not fail during the fatigue tests were subjected to further tensile tests until failure.An alphanumeric code was used for sample identification (ID).This was made of a progressive number ranging from 1 to 16 and a two-letter designation (NT for unconditioned and AG for aged specimens).The final letter F was used for those specimens that were subjected to fatigue loads.
Experimental Results
The results presented herein show how the tensile strength and the Young's modulus were affected, by comparing the results obtained for specimens subjected to alkaline ageing, fatigue cycles, fatigue cycles combined with alkaline ageing, and for unconditioned specimens.
Tensile Tests of Unconditioned and Aged Specimens
The experimental results related to the tensile properties of the 20 unconditioned reference specimens are shown in Table 3. As can be observed, the standard deviation and coefficient of variation are within a range of about 10% for the elastic modulus and about 20% for the tensile strength, which can be considered compatible with the experimental error. The typical failure modes and details of fiber breakage are reported in Figure 7. In all cases the tensile failure of the fibers was accompanied by intra-laminar cracking in the polymeric matrix that developed along the direction of the applied load. The breakage of the glass fibers occurred at different positions, even if the complete failure was mainly located in the regions near the nodal junctions. The stress-elongation behavior is linear or pseudo-linear up to failure, as shown in Figure 8. Aged specimens subjected to the alkaline environment exhibited the same failure modes and the same linear behavior in terms of stress-strain curves. Their elastic modulus and ultimate tensile properties are shown in Table 4. Also in this case, the standard deviation and coefficient of variation values remained within the range of the experimental error. From a comparison between aged and reference specimens, no reduction was found in terms of elastic modulus, while a reduction of 18% was found in terms of tensile strength. This can be attributed to the weakening of the fibers in contact with the alkaline ions and, at the same time, to damage of the matrix, which was not totally impermeable to the diffusion of the harsh aqueous solution.
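The individual test values behind Tables 3 and 4 are not reproduced in this copy; the short sketch below simply shows how the reported summary statistics (mean, standard deviation, coefficient of variation) and the strength retention of aged versus reference specimens are obtained, using hypothetical placeholder values.

```python
# The individual values of Tables 3-4 are not reproduced here; the arrays
# below are hypothetical placeholders used only to show how the reported
# statistics (mean, SD, CoV) and the strength retention are computed.
import numpy as np

unconditioned_mpa = np.array([438., 395., 472., 401., 455.])   # placeholder
aged_mpa          = np.array([355., 340., 372., 338., 362.])   # placeholder

def summarize(name, values):
    mean, sd = values.mean(), values.std(ddof=1)
    print(f"{name}: mean = {mean:.0f} MPa, SD = {sd:.0f} MPa, CoV = {100*sd/mean:.1f}%")

summarize("reference", unconditioned_mpa)
summarize("aged     ", aged_mpa)

retention = aged_mpa.mean() / unconditioned_mpa.mean()
print(f"strength retention = {100*retention:.0f}%  (loss = {100*(1-retention):.0f}%)")
```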
The damage to the fibers is caused by the diffusion of alkaline ions and the breaking of the Si-O-Si bond, which is the main structure of the glass fiber. The degradation mechanisms affecting glass fibers are different and combined: filament corrosion due to OH− ions may occur; a sort of static fatigue can cause breakage through the growth of existing surface defects related to the OH− attack; and the growth of densification products, due to the precipitation of crystals in the interstices between the filaments of the fiber, may cause localized damage. Static fatigue failure is the most common mode. In general, the damage to the glass fibers is closely related to the chemical composition of the alkaline solution to which they are exposed. As for the matrix, it is important that it does not degrade, so that it can protect the fibers and transmit the loads to them. However, the matrix, if exposed to the alkaline solution, may undergo plasticization and/or swelling due to its diffusion inside, resulting in a reduced ability of the matrix to perform its role. After the performance of this testing program, even if a reduction of mechanical properties was expected, it is felt that the tested specimens have shown a high retention of the mechanical properties, which are strongly related to the design values that should be assumed in the long term.
Specimens Subjected to Tension-Tension Fatigue
The results of the fatigue tests are presented in this section in terms of the number of cycles to failure and the residual life of the GFRP. Table 5 illustrates the fatigue test results, together with the tensile strength measured under monotonic load for those specimens that did not fail after the final loading history (10^6 cycles). From Table 5 it can be noted that only a small number of GFRP specimens completed the scheduled number of loading cycles. Most of the specimens, whether aged or unconditioned, failed in tension during the fatigue cycles. However, some interesting observations can be made. The failure modes in the broken cross sections were similar to those of the static experiments carried out on unconditioned specimens: all failures occurred in the area around the joint, demonstrating that small deviations of the glass fibers from the direction of the tensile load may represent a critical defect. Glass fibers in both grid directions (0°/90°) have to deviate in the joint area, and this may produce significant reductions in tensile strength. Furthermore, the tensile failures were brittle, without warning from large deformations or visible cracks on the specimen surfaces. Table 6 illustrates representative failed specimens and failure details for GFRP specimens that failed under the application of the fatigue cycles.

Of particular interest was the number of loading cycles sustained before failure; in fact, this number has been used to indicate the mechanical degradation. Results were obtained for a range of loading cycles between 23,539 and 638,157. It is evident that the GFRP specimens are sensitive to the tension-tension fatigue treatment. Most of the aged specimens sustained a lower number of loading cycles before tensile failure compared to the unconditioned specimens, indicating a degradation effect. The stress-strain response was almost linear-elastic in the load range used for fatigue testing.

Furthermore, when the tension-tension fatigue tests had a lower maximum load, the GFRP specimens could sustain a larger number of loading cycles, indicating the significant effect of the load level. In detail, only unconditioned specimens with a maximum fatigue load of 1 kN (corresponding to a normalized maximum tensile stress σmax/σult of approximately 0.25) completed the scheduled number of cycles.

Because the GFRP specimens were made of an organic polymeric matrix with elastic glass fibers, the typical degradation commonly designated as plasticity for metals cannot occur. However, other irreversible mechanisms may be activated during a fatigue test, such as matrix cracking and fiber/matrix debonding. From the experimental data it was seen that fatigue loads up to 1.0 kN did not produce important degradation: after the fatigue test, the monotonic tensile tests on specimens 13NT-F and 14NT-F showed an ultimate tensile load comparable with that of the unconditioned specimens. For the only specimen that was fatigued up to 2.0 kN and remained unbroken, the residual tensile strength strongly decreased, to about 25% of that of the reference GFRP specimens. Specimen 16AG-F can be considered a singular result, since its residual strength lies in the upper quartile of the strength values of the reference GFRP specimens. In all cases, it is strongly felt that the peak load in the fatigue history plays a decisive role in determining the residual life of the tested GFRP reinforcement.
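To make the load-life trend concrete, the sketch below fits a simple log-linear (Basquin-type) relation between peak load and cycles to failure; the load/cycle pairs are hypothetical placeholders chosen only to lie within the reported range of roughly 2.4 × 10^4 to 6.4 × 10^5 cycles, since the per-specimen pairings of Table 5 are not reproduced here.

```python
# The per-specimen pairings of peak load and cycles to failure (Table 5) are
# not reproduced here; the pairs below are hypothetical placeholders within
# the reported range, used only to illustrate a log-linear (Basquin-type) fit.
import numpy as np

peak_load_kn      = np.array([1.5, 1.5, 2.0, 2.0, 2.5, 2.5])               # placeholder
cycles_to_failure = np.array([6.4e5, 3.1e5, 1.6e5, 9.0e4, 4.5e4, 2.4e4])   # placeholder

# Fit log10(N) = a + b * P  (a simple load-life trend line).
b, a = np.polyfit(peak_load_kn, np.log10(cycles_to_failure), 1)
print(f"log10(N) ~ {a:.2f} {b:+.2f} * P[kN]")

# Predicted life at a 2.25 kN peak load (interpolation only):
print(f"N(2.25 kN) ~ {10**(a + b*2.25):,.0f} cycles")
```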
Conclusions
The results of an experimental program that studied the durability of GFRP reinforcement with respect to alkaline environment and fatigue loads were presented and discussed in the paper. Coupons cut from pre-cured GFRP grids made of E-CR glass fibers and an epoxy-vinylester matrix were tested as reference and aged coupons. Alkaline ageing and fatigue loads were studied both alone and in combination with each other. Severe protocols were applied to emphasize the sensitivity of the tested materials to the studied chemical and mechanical agents.

After conditioning in an alkaline bath with pH 12.6 for 718 h at 40 °C, it was found that the loss in terms of elastic modulus was negligible, while the maximum reduction in terms of residual strength was 18%. In this perspective, the tested GFRP exhibited a high chemical resistance.

When fatigue loads were applied, it was seen that the maximum load level applied during the tension-tension fatigue cycles played a significant role. The number of cycles to failure significantly decreased when maximum loads up to 2.5 kN were applied. For peak loads of 1.0 kN, unaged specimens were able to show a residual life, and a reduction of about 25% was found in terms of tensile strength with respect to the reference GFRP specimens. The combination of fatigue and alkaline ageing was found to be very detrimental, since the fatigue life, in general, decreased after alkaline exposure. This is because the micromechanical damage due to the chemical attack is strongly enhanced by the presence of cyclic loads.

Future research is needed in this field, even if it is felt that important information is provided to researchers and practitioners through the results presented herein.
Figure 6. Test protocol for fatigue cycles with different amplitudes.
Figure 7. Typical tensile failures of GFRP specimens. (a) and (b) near the joint; (c) tensile failure; (d) detail of the specimen's section after tensile failure.
Table 1 .
Monotonic tensile tests matrix.NT for unconditioned and AG for aged specimens.
Table 2 .
Fatigue tests matrix.F was used for those specimens that were subjected to fatigue loads.
Table 3 .
Tensile tests results for unconditioned specimens.
Table 4 .
Tensile tests results for aged specimens in alkaline bath.
Table 5 .
Fatigue tests results of reference and aged GFRP.
Table 6 .
Failure modes of unaged and aged GFRP under fatigue loads.
Gauge group and reality conditions in Ashtekar's complex formulation of canonical gravity
We discuss reality conditions and the relation between spacetime diffeomorphisms and gauge transformations in Ashtekar's complex formulation of general relativity. We produce a general theoretical framework for the stabilization algorithm for the reality conditions, which is different from Dirac's method of stabilization of constraints. We solve the problem of the projectability of the diffeomorphism transformations from configuration-velocity space to phase space, linking them to the reality conditions. We construct the complete set of canonical generators of the gauge group in the phase space which includes all the gauge variables. This result proves that the canonical formalism has all the gauge structure of the Lagrangian theory, including the time diffeomorphisms.
I. INTRODUCTION
In recent papers [1][2][3] we have discussed some special features exhibited by the gauge groups in Einstein and Einstein-Yang-Mills theories and in a real triad approach to general relativity when their formulations are brought from configuration-velocity space (the tangent bundle T Q) to phase space (the cotangent bundle T * Q). Our viewpoint is that the configuration-velocity space and phase space formulations are equivalent (see [4]). We found that some of the generators of the diffeomorphism group in the tangent bundle are not projectable to the cotangent bundle. To make them projectable, the otherwise arbitrary functions in the gauge group generators must depend on the field variables, particularly on the lapse function and shift vector of the metric-though this dependence still allows all infinitesimal diffeomorphisms to be represented. In Einstein-Yang-Mills and triad theories, diffeomorphisms must be accompanied by other gauge transformations in order to be projectable. When projectability is achieved, we have the full proof that indeed the gauge group is the same in configuration-velocity space as in phase space; this identity of the gauge group is not widely recognized.
Here we study in detail the issue of the gauge group in the Ashtekar complex formulation [5][6][7] of canonical gravity. Ashtekar's use of a self-dual connection makes this formulation very similar to a Yang-Mills theory, and so we expect to get and do get results similar to our previous results. However, a somewhat unusual aspect of this program is the use of a complex Lagrangian and a complex Hamiltonian. The fact that Ashtekar's connection is complex introduces essential novelties. To recover real gravity, reality conditions must be imposed, and we make a thorough examination of them. These conditions are not constraints in a Dirac sense [8,9]. We develop the theoretical framework for a stabilization algorithm to maintain the reality conditions under time evolution. This algorithm is different from the Dirac stabilization algorithm for constraints because of the complex character of the Hamiltonian, though our treatment is conceptually close to Dirac's method.
Recently, generalizations of Ashtekar's complex formalism have been introduced. In one approach it has been shown that general relativity can be reformulated as a one-parameter family of real connections [10][11][12]. When the otherwise real parameter takes the value i, one recovers the Ashtekar complex connection. However, one apparent drawback to this real approach is that the scalar constraint loses the simple form it assumes in the complex regime. This could constitute a serious obstacle for the quantization program, though it is true that difficulties in constructing a Hilbert space satisfying the reality conditions in the complex Ashtekar program are thereby circumvented. A second approach undertakes a generalized Wick transform of the complex connection to a real connection [13,14]. This transform has been shown under certain circumstances to be equivalent to an analytic continuation to imaginary time [15], and thus to a spacetime with Riemannian signature. The advantage one hopes to gain through this transform is that it may be possible to solve the simpler scalar constraint in the Lorentzian sector and then implement the Wick transform, thus satisfying the reality conditions.
The argument we put forth here is that the relevance of the complex Ashtekar approach has certainly not diminished. A major theme in this paper is the relation of the scalar constraint to spacetime diffeomorphisms.
Our purposes in this paper are twofold: On the one hand, we will clarify the structure of the generators of the gauge group in the complex Ashtekar formulation of canonical gravity. On the other hand, we will discuss fully the stabilization algorithm for the reality conditions. It is not surprising-perhaps-that both aspects, gauge group and reality conditions, are related: Any symmetry, including gauge symmetries must preserve the reality conditions. We will exhibit the links that exist between these conditions and the conditions of projectability from configuration-velocity to phase space of gauge variations. We distinguish between metric reality conditions (only the full spacetime metric itself must be real) and triad reality conditions (the spatial orthonormal triad vectors, as well as the metric, must be real) as in [16,17]. We will see that the rotation gauge group (for the triads) is reduced from SO(3, C) to SO(3, R) to fulfill the triad reality conditions. Our results concerning the reality conditions do agree with those of [17]; our contribution is that we make clear when the stabilization algorithm for the reality conditions is terminated and how it applies in a general sense. Also, we give a thorough discussion of the elimination of part of the gauge freedom when we extend reality conditions from metric to triad.
We explicitly assume that the connection A i µ is complex but also consider the possibility that all variables in phase space are complex. It is significant that all the gauge variables, that is the lapse , the shift , and the time component of the connection A i 0 , are retained as canonical variables in the analysis of gauge symmetries which we will present. In particular, it could well prove useful in quantum gravity to retain A i 0 as an operator. We would thus contemplate holonomies, parallel transporters of SU(2), in directions off the constant-time hypersurfaces. We presume that all functions, including the Hamiltonian, are analytic, and that phase space has a standard Poisson bracket structure. Physical reasons require that some of the variables must be real. Then it is necessary to impose restrictions on the initial conditions and to restrict gauge freedom in such a way that time evolution will keep real these variables. These restrictions are called the reality conditions. This paper is organized as follows: The stabilization algorithm for the reality conditions is presented in Section II. The algorithm is general in the sense that it can be applied to any complex theory in which physical reasons require that some of the variables be real. In Section III, the Ashtekar approach is succinctly introduced with some results and notations. The canonical approach is undertaken in Section IV, and in Section V we apply the reality condition algorithm to the case of Ashtekar canonical gravity. In Sections VI and VII we solve the problem of finding the projectable gauge transformations and their canonical generators, finding in the process some interesting relations with the reality conditions. We discuss the counting of degrees of freedom in Section VIII. We devote Section IX to conclusions.
II. STABILIZATION ALGORITHM FOR REALITY CONDITIONS-GENERAL THEORY
In this Section we provide the theoretical setting for what properly must be called the stabilization algorithm for the reality conditions. This setting is applicable to any dynamical theory that makes use of complex variables but requires that some of these variables be real to be physically acceptable. In other words, initial conditions must fix real values for these variables, and time evolution must preserve the reality.
Reality conditions are not constraints in the Dirac sense. The difference comes from the fact that reality conditions do not place restrictions on the variables of the formalism but only on the values of some real or imaginary parts of these variables. The difference is made even clearer when we consider stabilization procedures. If the Dirac Hamiltonian is, say, H, the stabilization of a (time independent) Dirac-type constraint φ is to require the tangency of the dynamical vector field {−, H} on the surface defined by φ = 0: This requirement may introduce new constraints or the determination of some arbitrary functions in H. The stabilization of a Dirac constraint follows this procedure whether H is real or complex.
Instead, if we have a (time independent) reality condition, such as the vanishing of the imaginary part of a quantity f , ℑf = 0, its stabilization involves, at least, the requirement ℑ{f, H} = 0 . This is not a tangency condition. Moreover, the expression {ℑf, H} makes no sense at all in the formalism, because the bracket is defined for complex phase space variables and cannot be applied to real or imaginary parts of these variables.
Before developing the correct stabilization for reality conditions, we briefly review the basics of the stabilization algorithm for Dirac constraints. Similarities and differences between the two stabilization procedures will become evident.
A. Stabilization of Dirac constraints
Dirac's method applies both to the Lagrangian and Hamiltonian formalisms, but here we will only consider its implementation in the latter case. Consider a dynamical evolution in phase space with some gauge freedom. We start with the canonical Hamiltonian H c , whose pullback to configuration-velocity space is the Lagrangian energy E L = q̇ i (∂L/∂ q̇ i ) − L of (2.1), where L is the Lagrangian, which we take to be time-independent, {q i } are the configuration components, and ˙ is d/dt. The Dirac Hamiltonian is H D = H c + λ µ φ µ , where the φ µ are the primary constraints, µ = 1, . . . , n, and the λ µ are Lagrange multipliers (arbitrary functions in principle) that describe the gauge freedom available to this system. The first step in Dirac's method is to ask for the dynamics to result in trajectories tangent to the primary constraint surface. This requirement of tangency may lead to the determination of some of the multipliers λ µ and the appearance of new constraints. The next step is again to require that the trajectories be tangent to the new constraint surface. The stabilization procedure continues and eventually is completed. We analyze this procedure from the point of view of finite time evolution for application in Subsection II C. To make things simpler, as an example, we assume that none of the multipliers λ µ are determined at any step of the above procedure. Then, as far as the time-evolution of the constraints is concerned, we can use the time-independent H c as the dynamical generator. We start with the primary constraints φ µ . The time evolution operator from time zero to time t is given in (2.2), with the expansion (2.3), where x(t) := (q(t), p(t)) is the trajectory in phase space satisfying the canonical equations of motion. To preserve the primary constraints under finite evolution we must require that the evolved constraints vanish for any t. This is the same as the infinite set of restrictions (2.4); note that n = 0 corresponds to the primary constraints φ µ = 0. In general, the n = 1 level of stabilization in (2.4), {φ µ , H c } = 0, may introduce new independent constraints (secondary constraints) φ (1) µ ; the n = 2 level then requires {φ (1) µ , H c } = 0, which is Dirac's requirement that the vector field {−, H c } be tangent to the new constraint surface (defined by all the primary and secondary constraints). It is worth noticing that in general the algorithm to get new constraints will eventually stop, and only a finite number of the requirements in (2.4) will be relevant.
For instance, if there are no tertiary constraints, the n = 2 level of stabilization is satisfied when the primary and secondary constraints are taken into account. Then, {φ (1) µ , H c } is a linear combination of the primary and secondary constraints. All other terms in (2.4) vanish under the condition that all of the primary and secondary constraints are satisfied. There are exceptions to this casual statement, in particular when some of the constraints are not effective (an effective constraint has nonvanishing differential on the constraint surface), and we discuss them in the next Subsection. With these exceptions, the stabilization procedure terminates when we find a level of stabilization that is already satisfied under the requirements introduced in the previous levels.
The general situation is when we must consider time dependence in H D (because of the λ µ ). In this case, H D (t 1 ) does not necessarily have vanishing Poisson bracket with H D (t 2 ), for t 1 ≠ t 2 . The time evolution operator (2.2) is then replaced by its time-ordered counterpart (2.5), where T is the time-ordering operator: on a product of two factors it acts as T [H D (t 1 )H D (t 2 )] = H D (t > )H D (t < ), with t > = max(t 1 , t 2 ) and t < = min(t 1 , t 2 ) (this expression generalizes to any order). The levels of stabilization in (2.5) now become (2.6), with t 1 < t 2 < t 3 < . . .. These requirements (2.6) may determine some of the arbitrary functions in H D or they may bring forth further constraints. Once an arbitrary function gets determined, it can be replaced by its expression in phase space for all remaining levels of stabilization.
The sequence (2.6) eventually terminates when the stabilization equations for all the constraints no longer determine new constraints: Higher stabilization equations are automatically satisfied.
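To make the levels of stabilization concrete, the following sketch iterates the Poisson brackets {φ, H c }, {{φ, H c }, H c }, . . . for a toy model that is not taken from this paper (the Hamiltonian, the variable names, and the constraint are purely illustrative assumptions); the successive brackets must then be inspected by hand to decide which are new constraints and which merely vanish on the surface already found.

```python
import sympy as sp

# Toy constrained system (illustrative only, not the Ashtekar case):
# H_c = p**2/2 + lam*q, with primary constraint phi = p_lam.
q, p, lam, p_lam = sp.symbols('q p lambda p_lambda')
qs, ps = (q, lam), (p, p_lam)

def poisson(f, g):
    """Canonical Poisson bracket {f, g} on the toy phase space."""
    return sum(sp.diff(f, Q)*sp.diff(g, P) - sp.diff(f, P)*sp.diff(g, Q)
               for Q, P in zip(qs, ps))

H_c = p**2/2 + lam*q
phi = p_lam                      # primary constraint

expr = phi
for level in range(4):           # levels n = 0, 1, 2, 3 of the stabilization
    print(level, sp.simplify(expr))
    expr = poisson(expr, H_c)
# Output: p_lambda, -q, -p, lambda.  The brackets -q and -p are new
# (secondary and tertiary) constraints; lambda = 0 then closes the algorithm
# for this toy model, since its bracket with H_c vanishes identically.
```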
B. An aside on ineffective constraints
There is an exception to the rule, just enunciated, that says that the stabilization algorithm is finished when, at a given level, no new constraints appear. The expression {φ (1) µ , H} = 0 is meant to be Dirac's requirement that the vector field {−, H} be tangent to the constraint surface defined by the primary and secondary constraints. This is not an accurate statement when a secondary constraint is ineffective (the primary constraints are always taken in effective form), that is, if its differential vanishes on the constraint surface. For instance, consider the effective constraint φ. To make it ineffective we can square it to get f = φ 2 . The two constraints still define the same surface, φ = 0 ⇐⇒ f = 0. However, the vanishing of {f, H} does not imply the tangency of {−, H} to the surface f = 0 but rather a triviality, because {f, H} = 2φ {φ, H} automatically vanishes on f = 0. This reflects the ineffective character of f (but notice that {f, H} cannot be expressed as a linear combination of f with the coefficient being regular at the surface f = 0).
Because of the possible presence of ineffective constraints, it may be true that one level of stabilization does not bring new restrictions, and yet subsequent levels do. In fact, in our example with f ineffective, the next level of stabilization produces {{f, H}, H} = 2φ{{φ, H}, H} + 2{φ, H} 2 . This could introduce a new ineffective constraint {φ, H} 2 = 0 that defines the same surface as {φ, H} = 0.
The moral is that if we have ineffective constraints, we must take special precautions that the tangency conditions are correctly implemented and that all levels of equation (2.6) are examined.
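The squared-constraint example above can be checked symbolically. The sketch below uses a toy phase space (q, p) with H = p²/2 that is purely an illustrative assumption; it reproduces the pattern {f, H} = 2φ{φ, H} and {{f, H}, H} = 2φ{{φ, H}, H} + 2{φ, H}² for φ = q.

```python
import sympy as sp

q, p = sp.symbols('q p')

def poisson(f, g):
    """Poisson bracket on the toy phase space (q, p)."""
    return sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)

H   = p**2 / 2       # free-particle toy Hamiltonian (illustrative assumption)
phi = q              # an effective constraint
f   = phi**2         # the same surface, written ineffectively

lvl1 = sp.expand(poisson(f, H))      # 2*p*q : vanishes trivially on f = 0
lvl2 = sp.expand(poisson(lvl1, H))   # 2*p**2: a genuinely new, again ineffective, restriction
print(lvl1, lvl2)
```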
C. Stabilization of reality conditions
Suppose that our reality condition requires that the functions f α , for some set of indices α, must be kept real under time evolution. We begin, for simplicity, with the case when the Lagrangian multipliers play no part, as in Subsection II A; then we may work with the time-independent canonical Hamiltonian H c . Expressed in the notation introduced above, the reality requirement is ℑf α (x(t)) = 0 for any t which, using the evolution operator (2.3), can be expanded in terms of iterated Poisson brackets with H c . Therefore, in addition to the primary reality condition, we get the levels of stabilization (2.8), ℑ{f α , H c } = 0, ℑ{{f α , H c }, H c } = 0, and so on. We call these conditions the secondary reality condition, tertiary reality condition, and so on. Notice in fact that all these requirements need only to hold on the constraint surface, because the complete dynamical setting is given by the evolution operator (2.2) supplemented with the Dirac constraints. One striking difference between these conditions (2.8) and the Dirac stability conditions (2.6) is that the vanishing of one level of stabilization due to the fulfillment of the previous ones does not guarantee that the subsequent levels will also vanish. For instance, let us suppose that ℑ{f α , H c } = η α β ℑf β for a real matrix η α β (in field theory, the summation over like indices implies a spatial integration, also), so that the secondary reality condition is satisfied when the primary one is. However, this relation is of no value in implementing the tertiary condition. Instead, if we had {f α , H c } = η α β f β for a real matrix η α β such that {η α β , H c } = 0, then indeed the stabilization algorithm would have been over. Of course this is only a sufficient condition.
In a more realistic case we would use H D , which is in general time dependent. Considering how we arrived at (2.8), which plays, for the reality conditions, the role analogous to (2.4) for Dirac constraints, it is easy to get an analog for (2.6). In fact we can use here all the results obtained from the Dirac analysis, in particular the determination in phase space of some of the Lagrange multipliers. This means that we can start with a first class (f c) Hamiltonian H f c D , where we have assumed for simplicity that the first n 1 Lagrange multipliers are the ones that get determined as functions λ µ c in phase space through the Dirac stabilization algorithm. In this general case the reality conditions may lead to a further reduction of the gauge freedom present in H f c D , that is, to a partial determination of the remaining Lagrange multipliers, for instance their real or imaginary parts. This is what will happen with the triad reality conditions for the Ashtekar formulation, to be analyzed in Section V.
It is obvious that nothing in this Section depends on the theory being formulated in phase space. Indeed, we could replace {−, H c } + λ µ {−, φ µ } everywhere by X + λ µ Y µ , with X and Y µ being vector fields in some given space (for instance configuration-velocity space).
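The loop below runs the same kind of iteration for a reality condition rather than a constraint, on a deliberately trivial complex toy system (a complex harmonic oscillator; none of this is the Ashtekar system, and all names are illustrative assumptions). Splitting each level into real and imaginary parts exhibits the pattern described above: the conditions repeat, up to a real matrix (here just a sign), so the algorithm terminates.

```python
import sympy as sp

q, p = sp.symbols('q p')                        # complex canonical pair
qR, qI, pR, pI = sp.symbols('q_R q_I p_R p_I', real=True)
split = {q: qR + sp.I*qI, p: pR + sp.I*pI}

def poisson(f, g):
    """Poisson bracket with respect to the complex variables q, p."""
    return sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)

H = (p**2 + q**2) / 2                           # toy complex Hamiltonian (assumption)
f = q                                           # reality condition: Im q = 0

expr = f
for level in range(4):
    cond = sp.simplify(sp.im(expr.subs(split)))
    print(level, cond)                          # q_I, p_I, -q_I, -p_I
    expr = poisson(expr, H)
# The secondary condition Im p = 0 is new; from then on each level is a real
# multiple of an earlier one, which is the termination criterion in the text.
```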
III. THE ASHTEKAR LAGRANGIAN
One way to present the Ashtekar Lagrangian density is [18][19][20] where g is the determinant of the spacetime metric; E µ I are the tetrad components, µ being a spacetime index and I an internal index; and 4 F IJ τ σ is the curvature tensor associated with the Ashtekar connection 4 A IJ µ . We use the standard definitions of these quantities [22], and we do not repeat these definitions here, because we will be working in a 3 + 1 decomposition and will give specific definitions of our variables below.
L A is interpreted in a Palatini-like formalism: The components of the self-dual complex connection are taken to be independent variables. Their equations of motion determine them in terms of the other variables (and their derivatives). This determination is similar to the determination of the Christoffel coefficients in the Einstein-Palatini version of general relativity (see [23] for a good review of actions for gravity). Variables having this property of being determined by their own equations of motion are usually called auxiliary variables. When this dynamical determination of the Ashtekar connection is substituted into the Lagrangian we get the standard Ashtekar Lagrangian, which is equivalent to the Einstein-Hilbert Lagrangian.
We are interested in the canonical description (in phase space). Therefore we will write the action in a 3 + 1 decomposition of the variables. The contravariant spacetime metric is written in terms of the lapse function N and shift vector N a , and a triad of orthonormal vectors T a i (a, b are spatial indices; i, j are internal indices, raised or lowered with δ ij , so that repeated internal indices imply a sum even if both are raised or lowered): The triad vectors and the (unit) normal vector to the constant-time hypersurfaces constitute an orthonormal tetrad.
We represent the components of the orthonormal spatial one-forms by t i a , so that the covariant three-metric is given by g ab = t i a t i b . It turns out to be convenient to take one set of canonical variables to be the triad vectors multiplied by the square root of the determinant of the three-metric. As has now become conventional, we represent densities of arbitrary positive weight under spatial diffeomorphisms by an appropriate number of tildes over the symbol. For negative weights we place the tilde(s) below the symbol. Hence we define the densitized triad as ∼ T a i := (det g ab ) 1/2 T a i . In the Ashtekar approach the connection is self-dual. An antisymmetric tensor is self-dual when it coincides, up to the appropriate factor of i, with its dual, which is constructed with ǫ IJKL , the four-dimensional Levi-Civita symbol defined by ǫ 0123 = −1. Because of self-duality, the four-connection 4 A IJ µ in (3.1) is determined by the independent components A i µ , ǫ ijk being the Levi-Civita symbol. In the 3 + 1 decomposition the Ashtekar Lagrangian becomes (3.4) (˙ is ∂/∂x 0 , and we will also use a subscript comma for partial derivatives), where F jk ab =: ǫ ijk F i ab is the three-dimensional Riemann tensor associated with the Ashtekar connection, and where the covariant derivative D b is defined using the Ashtekar connection: Its action on the densitized triad is, for example,
5)
3 ∇ b being the covariant derivative based on the 3-metric g ab . It is convenient to take the densitized lapse N ∼ as an independent variable, but for convenience, some equations will be written in terms of N itself; likewise it will prove convenient to use both densitized and undensitized variables in some of our results.
Two observations should be made at this point. First: From the fact that L in (3.4) does not depend on the velocities Ṅ ∼ , Ṅ a , Ȧ i 0 , we can conclude (details are given in [1]) that the necessary and sufficient condition for a function f in configuration-velocity space T Q to be projectable to phase space T * Q is that f does not depend on these velocities.
Second: The fact that the independent components of the Ashtekar connection play the role of auxiliary variables tells us that their equations of motion give (3.6), where Ω i µ := 1 2 ǫ ijk Ω jk µ and Ω 0i µ are the components of the spin connection, that is, the Ricci rotation coefficients. In particular, Ω ij a are the three-dimensional Ricci rotation coefficients formed from the triad, with 3 Γ b ca being the Christoffel symbols. For future use, we define the covariant derivative using the three-dimensional Ricci coefficients, which applied to ∼ T a i gives zero. The other components of the spin connection, given in (3.7), involve time derivatives, with K ab the extrinsic curvature of the constant-time hypersurfaces. Equations (3.6, 3.7) will be useful when we consider the reality conditions and in determining variations of A i 0 . Now we will continue with the canonical version of the theory.
IV. THE CANONICAL HAMILTONIAN APPROACH
The Legendre map FL : T Q → T ∗ Q is defined as usual; we are working locally, with q, q̇ being coordinates in configuration-velocity space and q, p being coordinates in phase space, as is conventional.
Our configuration variables and their conjugate canonical momenta are as follows: The primary constraints, consequences of the Lagrangian definition of the momenta, are: The canonical Hamiltonian H c is defined as a function in phase space such that its pullback to tangent space under the Legendre map is the Lagrangian energy E L from (2.1), that is, E L = FL * (H c ). H c is uniquely defined up to primary constraints. We take The constraints P i a −iA i a = 0 and ∼ π a i = 0 are second class in the sense of Dirac and can be readily disposed of; in the process, we eliminate the conjugate variables A i a and ∼ π a i . The recipe is to put A i a = −iP i a and ∼ π a i = 0 everywhere in the Hamiltonian. In fact, we don't even need to substitute −iP i a for A i a : Since P i a was not present in H c , we can just take iA i a to be the momentum variable canonically conjugate to ∼ T a i . The rest of the variables are pairs of conjugate variables whose Dirac brackets coincide with the Poisson brackets.
We have achieved a canonical Hamiltonian H c , and a number of canonical variables with Poisson brackets (actually Dirac brackets), The Dirac Hamiltonian, which governs the time evolution of the system, is constructed by adding to H c the primary constraints multiplied by arbitrary functions: The second class primary constraints having been already eliminated, all the remaining primary constraints are first class. The equations of motion derived from H D for ∼ T a i and A i a are∼ The equations obtained from the stabilization of the primary first class constraints yield the three secondary constraints The canonical Hamiltonian written in terms of these constraints is Finally, the equations for the rest of the variables, They inform us that these variables are arbitrarygauge-variables. The secondary constraints (4.5) are all first class (their algebra will be displayed in Section VII). No more constraints appear. Let us observe that the Lagrangian equations of motion for ∼ T a i and A i a are the same as the Hamiltonian equations of motion. The constraints (4.5) appear in configurationvelocity space as the Lagrangian equations of motion for the variables N ∼ , N a , and A i 0 . There are no equations for the time derivatives of these variables, indicating that they are gauge variables. Also, observe that equations (3.6) have the same contents as (4.4a) and (4.5c). Now we are ready to apply our stabilization procedure for the reality conditions to Ashtekar's version of canonical gravity.
A. The metric reality conditions
At the very least, the metric tensor should be real: the primary metric reality conditions are It is clear that, according to (4.3), (5.1a) and (5.1b) fix the arbitrary functions λ ∼ and λ a to be real. These equations do not have any further consequence. Requirement (5.1c) is equivalent to ℑg ab = 0. Notice that these reality conditions will also preserve the Lorentzian signature of the metric (presuming that N and det(T a i ) remain nonzero). Before applying our method of stabilization, let us recall the last result in Section III: The components of the Ashtekar connect ion are auxiliary variables for the Lagrangian (3.4). Recalling the definitions (3.7d), we can write a portion of the equations of motion (3.6) as Thus, if we define the quantities M ab as then this portion of the equations of motion becomes K ab is a functional of the three-metric that is real and symmetric. Thus we find here a requirement that M ab must be real and symmetric. The symmetry is already guaranteed by the constraint (4.5c). That M ab must be real is in fact the content of the secondary reality conditions, ℑ{g ab , H c } = 0, as we shall now prove. The equations of motion for g ab are hidden in (5.4), 5) where L N is the Lie derivative with respect to the vector field N c ∂ c . From the first term in (5.5) we extract the secondary reality conditions as was expected. The last term in (5.5) is a combination of the type η ν µ f µ , as discussed in (2.9), with We had mentioned that the stabilization procedure simplifies when {η, H D } vanishes; a similar simplification occurs when, as here, {η, H D } is not zero but a harmless combination of the λ a (which are real). Thanks to this fact, and applying a similar argument to show the irrelevance of the factor N before M ab in (5.5), we are ready to consider the tertiary reality conditions. Since {M ab , H D } = {M ab , H c }, the tertiary reality conditions are The computation of (5.7) is a bit involved. It is useful to start by writing the canonical Hamiltonian (4.1) as a sum of three terms that clearly preserve the reality of a real triad. This way we will also gain information on the structure of the Hamiltonian; this information is useful whether we consider the metric or the triad reality conditions. The term N a ∼ H a (we have used the definition (4.5b)) in H c produces a time evolution of the triad that makes it acquire an imaginary part. This part can be eliminated by a rotation generated by ∼ H i . This way we obtain a unique linear combination of ∼ H a and ∼ H i that preserves the reality of a real triad. We are led to define Then H c is written as The rotations generated by the first term in (5.9), the integrand of which is equal to −N A i µ n µ ∼ H i , are not real in general. But note that according to the equations of motion (3.6) where we have used definitions (3.7d) and (3.7e).
Since Ω i µ will be real if the triad reality conditions hold, it is useful to rewrite H c as Let us display the action of these three terms of H c on t i a and A i a (since we are computing (5.7), recall that ). The first term in (5.11) is of the type with B i complex. It generates SO(3, C) rotations (R) of the triad vectors, δτ being an infinitesimal parameter, and for the connection components, that is, the Yang-Mills-like gauge transformation. The variations of the Ricci rotation coefficients are computed from the variations of the triad vectors, the results being where D a stands for the covariant derivative associated with the spin connection ω i a . The second term in (5.11) is It generates standard spatial (three-space) diffeomorphisms (D), that is, ,a )δτ . The third term in (5.11) generates a perpendicular diffeomorphism (that is, perpendicular to the constant-time hypersurfaces) plus a gauge rotation with descriptor , as we will show in Section VI. Thus in the real triad sector it does generate real variations. These variations (which we call δ S ′ to distinguish them from the variations δ S generated by ≈ H 0 ) are in fact identical to the variations generated by the scalar generator in the real triad formalism [3], although here we apply them even if the triad is not real. The resulting variation is 14) The corresponding variation of t i a is where M b a = e bc M ca , with e ac g cb = δ a b . When operating on A i a this transformation is, on the constraint hypersurfaces, where the symmetry of M ab (guaranteed by the constraint (4.5c)) has been used, and 3 R ab is the three dimensional Ricci tensor. Therefore the tertiary reality conditions (5.7) are automatically satisfied, for all terms on the right side of (5.16) are real by way of the primary and secondary reality conditions.
Also, we have more information: The first term on the right side of (5.16) is of the type (2.9) and will not give further consequences in subsequent levels of stabilization. The same is true for all the other terms, though they are not exactly of the type (2.9). For instance, consider the term N ,ab in the last term of (5.16). In stabilizing this term, notice that {N ,ab , H D } = λ ,ab which is already real. The next step, {λ ,ab , H D }, gives exactly zero.
Summing up, from the form of the right side of (5.16) we conclude that the metric reality conditions have been fully satisfied. The algorithmic procedure devised in the previous Section has terminated.
B. The triad reality conditions
The primary triad reality conditions are As before, (5.1a) and (5.1b) fix the arbitrary functions λ ∼ and λ a to be real. They do not have any further consequence.
The secondary reality conditions are ℑ{ ∼ T a i , H c } = 0: Using the primary triad reality condition (5.17c), we can write where the constraint (4.5c) has been used. These secondary reality conditions (5.19) were expected from the calculations of the metric reality condition case. The remaining terms of (5.18) give the rest of the secondary triad reality conditions, Notice that the object in (5.20) which is required to be real is the coefficient of We need not worry about the stabilization of (5.19) because this issue has been already addressed in the study of the metric reality conditions. We do have to be concerned with the stabilization of (5.20). The tertiary triad reality conditions read They determine the imaginary part of λ i in (4.3), where λ i 0 is a real arbitrary function. Notice that we have reduced the gauge freedom of rotations of the triad vectors from SO(3, C) to SO(3, R).
With this determination, the Dirac Hamiltonian becomes with λ ∼ , λ a , and λ i 0 all real arbitrary functions. H ′ D is now used for time evolution. The next reality condition is which is trivially satisfied: Since now we have the stronger result which guarantees that no further reality conditions will arise.
VI. PROJECTABILITY OF GAUGE SYMMETRIES
In this Section we will realize the full gauge group in phase space, including transformations based on spacetime diffeomorphisms and triad rotations. Two tasks are involved in this goal. The first one is to make the infinitesimal gauge transformations in configurationvelocity space projectable to phase space. From our previous experience with conventional general relativity [1], Einstein-Yang-Mills theory [2], and real triad theory [3], we know that the arbitrary functions in the infinitesimal spacetime diffeomorphisms must depend in an explicit way on the lapse and shift functions. This was sufficient in the case of general relativity, but in the latter two cases a second step was required: We needed to add a gauge rotation. We expect something similar to occur with the Ashtekar formulation.
The second task is to construct the generators of the gauge group in phase space and to check that the transformations they generate do indeed coincide with the projectable transformations in configuration-velocity space. Notice that now there is a consistency condition to be met which wasn't needed in our previous work: We must require that the gauge group preserve the reality conditions.
We have already calculated [3] the projectable variations of the configuration variables N ∼ and N a under diffeomorphisms with x µ → x µ − δ µ a ξ a − n µ ξ 0 , where the ξ µ are arbitrary functions. As in all the theories considered previously, this dependence on the lapse and shift functions is required in order to make the variations of N ∼ and N a projectable under the Legendre map. The resulting variations under perpendicular diffeomorphisms (P D), with descriptor ξ 0 (with ξ ∼ 0 = t −1 ξ 0 , which will be useful later), are The resulting variation of We can rewrite the variation of ∼ T a i in terms of the canonical variables, using the equation of motion (5.10) so that Ω i µ n µ = A i µ n µ − iN −1 T ai N ,a . Also, using equation of motion (3.6), we find The result is that The variation of the Ashtekar connection requires a little more work. Since under perpendicular diffeomorphisms we will be concerned only with on-shell variations (that is, variations of solutions), our task is to find the appropriate variations of the four-dimensional Ricci rotation coefficients. We begin with the three-dimensional coefficients ω i a , which are constructed from the triad and whose variation therefore requires only (6.2). We showed in [3] that generally Using (6.2) we find Note that (5.15) demonstrates that We will calculate the variation of Ω 0i a in (3.7d) using the expression The general variation of the four-dimensional Christoffel symbols 4 Γ 0 ab under a diffeomorphism with descriptor ǫ µ is Using methods employed in [2], we find Finally, substituting (6.5) and (6.8) into we find that on-shell We turn finally to the variation of A i 0 . Results obtained in [3] are and The most efficient calculation of the on-shell variation of Ω 0i 0 is accomplished by proceeding from the expression (3.7e), using the variations (6.1) and (6.8). For this purpose we also require the variation The result is Using (6.10) and (6.13), we deduce that on-shell Notice that this variation is not projectable under the Legendre map due to the presence of time derivatives of the gauge functions A i 0 , N , and N a in the next to last line of (6.14). But fortunately, the final two lines of (6.14) are a variation under a gauge rotation with descriptor That means we must accompany perpendicular diffeomorphisms with a gauge rotation with the descriptor −θ i to obtain a gauge variation which is projectable under the Legendre map. It is significant that on-shell, according to (5.10), −θ i = ξ 0 n µ Ω i µ , so that in the real triad sector the required gauge rotation is real, and in fact we recover the same projectability condition as in the real triad formulation of general relativity [3].
Finally we write down the variation of A i 0 under a spatial diffeomorphism. Since Ω i µ and Ω 0i µ each transform as a four-vector under these transformations, the result is the usual Lie derivative,
VII. SYMMETRY GENERATORS
We now turn to the gauge group itself and the structure and algebra of the generators of this group.
A. Group algebra
First, we will find the transformations of non-gauge variables generated by each of the secondary constraints. For this purpose let us define These generators are written at a given time (that is not explicitly given in the notation). All brackets associated with them are equal-time brackets. These generate gauge rotations, spatial diffeomorphisms plus associated gauge rotations, and perpendicular diffeomorphisms plus associated gauge rotations, respectively. We have, for example, Thus, according to our discussion following (6.15), S[ξ ∼ 0 ] does indeed generate a projected variation. Notice also that we obtain a real projected variation of a real triad if we undo the imaginary rotation of the triad due to the imaginary descriptor i The generator on non-gauge variables is As we noted in the discussion preceding (5.14), in the real triad sector this object generates the same variations as the scalar generator S[ξ ∼ o ] in the real triad theory [3].
It is convenient from a geometrical perspective to define generators of non-gauge variables which effect pure spatial diffeomorphisms. Using (7.2b) we deduce that the required generator is This is the real triad sector term we isolated in (5.11).
We are now in position to calculate the entire group algebra from the transformation properties in configuration-velocity space, projected to phase space. The projections under the Legendre map of the variations of the generators are Poisson brackets of generators. The calculations parallel those in [2,3], except here it is technically simpler, and conceptually rewarding, also to calculate the Poisson bracket {S[ξ where in (7.5e) It will be useful in constructing the final complete gauge generators to have the algebra of the set R, V , and S. Using the brackets above the remaining non-vanishing brackets are where for clarity we use the notation R[ξ i ] in the last equation instead of R[ξ] as in (7.1a).
B. Complete symmetry generators
The canonical Hamiltonian in terms of the generators takes the form where we define and where spatial integrations over corresponding repeated capital indices are assumed. It was shown in [1] that the complete symmetry generators then take the form A ; (7.8) the descriptors ξ A are arbitrary functions: The simplest choice for the G A are the primary constraints P A , with the result that where the structure functions are Using the brackets calculated in the previous Section we read off the following non-vanishing structure functions: With the use of the structure functions derived above, we obtain the following generators, denoted by G R [ξ], G V [ η], and G S [ζ ∼ 0 ]. These generate, respectively, gauge rotations, spatial diffeomorphisms, and perpendicular diffeomorphisms (plus associated gauge rotations in the last two cases): We wish to emphasize the following point: Notice that the variation of A i 0 generated by G S [ξ ∼ 0 ] is, using (6.15), The second term removes the offending time derivatives of gauge variables, so that the first two variations taken together are projectable. The third variation is projectable, and in fact when combined with the variation generated by G S [ξ ∼ 0 ] produces a variation which conserves the reality of real triads, as we noted in defining the generator S ′ [ξ ∼ 0 ] in (7.3). The general relation is Note that the secondary constraint term in G S ′ is just (7.3). Finally, we use the generators above to construct G D [ ξ], the complete generator of spatial diffeomorphisms with descriptor ξ. Refer to (7.4); the generator is evidently, using the equation of motion (4.4b),
C. The Hamiltonian and rigid time translation
Now that we have the complete set of generators, we can reconstruct the Hamiltonian, recognizing that rigid (in the sense of advancing by the same infinitesimal parameter on each constant-time hypersurface) translation in time is a diffeomorphism implemented on restricted members of equivalence classes of solution trajectories. We take as given explicit spacetime functions ξ ∼ 0 and ξ a . We restrict our attention to solutions for which tξ ∼ 0 = N δτ and ξ a = N a δτ for some infinitesimal parameter δτ . However, we recall that does not generate a pure diffeomorphism. We must subtract the additional gauge rotation generated by According to (7.2c) the descriptor of this gauge rotation is When we restrict this descriptor to those solutions for which ξ 0 = N δτ and ξ a = N a δτ , the descriptor becomes A i 0 δτ − A i a N a δτ . We deduce that the required Hamiltonian is where we have used the fact that Hamiltonian in (7.13) is coincident with the canonical Hamiltonian (4.6)! The gauge variables N, N a , A i 0 in (7.13) are now to be thought of as arbitrarily chosen but explicit functions of spacetime. This object (7.13) will then generate a time translation, which is rigid in the sense of having the same constant value δτ on each equal-time hypersurface, but only on those members of equivalence classes of solutions for which the dynamical variables N, N a , A i 0 have the same explicit functional forms. On all other solutions the corresponding variations correspond to more general diffeomorphism and gauge transformations.
In fact, as we pointed out in [2], every generator G[ξ A ] in (7.9) with ξ 0 > 0 may be considered to be a Hamiltonian in the following sense: generates a global time translation on those solutions which have We have already demonstrated this fact for the non-gauge variables, and it is instructive to verify the claim for the gauge variables N , N a , and A i 0 . The demonstration for N and N a is given in [2]. Substituting (7.14) into (7.11a,7.11c,7.11e), we have
D. Finite real gauge transformations
We close this Section by noting that the arguments presented in Section V demonstrating the preservation of reality conditions under time evolution apply almost unaltered to finite arbitrary symmetry transformations. The only restrictions which must be placed on the descriptors ξ i and ξ a are that they be real. The triad reality condition implies in addition that we must employ the generator G S ′ [ξ ∼ 0 ], defined in (7.11d), instead of G S [ξ ∼ 0 ], defined in (7.2c), and the descriptor ξ ∼ 0 must be real.
Then we find, as in (5.18), with the simple substitutions when ξ ∼ 0 is real. The next and higher levels of reality stabilization are satisfied, just as in Section V, with the substitutions The complete infinitesimal gauge generator which respects the triad reality condition is where ξ A are real (if one has only the metric reality conditions, then only ξ and ξ ∼ 0 need be real). Finally, the finite real generator (which complies with the triad reality conditions), for finite parameter τ , is We substitute into the constraints (4.5a) and (4.5b) (remember that the content of (4.5c) is the condition that M ab be symmetric). We get, for (4.5a) ( 3 R is the three-Ricci scalar), and for (4.5b), These are the standard scalar and vector constraints for canonical ADM general relativity [24]. This is an expected result, because M ab gives, according to (5.4), the initial values for the components of the extrinsic curvature.
The initial data are, therefore: N , N a , M ab , all real with M ab symmetric, and t i a , A i 0 , complex. Thus we are implementing the constraints (4.5c) and the secondary reality condition (5.6). A i a is then determined by (8.1). This amounts to 1 + 3 + 6 + 2 × (9 + 3) = 34 real pieces of data. But t i a must satisfy the 6 restrictions coming from the first metric reality condition (5.1), and both M ab and t i a must fulfill the 4 constraints (8.2) and (8.3). The number of independent real pieces of data is then 34 − 6 − 4 = 24. Now let us turn to the gauge freedom. We have the 4 generators corresponding to the space-time diffeomorphisms and the 6 generators for SO(3, C), three for real rotations and three for imaginary rotations. This totals 10 generators. All these generators, as we have seen in the previous Section, contain primary and secondary first class constraints. This means that we must spend 2 gauge fixing constraints for each generator-see, for example, [4] for the theory of gauge fixing. Hence we must produce 2 × 10 = 20 gauge fixing constraints to eliminate fully the unphysical degrees of freedom. The final counting of physical degrees of freedom is therefore 24−20 = 4. This is the standard number of degrees of freedom of general relativity.
B. With the triad reality conditions
Now the initial data are: N , N a , M ab , t i a , and ℜA i 0 , all real with M ab symmetric. In this way we have already implemented the primary and secondary triad reality conditions. A i a is determined, as before, by (8.1), and the imaginary part of A i 0 is determined by (5.20). This amounts to 1 + 3 + 6 + 9 + 3 = 22 real pieces of data. But t i a and M ab are still constrained to satisfy the 4 ADM constraints (8.2) and (8.3). The number of independent real data is then 22 − 4 = 18. Now let us turn to gauge freedom. We have the 4 generators corresponding to the spacetime diffeomorphisms and the 3 generators that are left after reducing SO(3, C) to SO(3, R) in order to preserve (5.20). This totals 7 generators. As we have mentioned above, we must introduce 2 gauge fixing constraints for each generator. The final counting of physical degrees of freedom is, again, 18 − 14 = 4.
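The bookkeeping of Subsections VIII A and VIII B can be restated as a few lines of arithmetic. The snippet below simply re-adds the integers quoted in the text (it derives nothing new; the groupings in the comments are our own reading of the counting).

```python
# Metric reality conditions (Subsection VIII A)
data  = 1 + 3 + 6 + 2*(9 + 3)   # N, N^a, M_ab real; t^i_a, A^i_0 complex -> 34
data -= 6 + 4                   # metric reality condition on t^i_a, ADM constraints
gauge = 2 * (4 + 6)             # 2 gauge fixings per generator: 4 diffeos + SO(3,C)
print(data - gauge)             # 24 - 20 = 4

# Triad reality conditions (Subsection VIII B)
data  = 1 + 3 + 6 + 9 + 3       # N, N^a, M_ab, t^i_a, Re A^i_0, all real -> 22
data -= 4                       # ADM constraints
gauge = 2 * (4 + 3)             # 4 diffeos + SO(3,R)
print(data - gauge)             # 18 - 14 = 4
```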
IX. CONCLUSIONS
In this paper we have given a full account of two issues concerning the complex Ashtekar approach to canonical gravity: the nature of the gauge group and the implementation of reality conditions. We have solved the problem of the projectability of the spacetime diffeomorphism transformations from configuration-velocity space to phase space; we have constructed the complete set of canonical generators of the gauge group in phase space (which includes the gauge variables); and we have verified that they indeed generate the projected gauge transformations obtained from configuration-velocity space. This result proves that the canonical formalism is capable of displaying all the gauge structure of the theory, including the time diffeomorphisms, and in particular it proves that the gauge group in configuration-velocity space is the same as in phase space-the only difference is a matter of a convenient basis for the generators.
The gauge rotations which must be added to spacetime diffeomorphisms in order to achieve projectability differ somewhat from the Einstein-Yang-Mills case (see [2]). The difference is due to the fact that the Ashtekar connection is not a manifest spacetime one-form under diffeomorphisms for which the descriptor ǫ 0 ,a ≠ 0. The full projectable transformation group must be interpreted as a transformation group on the space of solutions of the equations of motion. The pullback of variations of A i a from phase space to configuration-velocity space yields variations δ∼ T a i which only coincide on-shell with d dt ∼ T a i . However, if we use only the pullback of the variations of the configuration variables, ignoring the pullback of momentum variables, the resulting variation of the Lagrangian is a divergence (note that we have been ignoring boundary terms; for a discussion of the algebra of spatial diffeomorphisms including boundary terms, see [25]). These pullbacks yield Noether Lagrangian symmetries. For details see [2,26]. This restriction to solution trajectories is intimately related to our demonstration that all G[ξ A ] generators (with ξ ∼ 0 > 0) can be interpreted as Hamiltonians (for time evolution, in the sense discussed in Section VII).
Since the complex character of the Ashtekar connection introduces the issue of reality conditions, we have first produced a general theoretical framework for the stabilization algorithm for these conditions. We showed that there are striking differences from Dirac's method of stabilization of constraints (reality conditions are not constraints in the Dirac sense). For instance, the calculation that shows that the stabilization procedure has been completed is typically not nearly as straightforward as in the Dirac case.
Our display of the reality conditions for Ashtekar's formulation is not new, but we present a rigorous proof, based on the stabilization algorithm, that the set of reality conditions and the algorithmic computation are complete. Also, in the case of the triad reality conditions, we showed that the stabilization algorithm implies the partial determination of some of the arbitrary functions (actually, the determination of their imaginary parts) in the Dirac Hamiltonian H D . We have proved that the reality conditions are consistent with the gauge group.
We note two links between the triad reality conditions and the canonical generators associated with projectable diffeomorphisms. First, the form of our generator (7.4) for spatial diffeomorphisms of the nongauge variables is the same as the form of the generator (5.8) dictated by the triad reality conditions. In contrast, in the Einstein-Yang-Mills case [2], the form of this generator was more a matter of convenience than necessity. Second, the form of the canonical Hamiltonian in (5.11) was suggested by triad reality conditions. When N ∼ is replaced by ξ ∼ 0 in the third term in the integrand, one obtains the generator (7.3) of the canonical version of the perpendicular diffeomorphisms-when a rotation is subtracted to make these diffeomorphisms projectable; this rotation cancels the next to last term in (6.15). In fact, the rotation which is subtracted is identified as being a real rotation within the triad reality conditions (see 5.20). Finally, we presented the counting of degrees of freedom, either under the metric reality conditions or the triad reality conditions. We showed this number matches the standard number of degrees of freedom of general relativity.
We feel that this work provides a new understanding of spacetime diffeomorphisms in the full (that is, including the gauge variables) complex canonical formalism of Ashtekar for gravity. We expect that implications for an eventual quantum theory of gravity will include insights into the problem of time in such a theory. We will be investigating these ideas further.
ACKNOWLEDGMENTS
JMP and DCS would like to thank the Center for Relativity of The University of Texas at Austin for its hospitality. JMP acknowledges support by CICYT, AEN98-0431, and CIRIT, GC 1998SGR, and wishes to thank the Comissionat per a Universitats i Recerca de la Generalitat de Catalunya for a grant. DCS acknowledges support by National Science Foundation Grant PHY94-13063. We also wish to thank the referee for carefully reading this paper and suggesting improvements.
|
2017-09-07T14:13:55.315Z
|
1999-12-20T00:00:00.000
|
{
"year": 1999,
"sha1": "b679232388100f95e1fc96c4004bfd80c5893cd8",
"oa_license": "CC0",
"oa_url": "http://diposit.ub.edu/dspace/bitstream/2445/12443/1/185908.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "62efd5f1c07004cb2a791f7e40fcac9132263d0d",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
235652067
|
pes2o/s2orc
|
v3-fos-license
|
$\ell^p$-Distances on Multiparameter Persistence Modules
Motivated both by theoretical and practical considerations in topological data analysis, we generalize the $p$-Wasserstein distance on barcodes to multiparameter persistence modules. For each $p\in [1,\infty]$, we in fact introduce two such generalizations $d_{\mathcal I}^p$ and $d_{\mathcal M}^p$, such that $d_{\mathcal I}^\infty$ equals the interleaving distance and $d_{\mathcal M}^\infty$ equals the matching distance. We show that on 1- or 2-parameter persistence modules over prime fields, $d_{\mathcal I}^p$ is the universal (i.e., largest) metric satisfying a natural stability property; this extends a stability theorem of Skraba and Turner for the $p$-Wasserstein distance on barcodes in the 1-parameter case, and is also a close analogue of a universality property for the interleaving distance given by the second author. We also show that $d_{\mathcal M}^p\leq d_{\mathcal I}^p$ for all $p\in [1,\infty]$, extending an observation of Landi in the $p=\infty$ case. We observe that on 2-parameter persistence modules, $d_{\mathcal M}^p$ can be efficiently approximated. In a forthcoming companion paper, we apply some of these results to study the stability of ($2$-parameter) multicover persistent homology.
Introduction
Topological data analysis (TDA) extracts multiscale information about the geometry of a data set by constructing a diagram of topological spaces from the data. The standard example is persistent homology [50,92], which constructs a filtration, i.e., a diagram indexed by a totally ordered set T , and then applies homology with field coefficients to obtain descriptors of the data called barcodes. A barcode is simply a collection of intervals in T . In the last two decades, persistent homology has been the subject of intense interest, leading to a rich theory, highly efficient algorithms and software, and hundreds of applications [25,56,76,77].
However, in many settings, e.g., in the study of noisy or time-varying data, a single filtration cannot fully capture the structure of interest in the data. We are then led to consider multiparameter persistent homology, where we instead construct a diagram of spaces indexed by a product of d ≥ 2 totally ordered sets; the case d = 2 is of particular interest in applications [26]. Applying homology with field coefficients to this diagram yields a diagram of vector spaces called a (multiparameter) persistence module. Whereas the isomorphism type of a 1-parameter persistence module is completely described by a barcode, this is not the case for 2 or more parameters, and no fully satisfactory generalization of a barcode is available in the multiparameter setting [17,26]. This creates substantial challenges for the development of multiparameter persistent homology as a practical data analysis methodology. A great deal of recent research has been aimed at addressing these challenges, e.g., [15,17,43,49,60,67,72,81,82] and the progress has been very encouraging. While practical applications of multiparameter persistence are still in their early stages, a few promising applications have been developed, e.g., in [28,90], and with the recent advent of efficient algorithms and software for multiparameter persistence [62,63,68,69] there seems to be great potential for further applications.
Much of both the theory and applications of 1-parameter persistence use barcodes primarily via distances defined on the space of barcodes. Analogously, to develop theory and applications of multiparameter persistent homology, one needs suitable distances on multiparameter persistence modules.
Distances on multiparameter persistence modules have thus been a major focus of recent work, and many have been proposed, e.g., in [21,30,31,57,67,70,82,87]. Of these, the distances that have received the most attention and (arguably) have the most fully developed theory are ℓ ∞ -distances, i.e., they can be defined in terms of ℓ ∞ -metrics on Euclidean spaces, and they behave accordingly; we discuss these distances in detail below. But as we will explain, there is a clear need in both theory and applications for ℓ p -distances on multiparameter persistence modules. In spite of some prior work in this direction (see Section 1.5), our understanding of such distances has lagged well behind our understanding of ℓ ∞ -distances.
In this paper, we develop a theory of ℓ p -distances on multiparameter persistence modules which extends and closely parallels the existing theory for ℓ ∞ -distances, and at the same time extends fundamental ideas about ℓ p -distances on barcodes in the 1-parameter setting. We expect the ideas introduced here to be useful in both theory and applications of multiparameter persistent homology.
1.1. Generalizations of the Bottleneck Distance. In the theory of persistent homology, the most widely used distance on barcodes is the bottleneck distance d ∞ W . It is used to state standard stability results for persistent homology [3,33,34,38,40], and it plays an important role in the statistical foundations of the subject [53], as well as in the computational theory [29,84].
In the TDA literature, d ∞ W is generalized in two key directions, as summarized by the middle row and middle column of Table 1. First, d ∞ W generalizes to the p-Wasserstein distances d p W on barcodes [27,41,79]; this amounts to replacing an ℓ ∞ -norm appearing in the definition of d ∞ W with an ℓ p -norm. In fact, several variants of the definition of d p W appear in the literature, which differ in a choice of ℓ q -norm on R 2 . Following [79,85], we take p = q in this paper. See Section 2.4 for the definition of d p W and its variants.
Many practical applications of persistent homology involve the computation of p-Wasserstein distances on barcodes, usually with p ∈ {1, 2, ∞}. Efficient algorithms and code are available for this [39,61]. In spite of the importance of d ∞ W to the theory, applications using a small value of p appear to be more common; papers describing practical applications of d 1 W or d 2 W include [5,7,10,18,23,24,48,54,55,58,64,75,79,80,91]. Loosely speaking, as p decreases the p-Wasserstein distances become relatively less sensitive to outlying intervals in the barcodes, and more sensitive to the number of small intervals, which may explain in part why small choices of p are often preferred.
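As a concrete (if naive) illustration of how d p W is computed in the q = p convention, the sketch below reduces the optimal partial matching to an assignment problem by padding each barcode with diagonal copies. The function name and the toy barcodes are our own; production code would instead use an optimized implementation such as the one in Hera.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_p(B, C, p=2.0):
    """p-Wasserstein distance (finite p) between two finite barcodes B, C,
    given as lists of (birth, death) pairs, using the q = p convention."""
    m, n = len(B), len(C)
    B, C = np.asarray(B, float), np.asarray(C, float)

    def diag_cost(pt):                     # ell^p distance of a point to the diagonal
        half = abs(pt[1] - pt[0]) / 2.0
        return (2 * half**p) ** (1.0 / p)

    K = np.zeros((m + n, m + n))           # padded cost matrix, entries are p-th powers
    for i in range(m):
        for j in range(n):
            K[i, j] = abs(B[i, 0] - C[j, 0])**p + abs(B[i, 1] - C[j, 1])**p
        K[i, n:] = diag_cost(B[i]) ** p    # B[i] matched to the diagonal
    for j in range(n):
        K[m:, j] = diag_cost(C[j]) ** p    # C[j] matched to the diagonal
    rows, cols = linear_sum_assignment(K)  # optimal matching
    return K[rows, cols].sum() ** (1.0 / p)

print(wasserstein_p([(0, 4), (1, 2)], [(0, 5)], p=1))   # 2.0 for this toy pair
```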
On the theoretical side, the 1-Wasserstein distance on barcodes is often used to formulate stability results for "vectorizations" of barcodes, i.e., for maps from barcode space to a linear space which are used in machine learning applications of TDA; see, e.g., [85,Section 7] and the references given there.
As the second key direction of generalization, d ∞ W extends to a pair of distances on multiparameter persistence modules, the interleaving distance d I [33,37] and the matching distance d M [30]. The interleaving distance is the most prominent distance in the literature on multiparameter persistence modules. It has proven to be a useful theoretical tool in TDA, e.g., for formulating stability and consistency results [15,66,81]. The use of d I in the TDA theory is justified in part by a universal property which says that on multiparameter persistence modules over prime fields, d I is the largest distance satisfying a natural stability property [67]; see Theorem 1.4 below.
The result that d I = d ∞ W on 1-parameter persistence modules is known as the isometry theorem [3,11,22,37,67]; see Theorem 2.17 for a formal statement. The inequality d I ≥ d ∞ W , originally due to Chazal et al. [33], is called the algebraic stability theorem. It plays a particularly important role in persistence theory.
It has been shown that computing d I on n-parameter persistence modules is NP-hard for n ≥ 2 [12]. This motivates the search for a computable surrogate for d I , and the matching distance has emerged as a natural choice. It has been shown by Landi [65] (and is implicit in earlier work by Cerri et al. [30]) that d M ≤ d I . On bipersistence modules (i.e., 2-parameter persistence modules), d M is computable in polynomial time [60]. It can also be efficiently approximated [9,62], and a fast implementation of the approximation algorithm of [62] was recently made available as part of the Hera code [61]. It is expected that these results about computing d M on bipersistence modules extend to higher-parameter persistence modules, though this has not been proven. While there have been only a few applications of the multidimensional matching distance to date [9,59], it seems likely that recent advances in 2-parameter persistence computation and software [62,63,69] will lead to further applications.
1.2. Stability of the Interleaving and Wasserstein Distances. The p-Wasserstein distances and interleaving distance satisfy similar stability properties, which will be important points of reference for our work. To state these properties, we will need a few definitions. We consider R n as a poset with the usual product partial order, i.e., (a 1 , . . . , a n ) ≤ (b 1 , . . . , b n ) if and only if a i ≤ b i for each i. Given a CW-complex X, let Cells(X) denote the set of cells of X. Following [85], we say a function f : Cells(X) → R n is monotone if f (σ) ≤ f (τ ) whenever the attaching map of τ has non-trivial intersection with σ. Definition 1.1 (Sublevel Filtrations). (i) For any topological space T and function f : T → R n , the sublevel filtration of f , denoted S(f ), is the functor R n → Top given by S(f ) a := {x ∈ T | f (x) ≤ a} for each a ∈ R n , with the internal maps of S(f ) taken to be inclusions. (ii) Similarly, given a CW-complex X and monotone f : Cells(X) → R n , the sublevel filtration of f , also denoted S(f ), is the functor R n → Top given by taking S(f ) a to be the following subcomplex of X: the union of the cells σ ∈ Cells(X) with f (σ) ≤ a, with the internal maps again taken to be inclusions.
For p ∈ [1, ∞] and v ∈ R n , let ∥v∥ p denote the p-norm of v.
Notation 1.2. Given a set S and function f : S → R n , let ∥f ∥ ∞ := sup s∈S ∥f (s)∥ ∞ , and if f has finite support, then for all p ∈ [1, ∞) let ∥f ∥ p := ( Σ s∈S ∥f (s)∥ p p ) 1/p . Let H i denote the (singular or cellular) homology functor with coefficients in some fixed field k. In what follows (and throughout this paper), by a distance we mean an extended pseudometric; see Definition 2.12. Definition 1.3 (Stability Properties). For n ≥ 1 and p ∈ [1, ∞], we say a distance d on n-parameter persistence modules is (i) stable if for all topological spaces T , functions f, g : T → R n , and i ≥ 0, we have d(H i S(f ), H i S(g)) ≤ ∥f − g∥ ∞ , (ii) p-stable if for any finite CW-complex X, monotone f, g : Cells(X) → R n , and i ≥ 0, we have d(H i S(f ), H i S(g)) ≤ ∥f − g∥ p , (iii) p-stable across degrees with constant c if for all X, f , and g as in (ii), we have ∥d(H * S(f ), H * S(g))∥ p ≤ c · ∥f − g∥ p , where d(H * S(f ), H * S(g)) denotes the sequence of non-negative real numbers ( d(H i S(f ), H i S(g)) ) i≥0 . Theorem 1.4 (Stability and Universality of the Interleaving Distance [67]). For any n ≥ 1, (i) d I is stable on n-parameter persistence modules, (ii) if the field of coefficients k is prime and d is any stable distance on n-parameter persistence modules, then d ≤ d I .
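To make Definition 1.1 and Notation 1.2 concrete, here is a minimal Python sketch (not from the paper; the toy cell complex and function values are illustrative assumptions, and the norm conventions follow Notation 1.2 as stated above) computing a sublevel set S(f ) a of a monotone function on a tiny CW-complex and the quantity ∥f − g∥ p appearing in Definition 1.3.

```python
def sublevel(f, a):
    # S(f)_a for monotone f: Cells(X) -> R^n (Definition 1.1 (ii)):
    # the cells sigma with f(sigma) <= a coordinate-wise.
    return {s for s, v in f.items() if all(x <= y for x, y in zip(v, a))}

def lp_norm(vec, p):
    if p == float("inf"):
        return max(abs(x) for x in vec) if vec else 0.0
    return sum(abs(x) ** p for x in vec) ** (1.0 / p)

def fn_dist(f, g, p):
    # ||f - g||_p in the sense of Notation 1.2: an l_p combination of the
    # per-cell l_p norms of f(sigma) - g(sigma).
    per_cell = [lp_norm([x - y for x, y in zip(f[s], g[s])], p) for s in f]
    return lp_norm(per_cell, p)

# Toy example: two vertices and one edge, with R^2-valued filtration functions.
f = {"v0": (0.0, 0.0), "v1": (0.0, 0.0), "e": (3.0, 3.0)}
g = {"v0": (0.0, 0.0), "v1": (0.0, 0.0), "e": (2.0, 2.0)}
print(sublevel(f, (2.5, 2.5)))      # {'v0', 'v1'}: the edge enters only at (3, 3)
print(fn_dist(f, g, 1))             # 2.0
print(fn_dist(f, g, float("inf")))  # 1.0
```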
In view of the isometry theorem, Theorem 1.4 (i) generalizes the well-known stability theorem for sublevel persistent homology of R-valued functions given in [40,46]. In fact, the proof of Theorem 1.4 (i) turns out to be trivial, but the proof of Theorem 1.4 (ii) requires some work.
Recently, Skraba and Turner have established the following fundamental ℓ p -stability result for the 1-parameter persistent homology of filtered CW-complexes. Theorem 1.5 ([85]). For all p ∈ [1, ∞], the p-Wasserstein distance d p W is p-stable; that is, for any finite CW-complex X, monotone functions f, g : Cells(X) → R, and i ≥ 0, we have d p W (H i S(f ), H i S(g)) ≤ ∥f − g∥ p .
Note that in the special case of R-valued monotone functions on finite CW-complexes, Theorem 1.4 (i) coincides with the p = ∞ case of Theorem 1.5. In [85], Theorem 1.5 is applied to obtain new stability results for sublevel filtrations of greyscale images, Vietoris-Rips complexes, and the persistent homology transform. Remark 1.6. A 2010 paper of Cohen-Steiner et al. gives a different stability result for sublevel persistent homology using a Wasserstein distance on barcodes [41]. This result concerns Lipschitz functions on triangulable, compact metric spaces, and uses the variant of p-Wasserstein distance on barcodes defined using the ℓ ∞ -norm on R 2 , rather than the ℓ p -norm; see Remark 2.16.
1.3. The p-Presentation and p-Matching Distances. For p ∈ [1, ∞], we introduce two generalizations of the p-Wasserstein distance d p W to finitely presented multiparameter persistence modules. We call these the p-presentation distance (or simply presentation distance) and the p-matching distance, and denote them by d p I and d p M , respectively. In the case p = ∞, they are respectively equal to the interleaving distance and matching distance. We show that several fundamental properties of the Wasserstein, interleaving, and matching distances extend to our new distances.
A number of other ℓ p -type distances on multiparameter persistence modules have recently been proposed; we discuss these below, in Section 1.5. However, our work is the first to consider a common generalization of the Wasserstein and interleaving distances, and also the first to consider a common generalization of the Wasserstein and matching distances.
We next give a brief description of p-presentation and p-matching distances; details can be found in later sections of the paper. A presentation of an n-parameter persistence module M is a morphism of free modules γ : F → G whose cokernel is isomorphic to M . If M is finitely presented, then by choosing ordered bases of F and G, we can represent γ as a matrix with an R n -valued label attached to each row and each column; we call this a presentation matrix. Given presentation matrices P M and P N for persistence modules M and N with the same underlying matrix, let d p (P M , P N ) denote the ℓ p -distance between the sets of labels of P M and P N . Define d̂ p I (M, N ) to be the infimum of d p (P M , P N ) over all such choices of P M and P N . It turns out that d̂ p I does not satisfy the triangle inequality when n ≥ 2 and p ∈ [1, ∞). We define d p I to be the largest distance that is bounded above by d̂ p I and satisfies the triangle inequality. The matching distance d M (M, N ) is defined in terms of the bottleneck distance between 1-parameter affine restrictions of M and N . The definition of d p M amounts simply to replacing the bottleneck distance with the p-Wasserstein distance in the definition of d M .
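The quantity d p (P M , P N ) described above is elementary to compute once two labeled presentation matrices with the same underlying matrix are in hand. The following Python sketch is an illustration, not code from the paper: the representation of the labels as parallel lists is an assumption, and d̂ p I itself is an infimum over all compatible pairs of presentations, which the sketch makes no attempt to optimize.

```python
def label_dist(labels_M, labels_N, p):
    # labels_M, labels_N: the row and column labels of two presentation
    # matrices with the same underlying matrix, listed in the same order;
    # each label is a point of R^n.  Returns the l_p-distance between the
    # label vectors, using the l_p norm on R^n for each individual label.
    assert len(labels_M) == len(labels_N)
    def lp(vec):
        if p == float("inf"):
            return max(abs(x) for x in vec) if vec else 0.0
        return sum(abs(x) ** p for x in vec) ** (1.0 / p)
    per_label = [lp([x - y for x, y in zip(a, b)])
                 for a, b in zip(labels_M, labels_N)]
    return lp(per_label)

# Two presentations sharing the same underlying matrix: two rows and one
# column, with labels perturbed in one generator and one relation.
print(label_dist([(0, 0), (0, 1), (2, 2)],
                 [(0, 0), (1, 1), (2, 3)], p=1))  # 2.0
```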
1.4. Main Results. The following theorem describes the relationship of our distances to each other, and to the other distances on persistence modules mentioned above. Theorem 1.7. Let p ∈ [1, ∞]. On finitely presented multiparameter persistence modules, (i) d ∞ I = d I , (ii) d ∞ M = d M , (iii) d p M ≤ d p I . (iv) On finitely presented 1-parameter persistence modules, d p I = d p W = d p M . The content of the theorem can be summarized by placing the expression d p M ≤ d p I in the lower right corner of Table 1, as we have done. We prove (i) and the first equality of (iv) in Section 3; (iii) is proven in Section 4; and (ii) and the second equality of (iv) are immediate from the definitions. A proof of a generalization of (i) is implicitly given in [67,Section 4] as part of the proof of Theorem 1.4 (ii). That proof of (i) is somewhat technical; here we provide a more intuitive proof. Our proof of (iv) is similar to the proof of Theorem 1.5 given in [85].
Together, Theorem 1.7 (i) and the p = ∞ case of Theorem 1.7 (iv) imply the isometry theorem (i.e., that d I = d ∞ W ) for finitely presented 1-parameter persistence modules. In fact, our proofs of Theorem 1.7 (i) and (iv) together amount to a novel proof of the isometry theorem. While there already exist several good proofs of the isometry theorem which work in greater generality [3,4,11,37], we feel that the proof given here may be of interest, because it is simple, intuitive, and rests on established facts known to be useful elsewhere. Our proof is similar in some respects to existing proofs of algebraic stability and stability for R-valued functions; specifically, [42,85] use a very similar interpolation strategy, and [33,37,40] use a somewhat similar one.
As an application of the first equality of Theorem 1.7 (iv), we observe that on bipersistence modules, d p M can be efficiently approximated, up to arbitrarily small error, by a simple extension of the algorithm of [62] for approximating d M ; see Section 4.2 for details, including a precise runtime bound.
In view of what is known about computation of d I and d M , the following conjecture seems natural, and is left to future work: Conjecture 1.8.
(i) For fixed n ≥ 1, the distance d p M on finitely presented n-parameter persistence modules is exactly computable in polynomial time.
(ii) Computing d p I on finitely presented 2-parameter persistence modules is NP-hard, for all p ∈ [1, ∞].
Our final result, which is perhaps the central result of the paper, concerns the stability and universality of the presentation distance on 1-and 2-parameter persistence modules. The result extends Skraba and Turner's cellular ℓ p -stability result Theorem 1.5, and is a close analogue of Theorem 1.4.
In the statement, we adopt the convention that 1/∞ = 0. Theorem 1.9. For any p ∈ [1, ∞] and n ∈ {1, 2}, (i) d p I is p-stable, and also p-stable across degrees with constant n 1/p , on finitely presented n-parameter persistence modules, (ii) if the field of coefficients k is prime and d is any p-stable distance on finitely presented n-parameter persistence modules, then d ≤ d p I .
(i) For both the cases n = 1 and n = 2, the constant n 1/p appearing in Theorem 1.9 (i) is tight; see Proposition 5.13. (ii) In the case n = 1, Theorem 1.9 (ii) in fact holds for arbitrary fields k. (iii) In the case n = 1, Theorem 1.9 (i) is an immediate consequence of Skraba and Turner's result Theorem 1.5 and Theorem 1.7 (iv), but can be proven more directly via an argument similar to the proof of Theorem 1.5; see Section 5.1. (iv) Theorem 1.7 (iii) implies that Theorem 1.9 (i) also holds if d p I is replaced by d p M .
(v) Note that in the case p = ∞, Theorem 1.9 is somewhat weaker than Theorem 1.4: Theorem 1.4 holds for persistence modules with an arbitrary number of parameters, and does not require any finiteness assumptions. In addition, Theorem 1.4 (i) holds for the sublevel filtrations of arbitrary R n -valued functions, not only for monotone functions on cell complexes.
1.5. Other Approaches to Defining ℓ p -Distances. The problem of generalizing the Wasserstein distance on barcodes was first studied by Bubenik, Scott, and Stanley [21], who consider the variant of d p W defined in terms of the ℓ 1 -norm on R 2 . They introduce and study a generalization of this to diagrams of vector spaces indexed by an arbitrary small category, which includes the case of multiparameter persistence modules. In this case, their distance, which we will denote as D p , is not equal to either d p M or d p I for any p, and has rather different properties. For example, d p I and d p M are always finite on finitely generated, free R n -indexed or Z n -indexed persistence modules with the same number of generators, but when n ≥ 2, D p is infinite unless the modules are isomorphic. Another qualitative difference is that for indecomposable persistence modules M and N , D p (M, N ) is in fact independent of p, but our distances generally are not. It is shown in [21] that D p satisfies a universal property, though this is rather different than the universal property of Theorem 1.9 (ii). In [85], Skraba and Turner also consider the extension of their algebraic Wasserstein distance d p A to other abelian categories, giving axioms on such a generalization which ensure that it satisfies the triangle inequality. However, no generalization of d p A to multiparameter persistence modules is given in [85], and the question of how to define one is mentioned as a direction for future work. Previous work by Scolamiero et al. introduced a similar (but not identical) axiomatic approach to defining distances on multiparameter persistence modules [82,Definition 8.6], though this work does not consider ℓ p -distances. The definition of generalized p-Wasserstein distance appearing in [21] also uses a construction very similar to [82,Definition 8.6].
Recent work of Giunti et al. introduces a general framework for defining metrics on multiparameter persistence modules which extends each of the approaches discussed above and appears to yield novel ℓ p -distances [57, Example 3.34, Remark 3.35, and Definition 4.9].
Another approach to defining ℓ p -type distances on multiparameter persistence modules is to vectorize the modules, i.e., to specify a map from modules into a vector space equipped with a p-norm; the induced metric on the vector space then pulls back to a distance on the modules. Several such maps have been proposed and applied to data. One very simple approach is to define the map in terms of the Hilbert function, i.e., the dimension of the vector space at each index [8,59]. Alternatively, when working with homology in all degrees, one can instead take the Euler characteristic at each index [6]. As these approaches do not depend on the internal linear maps in the persistence modules, they are rather coarse. Other vectorizations have been proposed that do depend on the internal maps: Vipond's multiparameter persistence landscapes [89,90] and Carrière and Blumberg's multiparameter persistence images [28] consider the internal maps of the module only along a fixed direction, whereas the kernel construction of Corbet et al. [43] considers the internal maps in all directions.
Stability results have been given for each of the last three approaches, though these have some key limitations. To elaborate, [89] shows that the multiparameter persistence landscapes are 1-Lipschitz stable with respect to the interleaving distance on modules and the L ∞ -distance on landscapes. On the other hand, [85,Section 7.2] observes that for all p ∈ [1, ∞), 1-parameter persistence landscapes are not Hölder continuous with respect to the Wasserstein distance W p on barcodes and the L p -distance on landscapes. A stability bound for the kernel construction of [43] is given with respect to the matching distance on modules and the L 2 -distance, but the bound involves a constant which depends on the size of the input and can be quite large. While the multiparameter persistence images of [28] are unstable in general, the authors give a partial stability result under special conditions [28,Supplementary Material].
A potential practical issue with some of the distances described above, e.g., the L p -distance on Hilbert functions, is that they are often infinite on modules with unbounded support. In an effort to address this issue, Miller and Thomas [74] use primary decomposition to construct modified distances which are finite on a larger class of persistence modules, including all finitely generated modules. A preliminary exposition of the ideas appears in Thomas's Ph.D. thesis [87, §4.3].
1.6. Applications. We describe three potential directions for applications of the distances introduced in this work. A first natural direction is to use d p I to formulate stability and inference results for multiparameter persistent homology. Some such results for the multicover bifiltration will appear in a companion paper [13], and are outlined in Section 1.7 below.
Second, we imagine that the p-matching distance for small p could be used in practical applications of multiparameter persistent homology, in much the same way that the p-Wasserstein distance on barcodes has been used. To elaborate, while d M is a natural candidate for such practical use, it inherits some features of d ∞ W that may be undesirable, namely, sensitivity to outlying algebraic structure and insensitivity to small features. Thus, in analogy with the 1-parameter case, where d 1 W or d 2 W is often preferred over d ∞ W , one imagines that a variant of d M generalizing d p W might perform better in applications than d M . One example of a potential application along these lines is the virtual screening problem in computational chemistry, i.e., the problem of identifying drug candidates from a large database of ligands. The matching distance on 2-parameter persistence modules has been applied to this problem in [59]. (Persistent homology has also been applied to the problem in [23].) We hypothesize that d 1 M and d 2 M would outperform d M in this application, though we do not explore this, or any other application to real data, in the present paper.
A third potential direction, also not explored here, is to use the distance d p I (particularly, d 1 I ) to formulate stability results for vectorizations of multiparameter persistence modules. As mentioned in Section 1.1, in the 1-parameter case there exist several such stability results for the 1-Wasserstein distance on barcodes. One might hope that some of these extend to the multiparameter setting.
1.7. ℓ p -Stability of Multicover Persistence. As mentioned above, in [13] we apply the distances d p I and d p M to study the stability and continuity of the multicover persistent homology of point cloud data. To motivate the results of this paper, we briefly describe this application.
Multicover persistent homology is a natural 2-parameter extension of the standard union-of-balls construction of persistent homology [35,83]. It takes into account the number of times a point is covered by a ball, and thus is density-sensitive in a way that the standard construction is not. Recent work has shown that multicover persistent homology is computable, at least for modestly sized data embedded in a low-dimensional Euclidean space [44,51].
It was also recently shown that multicover persistent homology satisfies a 1-Lipschitz stability property closely analogous to the usual stability results for persistent homology [15]. The property is stated in terms of the Prohorov distance π, a classical distance on probability measures, and the interleaving distance d I . The result tells us in particular that, unlike the 1-parameter union-of-balls persistence, multicover persistence is robust to outliers. However, because this stability result is formulated in terms of d I , it is rather coarse, in a sense: According to the characterization of d I provided by Theorem 1.7 (i), d I can be defined in terms of a sup-norm on the grades of generators and relations, and thus is sensitive only to the largest perturbation of such grades, not to the number of perturbations. Thus, the stability result of [15] says nothing about how many small-scale algebraic changes to the persistent homology can result from a small change to the data. In fact, a computational example in [44] indicates that the addition of even a few random outliers to a data set can generate quite a lot of small-scale algebraic noise in the multicover persistent homology. This raises the question of whether, by using different metrics on persistence modules, we can develop a more nuanced picture of the stability of multicover persistence, in which such noise is visible.
In answer to this, we show in [13] that when formulated using our distances d p I and d p M for p < ∞, the stability story for multicover persistent homology looks rather different than it does for d I = d ∞ I . Specifically, we show that for each p ∈ [1, ∞) and m ≥ 2p, multicover persistent homology of points in R m is discontinuous with respect to π and d p I . However, if we restrict to point clouds of bounded cardinality, then multicover persistent homology is Lipschitz continuous for all p ∈ [1, ∞). The same results also hold using d p M in place of d p I , or using the Wasserstein distance on probability measures in place of π.
Acknowledgements. We thank Andrew Blumberg for valuable discussions about matters related to this work and the companion work [13]. We also thank Barbara Giunti and Nina Otter for sharing insights into related work on metrics for generalized persistence modules, and we thank Ezra Miller for helpful feedback on a draft of this paper. The first author is supported by the Austrian Science Fund (FWF) grant number P 33765-N.
2.1. Persistence Modules.
A category C is said to be thin if for every pair of objects a, b in C, Hom(a, b) contains at most one morphism. Any poset (X, ≤) can be regarded as a thin category with object set X, by taking Hom(a, b) to be non-empty if and only if a ≤ b.
For C a small, thin category, we define a C-persistence module to be a functor M : C → Vect, where Vect denotes the category of k-vector spaces over some fixed field k. Thus, M consists of a vector space M a for each object a of C, and a linear map M a→b for each morphism a → b in C, such that M a→a = Id Ma and M b→c • M a→b = M a→c . When the category C is understood, we often omit C and simply call M a "persistence module" or even just a "module". If dim(M a ) < ∞ for all a, then we say that M is pointwise finite dimensional, or p.f.d.
A morphism γ : M → N of C-persistence modules is a natural transformation, i.e., a choice of a linear map γ a : M a → N a for each object a of C such that for each morphism a → b in C, γ b • M a→b = N a→b • γ a . With this definition of morphism, the C-persistence modules form an abelian category Vect C . Hence, many standard constructions of abstract algebra are well-defined in Vect C , e.g., submodules, quotients, kernels, images, and direct sums. The definitions of these are given objectwise. For example, the direct sum M ⊕ N of persistence modules M and N is given by (M ⊕ N ) a = M a ⊕ N a and (M ⊕ N ) a→b = M a→b ⊕ N a→b . Similarly, the kernel of a morphism γ : M → N of persistence modules, denoted ker γ, is given by (ker γ) a = ker(γ a : M a → N a ), with the internal maps taken to be the restriction of those in M .
In this paper, we are interested primarily in R n -persistence modules, where R n is given the partial order of Section 1.2. We sometimes call these n-parameter persistence modules. Interleavings, defined in Section 2.3 below, are a second example of persistence modules that we will want to consider. Definition 2.1. An interval in R is a non-empty connected subset. For I ⊂ R an interval, define the interval module k I to be the 1-parameter persistence module such that (k I ) a = k if a ∈ I and (k I ) a = 0 otherwise, with (k I ) a→b the identity whenever a ≤ b both lie in I, and the zero map otherwise. In general, we call a multiset B of intervals in R a barcode.
2.2. Free Modules and Presentations.
Free R n -persistence modules arise frequently in TDA. Their definition in fact extends immediately to a definition of free C-persistence modules for any small category C; for example, see [86,Section 4]. We will only need to consider the case where the indexing category is a poset, so we give the definition just in this case.
For X a poset and x ∈ X, let Q x denote the X-persistence module given by (Q x ) a = k if a ≥ x and (Q x ) a = 0 otherwise, with all internal maps between non-zero vector spaces equal to the identity. We say an X-persistence module is free if it is isomorphic to a direct sum of modules of the form Q x . For a ∈ X and v ∈ M a , we write gr(v) = a. We say that a set S ⊂ ⊔ a∈X M a generates M if the only submodule of M containing S is M itself. Definition 2.4. A basis of a free X-persistence module F is a minimal set of generators for F . Remark 2.5. Alternatively, one can give equivalent definitions of free modules and bases in terms of a universal property [26,86].
Given a free X-persistence module F and a basis B for F , write B a := {b ∈ B | gr(b) = a}. Proposition 2.6. For any free X-persistence module F and a ∈ X, the cardinality of B a is the same for all bases B of F .
Proof. Let V ⊂ F a denote the subspace spanned by the images of the maps F b→a for all b < a. It is easily checked that B a descends to a basis for the vector space F a /V . Since all bases of a vector space have the same cardinality, the result follows.
Matrix Representation of Morphisms of Free Modules. Let B = (b 1 , . . . , b |B| ) be an ordered basis of a finitely generated free X-persistence module F , and let v ∈ ⊔ a∈X F a . Then v can be written uniquely as a linear combination of (the images of) the elements of B with coefficients in k; let [v] B ∈ k |B| denote the vector of these coefficients. Thus, [v] B records the field coefficients in the linear combination of B giving v.
Along similar lines, for a finite ordered basis B ′ = (b ′ 1 , . . . , b ′ |B ′ | ) of a free persistence module F ′ , we represent a morphism γ : F → F ′ as a matrix [γ] B ′ ,B with coefficients in the field k, with each row and column labeled by an element of X, as follows: • The j th column of the underlying matrix is [γ(b j )] B ′ , • The label of the j th column is gr(b j ), • The label of the i th row is gr(b ′ i ). Where no confusion is likely, we sometimes write [γ] B ′ ,B simply as [γ]. In the literature on multigraded commutative algebra, [γ] is typically called a monomial matrix [71,73], though we will not use this term. Presentations. A presentation of an X-persistence module M is a morphism of free X-persistence modules γ : F → G such that coker γ ∼ = M . In view of the above, if F and G are finitely generated, then we can represent γ with respect to ordered bases of F and G as a matrix [γ] with coefficients in k, together with an X-valued label for each row and each column. We call the labeled matrix [γ] a presentation matrix for M , or simply a presentation (abusing terminology slightly). It is easily checked that [γ] encodes γ up to natural isomorphism; that is, given [γ], we can construct a morphism of free modules γ ′ : F ′ → G ′ isomorphic to γ, and hence with coker γ ′ ∼ = M . If there exists a presentation γ : F → G of M with F and G finitely generated, then we say M is finitely presented.
Example 2.7. A module F is free if and only if 0 → F is a presentation of F . Choosing a basis of F , we get a presentation matrix that has no columns and therefore contains no data except a vector of X-valued row labels. This vector records the grades of the basis elements.
2.3. Interleavings. For δ ≥ 0, let δ := (δ, . . . , δ) ∈ R n . Define the δ-interleaving category I δ to be the thin category with object set R n × {0, 1} and a morphism (a, i) → (b, j) if and only if either (1) a + δ ≤ b, or (2) i = j and a ≤ b. For i ∈ {0, 1}, let J i : R n → I δ denote the functor given by J i (a) = (a, i). A δ-interleaving between persistence modules M, N : R n → Vect is a functor Z : I δ → Vect with Z • J 0 = M and Z • J 1 = N . Thus, Z • J i is the restriction of Z to one of the two copies of R n .
Remark 2.11. Equivalently, a δ-interleaving can be defined as a certain pair of natural transformations, as for example in [20], and this viewpoint is a useful one. However, for the purposes of this paper, the definition we have given here is more convenient.
We define the interleaving distance d I between functors M, N : R n → Vect by d I (M, N ) = inf {δ | There exists a δ-interleaving between M and N}. Definition 2.12 (Extended Metrics and Pseudometrics). Given a set X in some Grothendieck universe, an extended pseudometric on X is a function d : X × X → [0, ∞] such that d(x, x) = 0, d(x, y) = d(y, x), and d(x, z) ≤ d(x, y) + d(y, z) for all x, y, z ∈ X. If in addition d(x, y) > 0 whenever x = y, then d is called an extended metric. (By convention, "extended" is understood in this context to mean that d can take value ∞.) As noted in the introduction, in this paper a distance is understood to be an extended pseudometric.
Remark 2.13. d I is an extended pseudometric on n-parameter persistence modules. Moreover, it is shown in [67] that d I descends to an extended metric on isomorphism classes of finitely presented modules.
2.4. The Wasserstein Distance. We next define the Wasserstein distance on barcodes. To keep notation simple, we give the definition only for finitely presented barcodes (Definition 2.8), as this is the only case we use in this paper, but the definition extends without difficulty to arbitrary barcodes.
A matching between sets S and T is a bijection σ : S ′ → T ′ , where S ′ ⊂ S and T ′ ⊂ T . Formally, we may regard σ as a subset of S × T , where (s, t) ∈ σ if and only if σ(s) = t. The definition of a matching extends readily to multisets.
In what follows, we will freely use the standard conventions for arithmetic over the extended reals R ∪ {−∞, ∞}. In addition, we adopt the convention that ∞ − ∞ = 0.
Recall from Definition 2.8 that each interval in a finitely presented barcode is of the form [a 1 , a 2 ) for a 1 < a 2 ∈ R ∪ {∞}; we will identify this interval with the point a = (a 1 , a 2 ). Definition 2.14 (Wasserstein Distance on Barcodes). Let B and C be finitely presented barcodes. Given p ∈ [1, ∞) and a matching σ between B and C, define cost(σ, p) := ( Σ (a,b)∈σ ∥a − b∥ p p + Σ c unmatched ( (c 2 − c 1 )/2 · 2 1/p ) p ) 1/p , where the second sum is over all intervals c = (c 1 , c 2 ) of B and C left unmatched by σ, and (c 2 − c 1 )/2 · 2 1/p is the ℓ p -distance from c to the diagonal in R 2 . Similarly, we define cost(σ, ∞) := max ( max (a,b)∈σ ∥a − b∥ ∞ , max c unmatched (c 2 − c 1 )/2 ). For p ∈ [1, ∞], the p-Wasserstein distance between B and C, denoted d p W (B, C), is the infimum of cost(σ, p) over all matchings σ between B and C. It is easily checked that d p W is an extended pseudometric. Our convention of taking the ℓ p -norm on R 2 follows [79]. However, the p = 1 case was considered earlier [27], as was the p = ∞ case (i.e., the bottleneck distance) [19,46]. As mentioned in the introduction, Cohen-Steiner et al. [41] considered the variant of d p W where in the definition of cost(σ, p), the ℓ p -norm on R 2 is replaced by the ℓ ∞ -norm. Similarly, Bubenik et al. [21] replaced the ℓ p -norm on R 2 with the ℓ 1 -norm.
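For intuition, the following brute-force Python sketch computes d p W for tiny finite barcodes (all intervals of finite length), following the convention of Definition 2.14: matched points pay the ℓ p -distance between them, and unmatched points pay their ℓ p -distance to the diagonal. This is purely illustrative (exponential in the barcode size); practical computations use the algorithms cited in Section 1.1.

```python
import itertools

def lp(u, v, p):
    d = [abs(a - b) for a, b in zip(u, v)]
    return max(d) if p == float("inf") else sum(x ** p for x in d) ** (1.0 / p)

def diag_dist(pt, p):
    # l_p distance from (b, d) to the diagonal {(x, x)}.
    r = (pt[1] - pt[0]) / 2.0
    return r if p == float("inf") else r * 2.0 ** (1.0 / p)

def wasserstein(B, C, p):
    # Pad each barcode with "diagonal slots" so that every point may be left
    # unmatched, then minimize the cost over all bijections (brute force).
    BB = list(B) + [None] * len(C)
    CC = list(C) + [None] * len(B)
    best = float("inf")
    for perm in itertools.permutations(range(len(CC))):
        costs = []
        for i, j in enumerate(perm):
            b, c = BB[i], CC[j]
            if b is None and c is None:
                costs.append(0.0)
            elif b is None:
                costs.append(diag_dist(c, p))
            elif c is None:
                costs.append(diag_dist(b, p))
            else:
                costs.append(lp(b, c, p))
        total = max(costs) if p == float("inf") else sum(x ** p for x in costs) ** (1.0 / p)
        best = min(best, total)
    return best

# One long bar versus the same bar plus three tiny ones: the 1-Wasserstein
# distance sees every tiny bar, while the bottleneck distance sees only one.
B = [(0.0, 10.0)]
C = [(0.0, 10.0), (1.0, 1.2), (2.0, 2.2), (5.0, 5.2)]
print(wasserstein(B, C, p=1))             # 0.6
print(wasserstein(B, C, p=float("inf")))  # 0.1
```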
We are now ready for a formal statement of the isometry theorem: Theorem 2.17 (Isometry Theorem [22,37,67]). For all p.f.d. R-persistence modules M and N , we have d I (M, N ) = d ∞ W (M, N ). In fact, it was shown by Chazal et al. that Theorem 2.17 holds for a more general class of modules, the q-tame modules [3,37], though defining barcodes in this generality requires some care [36]. As noted in Section 1.1, we also have d M ≤ d I ; for completeness, we recall the simple proof. Proof. It is easily checked that a δ-interleaving between M and N induces a δ-interleaving between M l and N l , for any admissible line l. The result now follows from the algebraic stability theorem.
2.6. Left Kan Extensions along Grid Functions. The material of this subsection will be used in the proofs of Theorem 1.7 (i) and Theorem 1.9 (i).
Definition 2.20 ( [16,68]). Let T = Z or T = R. A grid function is a function X : T n → R n of the form X (a 1 , . . . , a n ) = (X 1 (a 1 ), . . . , X n (a n )), where each X i : T → R is a functor (i.e. order-preserving function) such that lim t→−∞ X i (t) = −∞ and lim t→∞ X i (t) = ∞. If T = R, then we also require that each X i is left continuous.
Note that in the above definition, we do not assume the X i to be injective. For X a grid function, define X −1 : R n → T n by X −1 (a 1 , . . . , a n ) = (X −1 1 (a 1 ), · · · , X −1 n (a n )), where X −1 i (a) = max {t ∈ T | X i (t) ≤ a}. The left continuity assumption in the definition of a grid function ensures that these maxima exist.
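As a small illustration (not from the paper), a grid function X : Z → R restricted to a finite window can be stored as a sorted list of values, and the generalized inverse X −1 (a) = max{t | X (t) ≤ a} computed by binary search. The choice of window, and the handling of values of a below it, are assumptions of the sketch.

```python
import bisect

def grid_inverse(vals, t0, a):
    # vals[i] = X(t0 + i) for an order-preserving X on a finite window of Z.
    # Returns max{t in the window | X(t) <= a}, or None if X(t) > a throughout.
    k = bisect.bisect_right(vals, a)   # number of sampled values <= a
    return None if k == 0 else t0 + k - 1

vals = [-1.0, 0.0, 0.5, 2.0, 2.0, 3.5]   # X(-2), X(-1), ..., X(3)
print(grid_inverse(vals, -2, 2.0))   # 2, since X(2) = 2.0 <= 2.0 < X(3)
print(grid_inverse(vals, -2, -5.0))  # None: a lies below the sampled window
```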
Left Kan extensions along grid functions have an especially simple description. It will be convenient to take this description as a definition: Definition 2.21 (Left Kan Extension along a Grid Function). For any grid function X : T n → R n and category C, define a functor E X : C T n → C R n by E X (M ) a := M X −1 (a) and E X (M ) a→b := M X −1 (a)→X −1 (b) , where a ≤ b ∈ R n . Similarly, for any natural transformation γ of T n -persistence modules, define E X (γ) by E X (γ) a := γ X −1 (a) . It is readily checked that E X is indeed a functor. This functor (or any functor naturally isomorphic to it) is called the left Kan extension along X .
Remark 2.22.
It is straightforward to check that the functor E X of Definition 2.21 is indeed the standard left Kan extension functor along X , as defined, e.g., in [78, Proposition 6.1.5]. However, for our purposes, it will be sufficient to work directly with Definition 2.21, avoiding appeals to the standard definition.
We now fix a grid function X .
Lemma 2.23. If C = Vect, then E X is exact, i.e., given a short exact sequence of T n -persistence modules 0 → M ′ → M → M ′′ → 0, applying E X to the sequence yields an exact sequence 0 → E X (M ′ ) → E X (M ) → E X (M ′′ ) → 0. Proof. Exactness of a sequence of persistence modules is an objectwise property, so this is immediate from Definition 2.21.
Notation 2.24. For B a basis of a free T n -persistence module F , let E X (B) denote the set B, regarded as a subset of ⊔ a∈R n E X (F ) a . To see that the above definition in fact makes sense, note that if b ∈ B, then by Definition 2.21, b ∈ F gr(b) = E X (F ) X (gr(b)) .
Proposition 2.25. For any free T n -persistence module F with basis B, (i) E X (F ) is a free R n -persistence module, (ii) E X (B) is a basis of E X (F ), and each b ∈ E X (B) has grade X (gr(b)), (iii) for any morphism γ : F → G of finitely generated free T n -persistence modules with ordered bases B and C, the matrices [E X (γ)] E X (C),E X (B) and [γ] C,B have the same underlying matrix. Proof. It follows directly from Definition 2.21 that E X (Q x ) = Q X (x) for all x ∈ T n . Item (i) now follows. Items (ii) and (iii) follow easily from Definition 2.21.
Lemma 2.26. Any morphism γ : M → N of finitely presented R n -persistence modules is the left Kan extension of a morphism γ ′ : M ′ → N ′ of finitely generated Z n -persistence modules along a grid function X : Z n → R n .
Proof. Let P M and P N be presentation matrices for M and N , and let X be any injective grid function such that im X contains all row and column labels of both P M and P N . Let M ′ = M • X , N ′ = N • X , and let γ ′ : M ′ → N ′ be given by γ ′ t = γ X (t) . It is easily checked that M ′ and N ′ are finitely generated, and that γ is naturally isomorphic to E X (γ ′ ).
The p-Presentation Distance
Recall from Section 1.3 that for presentation matrices P M and P N with the same underlying matrix, d p (P M , P N ) denotes the ℓ p -distance between the labels of P M and P N ; writing L(P ) for the vector of row and column labels of a presentation matrix P , this is d p (P M , P N ) = ∥L(P M ) − L(P N )∥ p . For M and N finitely presented, we define d̂ p I (M, N ) to be the infimum of d p (P M , P N ) over all pairs (P M , P N ) of presentation matrices for M and N with the same underlying matrix; we write P M,N for the set of such pairs, and set d̂ p I (M, N ) = ∞ if P M,N is empty.
By Theorem 1.7 (i) and Remark 2.13, d ∞ I = d I descends to a metric on isomorphism classes of persistence modules. However, the following example shows that for p ∈ [1, ∞), d̂ p I does not satisfy the triangle inequality. To obtain the lower bound for d̂ p I (M, Q (r,r) ), let γ : F → G be a presentation of M , and let [γ] be its matrix representation with respect to a choice of ordered bases for F and G. Write the basis of G as (g 1 , . . . , g k ). Since M (0,−1) = k and M a = 0 for all a < (0, −1), G must have a basis element g i at (0, −1), meaning that the i th row label of [γ] is (0, −1). Moreover, since M (0,−1)→a is an isomorphism for all a ≥ (0, −1), we may choose g i so that To show the last inequality, we use the obvious presentations 0 → Q (0,0) and 0 → Q (r,r) . The d p -distance between these presentations is ∥(r, r) − (0, 0)∥ p = 2 1/p r, so d̂ p I (Q (0,0) , Q (r,r) ) ≤ 2 1/p r.
Though d̂ p I does not satisfy the triangle inequality, it generates an extended metric d p I on isomorphism classes of persistence modules: Definition 3.2 (Presentation Distance). For R n -persistence modules M and N and p ∈ [1, ∞], define d p I (M, N ) := inf { Σ l i=1 d̂ p I (Q i−1 , Q i ) }, where the infimum is taken over all finite sequences Q 0 , Q 1 , . . . , Q l of finitely presented persistence modules with Q 0 = M and Q l = N . We call d p I the p-presentation distance.
Proposition 3.3.
(i) d p I is a distance (i.e., extended pseudometric) on finitely presented R n -persistence modules. (ii) d p I is the largest distance on finitely presented R n -persistence modules that is bounded above by d̂ p I .
Proof. For (i), only the triangle inequality requires an argument. Given a third finitely presented module Q, δ > d p I (M, Q), and δ ′ > d p I (Q, N ), choose finite sequences of finitely presented modules from M to Q and from Q to N whose associated sums of d̂ p I -terms are less than δ and δ ′ , respectively. Concatenating the sums, we get that d p I (M, N ) ≤ δ + δ ′ . It follows that d p I satisfies the triangle inequality. For (ii), let d be a distance bounded above by d̂ p I . Then for any sequence Q 0 = M, Q 1 , . . . , Q l = N of finitely presented modules, d(M, N ) ≤ Σ l i=1 d(Q i−1 , Q i ) ≤ Σ l i=1 d̂ p I (Q i−1 , Q i ); taking the infimum over all such sequences gives d ≤ d p I . Theorem 3.7. For all finitely presented R n -persistence modules M and N , d̂ ∞ I (M, N ) = d I (M, N ). Granting Theorem 3.7, we obtain the following: on finitely presented R n -persistence modules, d ∞ I = d̂ ∞ I = d I . Proof. Since d I satisfies the triangle inequality and d̂ ∞ I = d I , we see that d̂ ∞ I also satisfies the triangle inequality. Proposition 3.3 (ii) then implies that d ∞ I = d̂ ∞ I = d I .
As noted in the introduction, the proof of Theorem 3.7 appearing in [67], while not difficult, is rather technical and unintuitive. Here, we present a more intuitive proof. The key idea behind the proof is Proposition 3.11 below, which will also be of use to us later, in the proof of Theorem 1.7 (iii). Definition 3.9. For J : X ֒→ Y an injection of posets, we say Y pushes onto im J if for any q ∈ Y , there is a unique minimal element p ∈ X such that q ≤ J (p). If Y pushes onto im J , then this defines a function push J : Y → X sending each q ∈ Y to such p.
In the special case that J is the inclusion of an affine line into R 2 , the maps push J were introduced in [68] and subsequently also used in [60,88]. Lemma 3.13. For δ > 0, suppose that Z : I δ → Vect is a finitely presented δ-interleaving between R n -persistence modules M and N , and P Z is a presentation matrix for Z. Then there exist presentation matrices P M and P N for M and N with the same underlying matrix as P Z , such that d ∞ (P M , P N ) ≤ δ.
Proof. By Example 3.10 and Proposition 3.11, P Z induces a presentation P M of Z • J 0 = M and a presentation P N of Z • J 1 = N , both of which have the same underlying matrix as P Z . For i ∈ {0, 1} and a ∈ R n , we have push J i (a, i) = a and push J 1−i (a, i) = a + δ. Hence, each label of P M differs from the corresponding label of P N by at most δ in each coordinate. It follows that d ∞ (P M , P N ) ≤ δ.
Lemma 3.14. If γ : M → N is a morphism of finitely presented R n -persistence modules, then ker γ is finitely presented.
Proof. The corresponding result for finitely generated Z n -persistence modules is standard: Z n -persistence modules are (up to isomorphism of categories) n-graded modules over the polynomial ring k[x 1 , . . . , x n ], and the result holds because this ring is Noetherian [52].
To obtain the result for R n -persistence modules, note that by Lemma 2.26, γ is the left Kan extension of a morphism γ ′ of finitely generated Z n -persistence modules along a grid function X : Z n → R n . By the corresponding result for the Z n -indexed case, ker γ ′ is finitely presented. Lemma 2.23 and Proposition 2.25 together imply that a finite presentation of ker γ ′ induces a finite presentation of E X (ker γ ′ ). But by Lemma 2.23, we have that E X (ker γ ′ ) ∼ = ker(γ), so ker(γ) is finitely presented.
Lemma 3.15. For all δ > 0, a δ-interleaving Z : I δ → Vect between finitely presented R n -persistence modules M and N is finitely presented.
Proof. Since M and N are finitely generated, Z is also finitely generated, i.e., there exists an epimorphism γ : F → Z, where F is a finitely generated, free I δ -persistence module. For i ∈ {0, 1}, F • J i is free and finitely generated and thus finitely presented. Additionally, Z • J i is isomorphic to either M or N , so is also finitely presented. Lemma 3.14 thus implies that ker(γ • J i ) is finitely presented, and hence finitely generated. But ker(γ • J i ) = ker(γ) • J i , so both ker(γ) • J 0 and ker(γ) • J 1 are finitely generated. This implies that ker γ is finitely generated, giving the result. Proof of Theorem 3.7. To show that d̂ ∞ I (M, N ) = d I (M, N ), it suffices to show that for any δ > 0, there exists a δ-interleaving between M and N if and only if there exist finite presentations P M and P N of M and N with the same underlying matrix such that d ∞ (P M , P N ) ≤ δ. The proof that if we have such presentations then we get a δ-interleaving is entirely straightforward; we omit the details. To prove the converse, note that since M and N are finitely presented, Lemma 3.15 implies that any δ-interleaving Z : I δ → Vect between M and N is finitely presented. The result now follows from Lemma 3.13.
Equality of d p I and d p W on 1-Parameter Persistence Modules. We next prove Theorem 1.7 (iv). Let us recall the statement: on finitely presented 1-parameter persistence modules, d p I = d p W = d p M . We first show that d p I (M, N ) ≤ d p W (M, N ). Let σ be a matching between B M and B N with cost(σ, p) = δ < ∞. Matching each unmatched interval to its nearest diagonal point, and adding the corresponding length-zero intervals to the other barcode, we may assume that σ is a bijection; write B M = {a 1 , . . . , a q } and B N = {b 1 , . . . , b q } with σ(a i ) = b i for all i. We construct a presentation P M of M as follows: Starting with the q × q identity matrix, for each a i = (a i 1 , a i 2 ) ∈ B M , we assign the label a i 1 to row i and the label a i 2 to column i. We then remove all of the columns with label ∞. It is clear that this does indeed give a presentation matrix for M .
We construct a presentation P N of N in the same way, using the b i in place of the a i . Note that the deleted columns are the same as for M , because the assumption that cost(σ, p) = δ < ∞ implies that σ matches each infinite length interval to an infinite length interval.
For p ∈ [1, ∞), we then have d p I (M, N ) ≤ d̂ p I (M, N ) ≤ d p (P M , P N ) = cost(σ, p) = δ, so δ ≥ d p I (M, N ). In the case that p = ∞, a similar sequence of equations shows that δ ≥ d ∞ I (M, N ).
We prepare for our proof that d p W (M, N ) ≤ d p I (M, N ) by recalling some standard facts about presentations of R-persistence modules. Let P be a presentation matrix for an R-persistence module Q. Let P i * and P * i denote the i th row and column of P , respectively, and let L(P ) i * , L(P ) * i denote their labels.
If L(P ) * i ≤ L(P ) * j , then adding a scalar multiple of P * i to P * j yields another presentation matrix for Q (with the same labels); indeed, in close analogy with ordinary linear algebra, such a column addition can be seen as representing a change of basis operation. Similarly, if L(P ) i * ≤ L(P ) j * , then adding a scalar multiple of P j * to P i * yields another presentation matrix for Q. We call such row and column operations admissible operations. Proposition 3.16. Given a presentation matrix P for an R-persistence module Q, (i) There exists a sequence of admissible row and column operations which transforms P into a presentation matrix R with at most one non-zero entry in each row and each column. (ii) B Q can be immediately read off R via the following formula: B Q = { [L(R) i * , L(R) * j ) | R ij ≠ 0 } ∪ { [L(R) i * , ∞) | row i of R is zero }, where intervals of the form [a, a) are discarded. One says that the presentation R of Proposition 3.16 (i) is in (graded Smith) normal form.
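The following Python sketch (illustrative only, with F 2 coefficients and ties between labels broken arbitrarily — both assumptions not fixed by the text) carries out the computation underlying Proposition 3.16 in the standard way: columns are reduced in order of increasing label using admissible column additions, and the barcode is read off from the resulting pairing, with a bar [row label, column label) for each pivot and a bar [row label, ∞) for each unpaired row.

```python
def barcode_from_presentation(row_labels, col_labels, cols):
    # cols[j]: set of row indices with non-zero entries in column j (F_2 coefficients).
    # Sort rows and columns by label, so every column addition below adds an
    # earlier-labeled column to a later-labeled one (an admissible operation).
    row_order = sorted(range(len(row_labels)), key=lambda i: row_labels[i])
    row_rank = {r: k for k, r in enumerate(row_order)}
    col_order = sorted(range(len(cols)), key=lambda j: col_labels[j])
    reduced, pivot_of_row, bars = [], {}, []
    for j in col_order:
        col = {row_rank[i] for i in cols[j]}
        while col:
            low = max(col)                        # entry with the largest row label
            if low in pivot_of_row:
                col ^= reduced[pivot_of_row[low]]  # admissible column addition (F_2)
            else:
                pivot_of_row[low] = len(reduced)
                birth = row_labels[row_order[low]]
                death = col_labels[j]
                if birth < death:                  # discard empty intervals [a, a)
                    bars.append((birth, death))
                break
        reduced.append(col)
    for k, i in enumerate(row_order):
        if k not in pivot_of_row:                  # unpaired rows give infinite bars
            bars.append((row_labels[i], float("inf")))
    return sorted(bars)

# Generators at grades 0 and 1; relations g0 + g1 = 0 at grade 2 and g0 = 0 at grade 3.
print(barcode_from_presentation([0.0, 1.0], [2.0, 3.0], [{0, 1}, {0}]))
# [(0.0, 3.0), (1.0, 2.0)]
```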
Definition 3.17. Given functions f, g : S → R, we will say that g is fcompatible if f (x) ≤ f (y) implies g(x) ≤ g(y) for all x, y ∈ S.
Note that if g is f -compatible, then f (x) < f (y) implies g(x) ≤ g(y), and f (x) = f (y) implies g(x) = g(y).
The following lemma is a straightforward consequence of the definitions.
Lemma 3.18. If P and P ′ are presentation matrices for R-persistence modules such that L(P ′ ) is L(P )-compatible, then a sequence of admissible row and column operations on P is also admissible on P ′ .
We are now ready to finish the proof of Theorem 1.7 (iv), i.e., to show that d p W (M, N ) ≤ d p I (M, N ). According to Proposition 3.3, d p I is the largest distance less than or equal to d̂ p I . Thus, since d p W is a distance, it is enough to show that d p W (M, N ) ≤ d̂ p I (M, N ). Suppose that P M and P N are finite presentations of modules M and N with the same underlying r × c matrix. For t ∈ [0, 1], let P t be the presentation with the same underlying matrix as P M and P N , and L(P t ) = (1 − t) L(P M ) + t · L(P N ).
Note that P 0 = P M and P 1 = P N . Let M t be a module presented by P t . There exists a finite set of real numbers 0 = t 0 < t 1 < t 2 < · · · < t w+1 = 1 such that for i ∈ {0, . . . , w} and s ∈ (t i , t i+1 ), both L(P t i ) and L(P t i+1 ) are L(P s )-compatible. Explicitly, regarding L(P t ) as a vector with r + c real entries, we may take {t 1 , . . . , t w } to be the set of points t ∈ (0, 1) such that there exist i, j ∈ {1, . . . , r + c} and t ′ ∈ [0, 1] with L(P t ) i = L(P t ) j and L(P t ′ ) i ≠ L(P t ′ ) j . Informally, this is the set of points where the order of the labels changes as t increases.
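The interpolation step above is easy to make concrete. The following sketch (an illustration, not the paper's code) computes, for 1-parameter presentations with real-valued label vectors L(P M ) and L(P N ), the set of times t ∈ (0, 1) at which two linearly interpolated labels coincide without being identically equal — a superset of the genuine order changes, which serves equally well as the cut points t 1 , . . . , t w .

```python
def crossing_times(L0, L1):
    # L0, L1: lists of real-valued labels of P_M and P_N, in the same order.
    # Returns the t in (0, 1) where two interpolated labels
    # (1-t)*L0[i] + t*L1[i] and (1-t)*L0[j] + t*L1[j] cross or touch.
    times = set()
    m = len(L0)
    for i in range(m):
        for j in range(i + 1, m):
            d0 = L0[i] - L0[j]
            d1 = L1[i] - L1[j]
            if d0 != d1:               # otherwise the difference is constant in t
                t = d0 / (d0 - d1)     # solves (1-t)*d0 + t*d1 = 0
                if 0.0 < t < 1.0:
                    times.add(t)
    return sorted(times)

print(crossing_times([0.0, 3.0, 4.0], [0.0, 2.0, 1.0]))  # [0.5]
```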
By Proposition 3.16 (i), there exists a sequence of admissible row and column operations putting P s into Smith normal form, and by Lemma 3.18 this sequence is also admissible for both P t i and P t i+1 . In particular, this gives normal form presentations R 0 and R 1 for M t i and M t i+1 , respectively, with the same underlying matrix R and By Proposition 3.16 (ii), this pair of presentations induces a matching σ i : To prove the claim, let z ∈ {0, 1}. Note that by Proposition 3.16 (ii), each a = (a 1 , a 2 ) ∈ B Mt i+z with a 2 < ∞ is indexed by a row-column pair (r a , c a ) of R such that a 1 = L(R z ) ra and a 2 = L(R z ) ca , where have simplified notation by letting L(R z ) ra = L(R z ) ra * and L(R z ) ca = L(R z ) * ca . Similarly, each a = (a 1 , ∞) ∈ B Mt i+z is indexed by a zero row r a of R such that It follows that for each matched pair (a, b) ∈ σ i with a 2 , b 2 < ∞, If a ∈ B Mt z is not matched by σ i , then letting an easy calculation gives that Now observe that for all a, a ′ ∈ B Mt i+z , we have r a = r a ′ only if a = a ′ . Similarly, c a = c a ′ only if a = a ′ . This implies the second equality below: The claim now follows by taking p th roots.
In view of Eq. (3.2), we have Thus, where the second to last equality holds because the presentations P t i were constructed via linear interpolation on the labels. Since (P M , P N ) was an arbitrary pair in P M,N , we are done.
The p-Matching Distance
As noted in the introduction, the definition of the p-matching distance is given simply by replacing d ∞ W with d p W in the definition of the matching distance: d p M (M, N ) := sup l d p W (M l , N l ), where l : R → R n ranges over all admissible lines (as defined in Section 2.5).
Using the triangle inequality for d p W , it is easy to check that d p M satisfies the triangle inequality, and hence is indeed a distance (i.e., an extended pseudometric). It is clear from the definition that d ∞ M = d M , which gives Theorem 1.7 (ii). The following lemma is the key step in the proof of Theorem 1.7 (iii): for any admissible line l and a, b ∈ R n , we have | push l (a) − push l (b)| ≤ ∥a − b∥ p . Proof. Since ∥a − b∥ ∞ ≤ ∥a − b∥ p , it suffices to prove the result in the case p = ∞. Let l be given by l(t) = t v + w, and let δ = ∥a − b∥ ∞ . Recalling our notation δ = (δ, δ, . . . , δ) ∈ R n from Section 2.3, note that a − b ≤ δ. Since 1 ≤ v, we have a ≤ b + δ ≤ b + δ v. Thus, l • push l (b) + δ v is a point on im l that is greater than or equal to a, so l • push l (a) ≤ l • push l (b) + δ v. The same argument with a and b switched shows that l • push l (b) ≤ l • push l (a) + δ v. It then follows from the definition of l that push l (a) ≤ push l (b) + δ and push l (b) ≤ push l (a) + δ. This gives | push l (a) − push l (b)| ≤ δ, as desired.
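To make the role of push l concrete: for a line l(t) = t v + w in R n whose direction v has strictly positive coordinates, the push of a point a onto the parameterization of the line is push l (a) = max i (a i − w i )/v i , the smallest t with l(t) ≥ a. The following sketch (an illustration under these assumptions, with v ≥ (1, . . . , 1) as in the proof above) computes push l and checks the 1-Lipschitz bound of the lemma on random inputs.

```python
import random

def push_line(a, v, w):
    # Smallest t with t*v + w >= a coordinate-wise (requires all v_i > 0).
    return max((ai - wi) / vi for ai, vi, wi in zip(a, v, w))

v, w = (1.0, 2.0), (0.5, -1.0)            # direction with v >= (1, 1)
rng = random.Random(0)
for _ in range(1000):
    a = (rng.uniform(-5, 5), rng.uniform(-5, 5))
    b = (rng.uniform(-5, 5), rng.uniform(-5, 5))
    lhs = abs(push_line(a, v, w) - push_line(b, v, w))
    rhs = max(abs(a[0] - b[0]), abs(a[1] - b[1]))   # ||a - b||_inf <= ||a - b||_p
    assert lhs <= rhs + 1e-12
print("1-Lipschitz bound verified on random samples")
```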
Proof of Theorem 1.7 (iii). Recall from Proposition 3.3 that d p I is the largest distance bounded above by d̂ p I . Since d p M is also a distance, it suffices to show that d p M (M, N ) ≤ d̂ p I (M, N ), i.e., that for any admissible line l, d p W (M l , N l ) ≤ d̂ p I (M, N ). By Proposition 3.11, any pair of presentation matrices P M and P N for M and N induces presentation matrices P M l , P N l for M l and N l , with L(P M l ) = push l • L(P M ) and L(P N l ) = push l • L(P N ). By the Lipschitz bound for push l established above, d p (P M l , P N l ) ≤ d p (P M , P N ), and hence d̂ p I (M l , N l ) ≤ d̂ p I (M, N ). Since M l and N l are 1-parameter persistence modules, Theorem 1.7 (iv) gives us that d p W (M l , N l ) = d p I (M l , N l ) ≤ d̂ p I (M l , N l ) ≤ d̂ p I (M, N ), completing the proof.
4.2. Computing the p-Matching Distance on Bipersistence Modules. Assume we are given presentations P M and P N for R 2 -persistence modules M and N , where the total number of generators and relations in both presentations is m. By translating both modules if necessary, we can assume without loss of generality that all grades of generators and relations for both modules lie in some bounded rectangle [0, C] × [0, C]. An algorithm of Biasotti et al. [9] computes, for any ǫ > 0, an estimate D of d M (M, N ) with |D − d M (M, N )| ≤ ǫ. In brief, the idea is to take D = max l∈L d ∞ W (M l , N l ), where L is a suitably chosen finite set of admissible lines; each of the bottleneck distances d ∞ W (M l , N l ) can be computed efficiently using a standard algorithm, e.g., the one of [61]. L is in fact chosen adaptively, using a quad tree construction. The guarantee that d M (M, N ) − ǫ ≤ D follows from the algebraic stability theorem.
More recent work of Kerber and Nigmetov [62] revisits these ideas, introducing several optimizations to the approach of [9]. This leads to a more efficient algorithm, with time complexity O m 3 C ǫ 2 . It turns out that, using Theorem 1.7 (iv), the algorithm of [62] extends quite readily to an approximation algorithm for d p M (M, N ), for any p ∈ [1, ∞], yielding the following result: Proof. We briefly outline the modifications to the approach of [62] which lead to this result. First, let us remark that while the exposition of [62] assumes that the input to the algorithm is a pair of bifiltrations, the authors note in the conclusion that their algorithm in fact applies to presentations in exactly the same way.
In [62], Kerber and Nigmetov in fact introduce and compare a few variants of their algorithm; for simplicity, we focus here on the variant which uses the so-called local linear bound. In this variant, at each rectangle B constructed in the quad-tree, quantities v(M, B) and v(N, B) are computed; we now define these quantities. Elements of B parameterize admissible lines; to keep notation simple, let us in fact identify each element of B with the line it parameterizes, and let l c ∈ B denote the center of B. Then, adapting the notation of [62] to the notation for presentations used in our paper, the definition is as follows: We define v (N, B) analogously.
To extend the algorithm of [62], just two simple changes are required: First, wherever a bottleneck distance is computed in the original algorithm, we instead compute a p-Wasserstein distance. Second, in the specification of the algorithm, we replace v(M, B) and v(N, B) with quantities v p (M, B) and v p (N, B), defined in the following way: Regarding L(P M ) as a vector (a 1 , . . . , a z ) ∈ (R 2 ) z , we let v p (M, B) = ∥(v(a 1 , B), v(a 2 , B), . . . , v(a z , B))∥ p . v p (N, B) is defined in the same way. Note that v ∞ (M, B) = v(M, B), so in the p = ∞ case, this modified algorithm is indeed the same as the one in [62].
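In code, the change from v(M, B) to v p (M, B) is a one-liner: instead of taking the maximum of the per-label bounds, combine them with a p-norm. A minimal Python sketch follows; the per-label bound v(a, B) is treated as a given subroutine from [62], whose implementation is not reproduced here.

```python
def v_p(labels, per_label_bound, box, p):
    # labels: the grades (points of R^2) of the generators and relations of M.
    # per_label_bound(a, box): the bound v(a, B) from the original algorithm.
    vals = [per_label_bound(a, box) for a in labels]
    if p == float("inf"):
        return max(vals, default=0.0)   # recovers the original v(M, B)
    return sum(x ** p for x in vals) ** (1.0 / p)
```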
The proof of correctness of the algorithm given in [62] extends to the case of arbitrary p with only one substantive change: Where [62] (implicitly) invokes the algebraic stability theorem, we need to instead apply the more general bound d p W ≤ d p I implied by Theorem 1.7 (iv). Using the fact that v p (M, B) ≤ m 1/p · v(M, B), the complexity analysis of [62] also extends readily to give our claimed runtime bound.
Universality of the Presentation Distance
In this section, we prove Theorem 1.9, our stability and universality result for the presentation distance on 1- and 2-parameter persistence modules. We also show that the constant n 1/p appearing in Theorem 1.9 (i) is tight. Let us recall the statement of the theorem, keeping in mind our convention 1/∞ = 0: Theorem 1.9. For any p ∈ [1, ∞] and n ∈ {1, 2}, (i) d p I is p-stable, and also p-stable across degrees with constant n 1/p , on finitely presented n-parameter persistence modules, (ii) if the field of coefficients k is prime and d is any p-stable distance on finitely presented n-parameter persistence modules, then d ≤ d p I . Most of our effort will go into proving Theorem 1.9 (i).
5.1. Proof of Stability. Our proof of Theorem 1.9 (i) will focus primarily on the n = 2 case, as a slight variant of our argument for this case also handles the n = 1 case; we discuss this variant at the end of the proof. As noted in the introduction, the n = 1 case also follows immediately from Skraba and Turner's result Theorem 1.5 and the inequality d p I ≤ d p W of Theorem 1.7 (iv).
We will need the following notation: Definition 5.1 (Chain Complex of a Monotone Function). For X a finite CW-complex, let X j denote the set of j-cells of X. For any n ≥ 0, a monotone function f : Cells(X) → R n has an associated chain complex of free n-parameter persistence modules C • (f ) : · · · → C 2 (f ) → C 1 (f ) → C 0 (f ), where C j (f ) := ⊕ σ∈X j Q f (σ) and each boundary map ∂ f j : C j (f ) → C j−1 (f ) is induced by the j th cellular boundary map of X.
We note that for all j ≥ 0, we have H j (C • (f )) = H j S(f ); that is, the j th homology module of the chain complex is equal to the composition of the sublevel filtration with the j th cellular homology functor.
We begin with a brief outline of the proof of Theorem 1.9 (i). Given functions f, g : Cells(X) → R 2 , each module ker ∂ f j and ker ∂ g j is free by a result in [32]. Using an interpolation argument similar to the one used in the proofs of Theorem 1.5 from [85] and Theorem 1.7 (iv), we observe that it suffices to consider the case where the sublevel filtration S(g) is a left Kan extension of S(f ) along a grid function X : R 2 → R 2 . In this case, a basis of ker ∂ f j induces a corresponding basis of ker ∂ g j . This gives us presentations for H j S(f ) and H j S(g) with the same underlying matrix. Using a Gröbner basis argument, we show that for each j-cell σ of X, changing the x-coordinate of f (σ) to that of g(σ) induces a corresponding change in the x-label of at most one row in the presentation. Moreover, distinct j-cells induce changes to distinct labels. The analogous statements with y replacing x are also true. Given this, the result follows readily.
Before proceeding with the details of the proof, we give an example which illustrates the result and illuminates parts of the proof. We will also use the same example in Proposition 5.13 to show that the constant n 1/p appearing in the statement of the theorem is tight.
Example 5.2. Let X be a cell complex with two 0-cells and three 1-cells, and assume that each 1-cell is attached to both 0-cells, as illustrated in Fig. 1 (A). Consider a function f : Cells(X) → R 2 where the 1-cells map to (1,4), (3,3) and (4,1), and both 0-cells map to (0, 0). Let σ denote the 1-cell mapping to (3,3), and let g : Cells(X) → R 2 be the function which is identical to f except that g(σ) = (2, 2). Both f and g are monotone. The two functions are shown in Fig. 1 (A). Fig. 1 (B) depicts H 1 S(f ) and H 0 S(f ). These are the kernel and cokernel, respectively, of the boundary map ∂ f 1 : C 1 (f ) → C 0 (f ). We see that H 1 S(f ) is a free module on two generators at (3,4) and (4,3), and that H 0 S(f ) is a direct sum of two indecomposables, one a free module on a single generator at (0, 0), and the other a module with a generator at (0, 0) and relations at (1,4), (3,3) and (4,1). Thus, the following is a presentation of H 0 S(f ): Similarly, Fig. 1 (C) depicts H 1 S(g) and H 0 S(g). We see that H 1 S(g) is a free module on two generators at (2,4) and (4,2), and that H 0 S(g) is a direct sum of two indecomposables, one a free module on a single generator at (0, 0), and the other a module with a generator at (0, 0) and relations at (1,4), (2,2) and (4,1). Thus, the following is a presentation of H 0 S(g): We see that for p ∈ [1, ∞], d p (P f , P g ) = (1 p + 1 p ) as guaranteed by Theorem 1.9 (i). Similarly, we see that and that as also guaranteed by the theorem. In the proof of Proposition 5.13, we will show that each of the inequalities above is in fact an equality. We see in this example that changing the value of f (σ) from (3, 3) to (2, 2) changes the grades of two generators of H 1 S(f ): The x-coordinate of one generator and y-coordinate of the other generator both change from 3 to 2. In addition, the grade of one relation of H 0 S(f ) changes from (3,3) to (2,2).
Kernels of Morphisms of Free Bipersistence Modules. Throughout this section, the term bipersistence module will refer to an R 2 -persistence module. Lemma 5.3 ([32]). If γ : F → G is a morphism of finitely generated free bipersistence modules, then ker γ is free.
Proof. The corresponding result for Z 2 -persistence modules is proven in [32]. To obtain the stated result, note that by Lemma 2.26, γ is the left Kan extension of a morphism γ ′ of finitely generated, free Z 2 -persistence modules along some grid function X . By the result for Z 2 -persistence modules, ker γ ′ is free. Lemma 2.23 then implies that ker γ = E X (ker γ ′ ), and so ker γ is free by Proposition 2.25 (i).
Recall from Section 2.2 that if B = (b 1 , . . . , b |B| ) is an ordered basis of a finitely generated free bipersistence module F and v ∈ a∈R 2 F a , then [v] B ∈ k |B| records the field coefficients in the unique expression of v as a linear combination of B. Let Proof. Let g = b∈B.c gr(b), and observe that gr(c) ≥ g. Consider the unique element c ′ ∈ F g such that c = F g→gr(c) (c ′ ). For any order on B, we have [c ′ ] B = [c] B , so c ′ ∈ ker γ by Lemma 5.4. Thus, since c is an element of a basis of ker γ and a basis of a free module is a minimal generating set, we must have that c = c ′ . This implies in particular that gr(c) = g, as desired.
For F a bipersistence module and v ∈ a∈R 2 F a , let us write the x-and y-coordinates of gr(v) as v x and v y , respectively. Lemma 5.6. Given a map γ : F → G of finitely generated free bipersistence modules, a basis B of F , and a basis C of ker γ, there exist injective maps j x , j y : C → B such that for all c ∈ C, c x = j x (c) x and c y = j y (c) y .
Our proof of Lemma 5.6 will make light use of the language of Gröbner bases, which we now review in our special case. Let F be a finitely generated, free bipersistence module and fix an ordered basis B of F . For v ∈ a∈R 2 F a , define the leading component of v to be max {b ∈ B.v}. For F ′ ⊂ F a free submodule of F , a basis C of F ′ is called a Gröbner basis (with respect to B) if no two distinct elements of C have the same leading component in F . It is easily checked that this definition agrees with the usual, more general definition of Gröbner basis, though we will not make use of that agreement.
We say B is colexicographically ordered if for Proposition 5.7 ([69, Remark 3.5]). If γ : F → G is a morphism of finitely generated, free bipersistence modules and B is a colexicographically ordered basis of F , then there exists a basis C of ker γ which is also a Gröbner basis with respect to B.
Proof. The corresponding result for Z 2 -persistence modules is proven algorithmically in [69]. Given this, the statement for R 2 -persistence modules follows readily by expressing γ as a left Kan extension along a grid function via Lemma 2.26, and then applying Lemma 2.23 and Proposition 2.25.
Proof of Lemma 5.6. By symmetry, it suffices to show that there exists a map j y as in the statement of the lemma. Given any ordered basis B of F and any basis C of ker γ, we define a map j y : C → B by The definition of j y is well formed, because it follows from Lemma 5.5 that the set over which we take the maximum is non-empty. By definition, c y = j y (c) y for all c ∈ C.
In general, j y is not necessarily injective. But if B is colexicographically ordered and C is a basis of ker γ which is also a Gröbner basis with respect to B (such C exists by Proposition 5.7), then j y is injective; indeed, the fact that C is a Gröbner basis implies that the leading components in F of the elements of C are unique, and by the colexicographic choice of ordering on B, the leading component b of each c ∈ C satisfies b y = c y .
This establishes that for any basis B of F , a map j y : C → B with the desired properties exists for one particular choice of basis C of ker f . But then by Proposition 2.6, the desired map exists for all choices of C, as claimed. Let C = (c 1 , c 2 ) be an ordered basis of ker ∂ 1 = H 1 S(f ) with gr(c 1 ) = (3,4) and gr(c 2 ) = (4, 3). Lemma 5.6 guarantees the existence of maps j x , j y : C → B preserving x-and y-coordinates, respectively. The only possibility is In this example, one can check that if we make a small change in the x-coordinate of an element b ∈ B, then this changes H 1 S(f ) only though a corresponding change in the x-coordinate of (j x ) −1 (b); the analogous statement for j y is also true. For example, if we change gr(b 2 ) x from 3 to 2, then gr(c 1 ) x changes from 3 to 2. And if we change gr(b 1 ) x from 1 to 0, this causes no change to H 1 S(f ), because (j x ) −1 (b 1 ) = ∅. This illustrates the role that the injections j x and j y play in the proof of Theorem 1.9 (i): In general, under suitable assumptions, a perturbation to the x-grade of a basis element b of C i (f ) causes only a corresponding perturbation to the x-grade of (j x ) −1 (b). Again, the same is true if we replace x with y. In what follows, we make this precise by using Kan extensions to encode the perturbations.
Lemma 5.9. Suppose we are given a morphism of finitely generated free bipersistence modules γ : F → G, a grid function X : R 2 → R 2 , and ordered bases B and C of F and ker γ, respectively. Let B ′ = E X (B). Then C induces an ordered basis C ′ of ker E X (f ) with |C ′ | = |C|, such that (i) the inclusions ker γ ֒→ F and ker E X (γ) ֒→ E X (F ) are represented by the same unlabeled matrix with respect to the bases B, B ′ , C, and Proof. By Lemma 2.23, ker E X (γ) = E X (ker γ). Thus, by Proposition 2.25 (ii), we may take C ′ = E X (C) (see Notation 2.24). Then (i) follows immediately from Proposition 2.25 (iii).
We now prove (ii). Let j_x, j_y : C → B be injections as in the statement of Lemma 5.6. For b ∈ B, let b′ denote the corresponding element of B′, and for c ∈ C, let c′ denote the corresponding element of C′. We claim that for all c ∈ C, c′_x = j_x(c)′_x and c′_y = j_y(c)′_y. We will check the first equality; the proof of the second one is the same.
For each c ∈ C, we have [c′]_{B′} = [c]_B, as the vectors are equal to the same column of the matrix in (i). Thus the basis elements of B′ appearing in c′ are exactly the elements b′ with b appearing in c. Since C and C′ are bases of ker γ and ker E_X(γ), it follows from Lemma 5.5 that c_x is the maximum of b_x over the elements b of B appearing in c, and similarly for c′_x. Since X_1 is order-preserving and b′_x = X_1(b_x) for all b ∈ B, the bijection B → B′ mapping b to b′ preserves the order of x-coordinates. Therefore c′_x = j_x(c)′_x, as claimed.
It now follows that for all c ∈ C, we have where the inequality follows from the injectivity of j x and j y .
Recall from Definition 3.17 that for functions f, g : S → R, we say g is f-compatible if f(x) ≤ f(y) implies g(x) ≤ g(y) for all x, y ∈ S. We extend this definition to functions f, g : S → R^2 coordinate-wise.

Proof of Theorem 1.9 (i). We first prove the result for the case n = 2, and then briefly discuss the easy adaptation of the proof to the case n = 1. Let X be a finite CW-complex and f, g : Cells(X) → R^2 be a pair of monotone functions. For t ∈ [0, 1], define h_t : Cells(X) → R^2 by linear interpolation between f and g, i.e., h_t(σ) = (1 − t) f(σ) + t g(σ) for all σ ∈ Cells(X). As in the interpolation argument in the proof of Theorem 1.7 (iv), there exists a finite set of real numbers 0 = t_0 < t_1 < t_2 < · · · < t_{w+1} = 1 such that for i ∈ {0, . . . , w} and any s_i ∈ (t_i, t_{i+1}), both h_{t_i} and h_{t_{i+1}} are h_{s_i}-compatible; we may take t_1, . . . , t_w to be the set of points t ∈ (0, 1) where the partial order of the values of h_t changes as t increases. Thus, by the triangle inequality for d^p_I, it suffices to prove the result in the special case that g is f-compatible.
Assuming that g is f-compatible, it is easy to check that there exists a grid function X : R^2 → R^2 such that S(g) = E_X(S(f)); indeed, we may take X to be a grid function sending each value of f to the corresponding value of g.

Finally, we consider the case n = 1. A simplification of the proof we have given shows that in this case, d^p_I is p-stable, and also p-stable across degrees with constant 2^{1/p}. Alternatively, these results can be shown via a reduction to the n = 2 case. To obtain the stronger result that d^p_I is p-stable across degrees with constant 1^{1/p} = 1, we need an additional idea, which we now outline.
Let f : Cells(X) → R be a monotone function. Carlsson and Zomorodian [92] have observed that a basis B_j of each chain module C_j(f) can be chosen such that, with respect to these bases, each matrix [∂^f_j] is in graded Smith normal form, i.e., [∂^f_j] has at most one non-zero entry in each row and each column. The elements of B_j corresponding to zero columns of [∂^f_j] then form a basis B^ker_j of ker ∂^f_j, and the submatrix of [∂^f_{j+1}] consisting of columns indexed by B_{j+1} \ B^ker_{j+1} and rows indexed by B^ker_j determines a presentation matrix P_j for H_j S(f). Note that for each j, the rows of P_j are indexed by B^ker_j and the columns of P_{j-1} are indexed by B_j \ B^ker_j. Thus, if one adapts our proof from the n = 2 case to the n = 1 case, using such presentations P_j throughout, then it is easy to see that a change to the function value of a j-simplex in Cells(X) can induce a corresponding change to the label of either a row of P_j or a column of P_{j-1}, but (in contrast to the 2-parameter case) not to both simultaneously. Using this observation, our argument above for the 2-parameter case then adapts easily to give the claimed bound.
5.2. Tightness of Theorem 1.9 (i). To establish our tightness result, we will need a generalization of the definition of δ-interleavings from Section 2.3: For v ∈ [0, ∞)^n, define the v-interleaving category I_v to be the thin category with object set R^n × {0, 1} and a morphism (a, i) → (b, j) if and only if either (1) a + v ≤ b, or (2) i = j and a ≤ b. One can check that if v = (v_1, v_2) and v_1 + v_2 < 2, then the map H_0 S(f)_{(2,2)−v → (2,2)+v} has rank two. Thus, if H_0 S(f) and H_0 S(g) are v-interleaved, then H_0 S(g)_{(2,2)} must have dimension at least two, which is false. Applying Lemma 5.12, we get d^p_I(H_i S(f), H_i S(g)) ≥ 2.
5.3. Proof of Universality. Our proof of Theorem 1.9 (ii) is similar to the proof of Theorem 1.4 (ii) given in [67], though simpler. As with that result (and most similar universality results in the TDA literature, e.g., [1, 2, 14, 47]), the argument depends on a lifting result, which we now state:

Lemma 5.14. Let M and N be finitely presented bipersistence modules with coefficients in a prime field k. Given presentation matrices P_M and P_N for M and N with the same unlabeled matrix, there exist a CW-complex X and functions f_M, f_N : Cells(X) → R^2 such that H_1 S(f_M) ≅ M and H_1 S(f_N) ≅ N.

Proof. Let T denote the unlabeled matrix underlying P_M and P_N, and assume that T has r rows and c columns. Let X be a CW-complex with a single vertex v, 1-cells σ_1, . . . , σ_r, and 2-cells τ_1, . . . , τ_c, with the 2-cells attached so as to satisfy the following conditions: If k is finite, then the degree-2 cellular boundary matrix ∂_2 is equal to T; and if k = Q, then ∂_2 is obtained from T by multiplying each column of T by an integer.

Proof of Theorem 1.9 (ii). We give the proof in the case n = 2; the proof when n = 1 is essentially the same. Let d be a p-stable distance on finitely presented bipersistence modules. We need to show that d ≤ d^p_I.

Remark 5.15 (Universality for Non-Prime Fields). As noted in the introduction, the condition that the underlying field k be prime is in fact unnecessary for the 1-parameter version of Theorem 1.9 (ii). In the 2-parameter case, the question of whether Theorem 1.9 (ii) holds for arbitrary fields k is an interesting open question. In fact, the analogous question about the interleaving distance on multiparameter persistence modules is also open [67].
Remark 5.16 (Stability and Universality for Three or More Parameters). Our proof of the stability result Theorem 1.9 (i) makes essential use of Lemma 5.3, which says that kernels of morphisms between finitely generated free bipersistence modules are free. Lemma 5.3 does not hold for 3-parameter persistence modules, and as such, we do not expect the 1-Lipschitz stability bound of Theorem 1.9 (i) to extend to n-parameter persistence modules for n ≥ 3. However, we conjecture that for any fixed n ≥ 3, a weaker Lipschitz stability bound does hold for n-parameter persistence modules. It is not clear whether one would have a universality result in the general setting.
Conclusion
In this work, we have introduced two multiparameter generalizations of the p-Wasserstein distance on barcodes, the p-presentation distance and the p-matching distance, which for p = ∞ are equal to the interleaving distance and the matching distance, respectively. We have shown that several fundamental properties of the Wasserstein, interleaving, and matching distances extend to our new distances. As explained in Section 1.7, in a forthcoming companion paper [13] we apply our distances to refine the stability theory for multicover persistent homology developed in [15].
Our work raises a number of mathematical questions that would be worth exploring in future work. We have already mentioned several of these in Conjecture 1.8, Remark 5.15, and Remark 5.16. We conclude by discussing three others.
First, one can define the interleaving distance not only on persistence modules, but also on (multi)filtered topological spaces. Modifying the definition slightly, one can also define a homotopy invariant version [14]. Both versions satisfy universal properties analogous to the one for modules [14,66]. Given our results, it is natural to wonder whether we can define ℓ p -type generalizations of these distances which satisfy similar universality properties.
Second, a natural direction would be to extend our 2-parameter stability result to the multicritical setting, where instead of considering sublevel filtrations of monotone functions on a CW-complex, we consider bifiltrations on a CW-complex in which a cell can be born at multiple incomparable grades. While many interesting cellular bifiltrations do arise as sublevel filtrations, e.g., the function-Rips bifiltrations [26], rhomboid bifiltrations [44], and subdivision bifiltrations [15, 83], others of interest in applications, such as the degree bifiltrations [15, 68], are multicritical.

Third, our main results are stated only for finitely presented modules. The problem of extending them to a larger class of modules, e.g., to p.f.d. modules, is interesting. To obtain such extensions, one would first need to extend the definition of the p-presentation distance, and the question of how to best do this seems to be a subtle one.
Wilson’s Disease in Brain Magnetic Resonance Spectroscopy
Wilson's disease (WD) is an inherited disorder of copper metabolism first defined by Dr Samuel Alexander Kinnier Wilson in 1912 (Wilson, 1912). WD is caused by mutations to the gene coding for ATPase copper transporting beta polypeptide (ATP7B), which is located on chromosome 13 and expressed in the liver (Bull et al., 1993; Loudianos et al., 1999; Yamaguchi et al., 1993; Frydman et al., 1985). The disease is inherited in an autosomal recessive manner.
Introduction
Wilson's disease (WD) is an inherited disorder of copper metabolism first defined by Dr Samuel Alexander Kinnier Wilson in 1912 (Wilson, 1912). WD is caused by mutations to the gene coding for ATPase copper transporting beta polypeptide (ATP7B), which is located on chromosome 13 and expressed in the liver (Bull et al., 1993; Loudianos et al., 1999; Yamaguchi et al., 1993; Frydman et al., 1985). The disease is inherited in an autosomal recessive manner.
WD is present worldwide in most populations, and particularly in those in which consanguineous marriage is common. The disease frequency is estimated to be between 1 in 5,000 and 1 in 30,000, and the heterozygous carriers (Hzc) frequency is approximately 1 in 90 (Figus et al., 1995).
In the ATP7B gene of WD a variety of mutations have been detected. These defects include insertion, deletion, splice site and point mutations. In most ethnic groups, one or a small number of these ATP7B gene mutations are predominant, in addition to many other rare mutations. In Europeans and North Americans, two ATP7B point mutations, His1069Gln and Gly1267Arg, account for 38% of mutations described in WD (Thomas et al., 1995). There is still no clear correlation between phenotype and genotype, but frameshift deletions and nonsense mutations that cause a truncation of the translated protein product usually result in a severe form of the disease because of loss of the functional protein. Knowledge of the prevalence of mutations is helpful in achieving rapid mutational screening. Mutations in ATP7B cause a reduction in the conversion of apoceruloplasmin into ceruloplasmin, which is therefore present at low levels in WD patients.
A failure to excrete copper into the biliary canaliculi leads to its toxic accumulation in the hepatocytes (Schilsky & Tavill, 2003). The copper excess may damage hepatic mitochondria, cause oxidative damage to hepatic cells and cause the spillage of copper into the blood, thereby overloading other organs such as the brain, kidney and red blood cells, initiating toxic damage (Schilsky & Tavill, 2003). In the early stages of WD, diffuse cytoplasmic copper accumulation can be seen only with special immunohistochemical stains for copper detection. The early accumulation of copper is associated with hepatic steatosis. The ultrastructural abnormalities range from enlargement and separation of the mitochondrial inner and outer membranes with widening of the intercristal spaces, to increases in the density and granularity of the matrix. During the disease progression, periportal inflammation, mononuclear cellular infiltration, lobular necrosis and bridging fibrosis occur. The areas of the brain mainly affected in WD are the basal ganglia (lenticular nuclei), which macroscopically appear brown in color because of copper deposition (Scheinberg & Sternlieb, 1984). In the early stages of WD, proliferation of large protoplasmic astrocytes such as Opalski cells and Alzheimer cells occurs. During the disease progression, degeneration occurs, leading to necrosis, gliosis and cystic changes. The degeneration can be seen in the brainstem, thalamus, cerebellum and cerebral cortex. During the WD progression, copper deposits can lead to vacuolar degeneration in proximal renal tubular cells and appearance of the 'Kayser-Fleischer' (KF) ring in Descemet's membrane. Copper from the degradation of hepatic cells can be released into the circulation and can damage red blood cells, thereby inducing hemolysis (Schilsky & Tavill, 2003).
The majority of patients with WD present with either hepatic or neuropsychiatric symptoms, and with either clinically asymptomatic or symptomatic liver involvement. The remaining patients may present with symptoms attributable to the involvement of other organs, such as acute non-immunological hemolytic anemia, osteoarthritis, arrhythmias, rheumatic-fever-like manifestations, renal function abnormalities, primary or secondary amenorrhea, and repeated and unexplained spontaneous abortions. Patients with hepatic WD (hWD) usually present in late childhood or adolescence, and exhibit features of acute hepatitis, fulminant hepatic failure, or progressive chronic liver disease in the form of either chronic active hepatitis or cirrhosis of the macronodular type (Schilsky & Tavill, 2003; Hoogenraad, 1997). The mean age of onset of neurological WD (nWD) is the second to third decade (Hoogenraad, 1997). Most patients present with extrapyramidal, cerebellar and cerebral-related symptoms. Tremor, gait and speech disturbances are the most common initial presentation symptoms. Patients may also present with dystonia. About 30% of patients experience psychiatric disturbances (Hoogenraad, 1997). These disturbances can manifest as changes in school-related or work-related performance, attention deficit hyperactivity disorder, impulsivity, paranoid psychosis, obsessive behavior, depression, suicidal tendencies or bizarre behavior, and can occur early or late in the disease course.
The WD diagnosis and monitoring are based on history, physical examination, biochemical tests, liver biopsy, genetic mutation analysis and imaging assessment. The KF ring is an important marker in the nWD examination. WD is biochemically characterized by low ceruloplasmin and total serum copper levels, increased 24-hour urinary copper excretion, and abnormally high hepatic copper content in the liver biopsy, which is performed particularly in children. Another noninvasive method for assessing copper metabolism is incorporation of radioactive copper into the hepatocytes, used with favorable results in Poland, especially in the diagnosis of presymptomatic gene mutation carriers, siblings of WD patients (Członkowska et al., 1973). Mutation screening to identify defects in the ATP7B gene can confirm the WD diagnosis, but will not necessarily detect all disease-producing mutations.
Magnetic resonance imaging
Neuroimaging plays an important role in the diagnosis and monitoring of WD patients. In brain CT scans, hypodensities and atrophy of the bilateral basal ganglia, brainstem, cerebellum and cerebral cortex can be seen (Hoogenraad, 1997). MRI is a much more sensitive method for revealing abnormalities in WD (Hoogenraad, 1997; Sinha et al., 2006). On T1-weighted images, generalized brain atrophy is seen in about 75% of cases, as well as hypointensities in the basal ganglia. On T2-weighted images, hyperintensity in the basal ganglia, white matter, thalamus or brainstem can be seen (Figure 1). These abnormalities are caused by neuronal loss, gliosis, degeneration of fibers, and vacuolization associated with increased water content in the brain. Signal abnormalities vary according to the stage of the disease, and can be reversible with therapy in the early stages. In some cases, T2-weighted images show hypointensity in the basal ganglia (globus pallidus) region, which may be a result of deposition of iron in exchange for copper after chelation (Figure 2).
In some patients with WD, MRI shows a typical pallidal hyperintensity on T1-weighted images (Cordoba et al., 2003) (Figure 3). This abnormality appears to be secondary to the accumulation of manganese in the basal ganglia because of portal-systemic shunting (Cordoba et al., 2003).
Fig. 2. Axial T2-weighted MRI image of a 47-year-old woman with WD treated for more than 10 years. The picture shows persistent hyperintensities in the putamen, caudate nucleus and globus pallidus, a degree of diffuse brain atrophy, and hypointensity in the basal ganglia.
Proton magnetic resonance spectroscopy (1H-MRS) is a practical research tool for elucidating the pathophysiology underlying certain diseases (Rudkin & Arnold, 1999). In patients with hepatic encephalopathy (HE), 1H-MRS has been used to detect metabolic abnormalities in the brain with very high sensitivity (Ross et al., 1999). The biochemical alterations that were detected in HE included an increase in cerebral glutamine compounds (Glx) and a decrease in myoinositol (mI) and choline (Cho) metabolites (Cordoba et al., 2002). Other metabolites detectable in vivo by 1H-MRS are N-acetylaspartate (NAA), considered to be a marker of neuronal health, and creatine (Cr), often used as an internal standard against which the resonance intensities of other metabolites are normalized. The role of NAA in the nervous system can be related to: action as an organic osmolyte that counters the "anion deficit" in neurons, or a cotransport substrate for a proposed "molecular water pump" that removes metabolic water from neurons; NAA is a precursor for the enzyme-mediated biosynthesis of the neuronal dipeptide N-acetylaspartylglutamate; and NAA provides a source of acetate for myelin lipid synthesis and is involved in energy metabolism in neuronal mitochondria, so that changes in NAA may reflect improvement of neuronal energetics (Moffett et al., 2007). 1H-MRS is a technique that could help distinguish between brain changes caused by HE and those related to copper toxicosis in WD. To elucidate the pathomechanism of the cerebral pathology of WD, a study using 1H-MRS in 37 newly diagnosed WD patients, with the globus pallidus and thalamus examined bilaterally, was performed (Tarnacka et al., 2009a). The calculations were performed for: mI, Cho, creatine (Cr), NAA, lipid (Lip), and Glx. In all WD patients, significantly decreased mI/Cr and NAA/Cr ratio levels and an increased Lip/Cr ratio in the pallidum were observed. Analysis revealed a significantly increased Glx/Cr and Lip/Cr ratio in the thalamus. In the pallidum of nWD patients, Cho/Cr, Glx/Cr and Lip/Cr ratios were higher than in control subjects, and the NAA/Cr was significantly lower (Table 1). In hWD patients, the mI/Cr, Cho/Cr and NAA/Cr ratio levels were lower than in controls (Table 1). The Cho/Cr and Lip/Cr ratios were higher in the thalami of nWD patients, and Lip/Cr ratios were higher than in controls in patients with hWD only. Figure 4 provides examples of 1H-MRS spectra in nWD, hWD and presymptomatic patients compared with control subjects. The proton spectrum shows expression of Lip; Cr was significantly reduced. The axial image shows the inset from where the spectrum was recorded.
In that study, the thalamic changes, compared with the basal ganglia, were more sensitive to ongoing degenerative changes and portal-systemic encephalopathy.
Spectroscopic studies in presymptomatic patients were performed on only 4 cases, because it was very difficult to find patients who did not have any clinical or biochemical (liver failure) abnormalities. In those patients, significantly lower levels of mI/Cr and Cho/Cr and an increase of Glx/Cr ratios were found in the basal ganglia. In the thalami no changes were seen. These findings suggest that in presymptomatic patients, early encephalopathic changes with a decrease of mI can be noticed in MRS.
The decreased Cho/Cr and mI/Cr in the pallidum of hWD patients could be indicative of minimal HE. Myoinositol is one of the chief compatible organic osmolytes responsible for equilibrating an increased intracellular tonicity (Cho is believed to be an osmolyte as well) (Danielsen & Ross, 1999). The mI/Cr and Cho/Cr reduction was also reported by Kraft in one de novo WD patient with hepatic disease (Kraft et al., 1999). In newly diagnosed WD patients, a reduction of NAA/Cr in the pallidum of hWD patients was noted. This could indicate that in these patients, 1H-MRS detects a combination of early encephalopathic and neurodegenerative changes, if NAA can be considered a neuronal marker.
Newly diagnosed WD patients with neurological impairment showed an increased level of Cho/Cr and Lip/Cr in the pallidum and thalamus; furthermore, Glx/Cr was increased and NAA/Cr was decreased in the pallidum. The Cho peak is considered a potential biomarker for the status of membrane phospholipid metabolism (Danielsen & Ross, 1999), so that an elevated Cho signal most likely reflects an increase in membrane turnover. In pathologies characterised by membrane breakdown, such as neurodegeneration, bound Cho moieties may be liberated into the free Cho pool (Danielsen & Ross, 1999). It is possible that the increase of Cho/Cr and Lip/Cr ratios in the pallidum and the thalami reflects an increase in membrane turnover caused by free copper accumulation, or can be associated with gliosis, because Cho is present in high concentrations in oligodendrocytes (Van der Hart et al., 2000). In nWD patients, the increased level of Glx/Cr in the pallidum can be related to the shunting of ammonia to the brain and glutamine accumulation. The elevation of Glx can also be related to neuronal energy impairment. Removal of glucose greatly accelerates glutamate transamination to aspartate in brain synaptosomes, suggesting that glutamate could be an energy source in the brain under some circumstances (Erecinska et al., 1988). Aspartate transaminase is the enzyme that accomplishes the task of glutamate transamination. It consumes oxaloacetate and glutamate to produce aspartate and alpha-ketoglutarate, which can directly enter the TCA cycle (Moffett et al., 2007). Most studies concerning the relationship between neurospectroscopic abnormalities and the neurological HE manifestation found an association between the presence of HE and the Glx/Cr level (Kreis et al., 1992; Cordoba et al., 2001). In nWD patients, no other metabolite changes specific for HE, such as mI/Cr and Cho/Cr reduction, were detected, so it is conceivable that, because of chronic liver failure, in nWD patients a compensatory mechanism to counteract intracellular hypertonicity can take place, with no mI depletion but with an increase of Glx. This finding could indicate that in newly diagnosed nWD patients an HE could exist. Patients with liver cirrhosis do present with parkinsonian signs, including tremor, bradykinesia, dysarthria, hypomimia, or rigidity. Some authors have speculated that the neurological symptoms of WD can be caused by concomitant liver disease (Victor, 1999).
These MRS findings suggest that in the brains of newly diagnosed nWD patients, portal-systemic shunting changes and a neurodegenerative pattern associated with Cho, Lip and NAA/Cr alterations can coexist. Verma compared brain metabolite alterations in patients with acute liver failure, acute-on-chronic liver failure and chronic liver disease (Verma et al., 2008). He found that the NAA/Cr ratio was significantly decreased in acute-on-chronic liver failure and chronic liver disease. This finding indicates that in chronic liver disease a mitochondrial dysfunction can be detected due to a neurotoxic effect. It is also worth mentioning that in patients with hepatic and neurological impairment in WD, a significant negative correlation between the clinical status and the NAA/Cr in the pallidum was detected.
From the 37 newly diagnosed patients, we followed 17 WD cases for a period of more than one year (Tarnacka et al., 2008). In the 1H-MRS done during the follow-up of all WD patients, significantly lower levels of mI/Cr and higher levels of Lip/Cr compared to the control group were persistently noted (Figure 5a). In patients with hepatic signs showing improvement, a statistically significant increase of mI/Cr and Glx/Cr in the follow-up 1H-MRS was observed (about one year post-treatment, Figure 5b). In patients with neurological improvement after treatment, a statistically significant increase of NAA/Cr was noted in the follow-up 1H-MRS (Figure 5c). During neurological deterioration in one case, a decrease of Glx/Cr and NAA/Cr was seen, in contrast to another neurologically impaired patient with liver failure exacerbation, where a decrease of mI/Cr and an increase of Glx/Cr were observed.
a. significantly lower than in controls (p<0.005); b. significantly higher than in controls (p<0.0001) and compared with the first study (p<0.005); c. significantly higher than in controls (p<0.005).
The alterations of the NAA/Cr ratio in neurologically impaired patients, and of mI/Cr and Glx/Cr in patients with liver failure, could be sensitive markers of clinical recovery and deterioration in these WD patients.
A decrease in relative NAA concentrations has been observed in pathological processes known to involve neuronal loss (Rudkin & Arnold, 1999). NAA is synthesized by neuronal mitochondria, which are sensitive to injury (Bates et al., 1996). A reversible decrease of NAA could well be caused by the mitochondrial toxin 3-nitropropionate (Dauntry et al., 2000). Mitochondrial proteins seem to be the target of copper toxicity, because a decrease in the levels of the subunits of some of the complexes of the respiratory chain occurs (complex I being the most affected) (Aricello et al., 2005). It is possible that in WD, functional changes in neurons affecting oxidative metabolism may result in the reversible changes in relative concentration of NAA reported in this study. These findings suggest that the early diagnosis and treatment of WD patients can reduce neuronal metabolic disturbances and avoid their degeneration process. This is very useful information, indicating that reversible functional changes of neurons can be detected in the brains of nWD patients whose clinical signs improve.
The MRS findings in WD patients treated for a long time, in the nWD and hWD subgroups with improvement or no improvement of clinical status, are provided in Table 2 (Tarnacka et al., 2010). In that study we investigated 4 hWD patients with no improvement and 8 with marked improvement, and 8 nWD patients with marked improvement and 7 with no improvement of clinical status. In hWD patients with improvement, the MRS did not show any important pathological changes, and in the nWD patients with improvement, significantly higher Cho/Cr, Glx/Cr and Lip/Cr ratio levels compared with controls were noted (Table 2a). In hWD patients with no improvement, lower Cho/Cr ratios were detected, and in nWD patients, significantly lower NAA/Cr and higher Cho/Cr and Lip/Cr ratios were detected (Table 2b).
In all WD patients who showed improvement after treatment, MRS revealed a higher level of Cho/Cr, Glx/Cr and Lip/Cr ratios. In our previous study, we demonstrated a decrease in the NAA/Cr ratio in all symptomatic patients, which increased after one year of treatment (Tarnacka et al., 2008). In patients treated for a longer time with improvement of clinical status, no NAA/Cr decrease was seen; this could suggest that in WD patients the NAA/Cr depletion is reversible during the first few years of treatment and that treatment with z-s or d-p can protect them from further WD progression. The Cho/Cr and Glx/Cr elevation may mirror the glial proliferation that can be detected in patients even after long-term treatment (Horoupian et al., 1988). In this study, the persistence of a lower NAA/Cr was related to a lack of improvement in neurological status (Tarnacka et al., 2010), which is in concordance with earlier reports in the literature showing a lower level of NAA in the striatum in neurologically impaired patients who received treatment (Page et al., 2004; Lucato et al., 2005). The lower level of NAA/Cr in the pallidum in these patients may suggest that, in patients with neurological involvement, persistent neuronal dysfunction may occur as a result of copper and/or iron deposition. nWD patients with no improvement initially had lower ceruloplasmin levels compared with those in whom neurological recovery was noted. It is possible that in these patients the iron metabolism could also be disturbed because of ceruloplasmin deficiency, which can be enhanced during chelation treatment (Medici et al., 2007).
MRS in heterozygous WD gene carriers
Wilson's disease is a genetic disease caused by mutation of both alleles of the "Wilson's disease" gene ATP7B. The number of heterozygotes (having only one faulty copy of the gene) is high, around 1-2% of the human population (Johnson, 2001). Heterozygote carriers should not have symptoms of hepatic or neurological dysfunction, but in other recessive disorders, such as phenylketonuria, some deviations in brain function have been reported in the literature (Vogel, 1984). Because 1H-MRS is able to detect brain abnormalities that are invisible in clinical assessment and MRI, a study using this method to investigate the probability of brain changes in Hzc was performed (Tarnacka et al., 2009b). We observed statistically significantly higher ratios of Glx/Cr and Lip/Cr in 1H-MRS in Hzc in the pallidum (Table 3). Glx is a glial marker; the increase of Glx/Cr could correlate with glial proliferation as a result of the protective function of the astrocytes against copper and iron accumulation. The significantly higher level of Lip/Cr may reflect the liberation of lipids from membranes during their breakdown as a reaction to copper and iron overload. Our results may instead suggest that WD Hz carriers accumulate free copper or iron or both in the basal ganglia and thalami. Hzc in our study were much older than WD cases are, and in an animal model of WD, copper and iron levels were found to be augmented during aging in the striatum and substantia nigra (Kim et al., 2001). It is possible that during aging in Hz there is an increase of copper and iron accumulation because of ATP7B and ceruloplasmin impairment. There is accumulating evidence that ceruloplasmin, a copper protein with ferroxidase activity, plays an important role in iron metabolism and is also the principal plasma copper-binding protein (Harris et al., 1998). In our Hz the ceruloplasmin level was lower in only four cases, but ceruloplasmin levels in the blood are influenced by acute-phase reactions. The copper accumulation in the brain can also be caused by disruption in its excretion. In animal models, heterozygous mice have a reduced ability to excrete copper, which may indicate that half of the normal liver ATP7B copper transporter activity is insufficient to deal with a large amount of copper intake (Cheach et al., 2007). Like iron, copper is a heavy metal that can induce the Fenton reaction via production of free radicals, and copper and iron accumulation, increased with aging, might have deleterious effects on the liver and brain via oxidative stress.
* significantly higher than in controls, p<0.00006; ** significantly higher than in controls, p<0.00002; ¥ significantly higher than in controls, p<0.0006; ¶ significantly higher than in controls, p<0.000001.
Table 3. The mean metabolite ratios in heterozygotes (Hz) and control subjects from left and right globus pallidus (Gp) and right and left thalamus (Th), with standard deviations.
MRS versus Wilson's disease treatment
The treatment of WD patients may take many forms, from drug therapy (chelating agents such as d-penicillamine (d-p) or trientine, zinc therapy, tetrathiomolybdate) to liver transplantation (LT). The aim of medical treatment for WD is to remove the toxic deposit of copper from the body to produce a negative copper balance, and to prevent its reaccumulation. Successful therapy is measured in terms of a restoration of normal levels of free serum copper and its excretion in the urine. In 9 patients treated for more than one year with d-p in the author's preliminary study, a statistically significant increase of Glx/Cr and NAA/Cr ratios was noticed, in contrast to 8 patients treated with zinc sulphate (z-s), where a decrease of mI/Cr and NAA/Cr was detected (Figure 6a,b). D-penicillamine seems to be more effective (increase of NAA/Cr) during one year of treatment, compared with z-s, but that study was performed on a quite small sample of patients. These findings can also confirm Brewer's observations that zinc compounds are slow-acting drugs, and the decrease of mI/Cr can reflect the metabolic deterioration during zinc therapy (Brewer, 2005).
Current therapeutic strategies are often effective in the treatment of WD and provide a negative copper balance; however, the course of WD in some treated patients may be unpredictable (Członkowska et al., 2005). The effectiveness of chelation or zinc therapy should therefore be monitored. So far, the only useful tests for monitoring treatment efficiency are laboratory and imaging methods. Biochemical tests used in monitoring WD treatment include plasma liver tests, ceruloplasmin, copper and zinc in serum, and urinary copper and zinc excretion tests. These methods may sometimes not be sufficient, in particular at the beginning of the treatment, when most treatment side-effects, such as iatrogenic worsening during d-p therapy, occur. At this stage we postulated that 1H-MRS can be used in the early stages of treatment monitoring, when the alterations of NAA, mI and Glx can mirror therapeutic response or deterioration (Tarnacka et al., 2008). We think that 1H-MRS can also be helpful in the differential diagnosis of clinical deterioration in patients with WD.
In patients with progressive liver failure or acute liver failure from fulminant hepatitis, with or without intravascular hemolysis, orthotopic hepatic transplantation is an efficient treatment (Hoogenraad, 1997). Hepatic transplantation is also indicated in the absence of liver failure in patients with neurological WD in whom chelation therapy has proved ineffective, and significant improvements in neurological features have been reported (Polson et al., 1987). We presented a study of spectroscopic changes in the globus pallidus in 3 patients with WD undergoing LT. The first case was a patient with neurological and hepatic impairment who was undergoing LT because of liver cirrhosis exacerbation. The second and third patients were subjects with hepatic signs only, and the LT was performed because of liver failure. In the patient with neurological signs, the MRS was performed 4 months after LT, because there was no neurological improvement, and again one year after that. In the first MRS in that patient, there was an increase in Glx/Cr, NAA/Cr and Lip/Cr ratio levels compared with the controls (Figure 7a). After one year's observation time, an improvement of clinical status was observed and no changes of metabolite ratios were seen (Figure 7a). Before LT, in the patient with liver cirrhosis the mI/Cr ratio level was very low, and in both patients with liver failure, decreased levels of Glx/Cr and NAA/Cr and increased Lip/Cr ratio levels compared with controls were found (Figure 7b,c). After LT in those patients, an increase of mI, Glx/Cr and NAA/Cr was seen (Figure 7b,c). These data indicate that after LT in these WD patients, a renormalization of the brain metabolite changes detected in 1H-MRS could be seen.
Fig. 7. The mean metabolite ratios in the globus pallidus in 3 patients undergoing liver transplantation: first patient (a), second (b), third (c); explanation in the text.
Conclusion
In conclusion, the results of the present studies appear to prove that in WD the astrocytic abnormalities occur primarily, with dysfunction of the astrocytic-neuronal interactions. This process leads to metabolic derangement in neurons. During the disease progression and increasing copper deposition in the brain, the compensation reactions, such as metallothionein synthesis, become inefficient, and the degenerative changes of astrocytes prevail over their detoxicative possibilities. Copper is liberated from damaged astrocytes and can also damage the neurons directly. Free copper can induce neuronal mitochondrial dysfunction, causing impaired NAA synthesis, but at this stage the destructive process can be reversible, because an increase of NAA/Cr can be observed in most treated patients. This is a very important conclusion, because it shows that the first years of treatment are pivotal in the therapeutic process. In the majority of patients at this stage, the central nervous system damage can be reversible. These findings emphasize the need for early diagnosis and treatment of WD. Effective antioxidant therapy introduced in the early stages of WD might have a modifying effect on metabolic disturbances in neurons. Neuronal dysfunction lasting longer can cause a non-reversible degenerative process, as in patients treated for a long time with no neurological improvement. Irreversible astrocytic dysfunction caused by copper can be related to disturbed ceruloplasmin synthesis causing iron overload, and can lead to a lack of neurological improvement in some WD patients.
The 1H-MRS technique can be useful in treatment monitoring in patients with WD in the early stages (up to one year). MRS seems to be a more effective method than MRI for monitoring the effectiveness of treatment during the first year. MRS could be helpful in the differential diagnosis of clinical deterioration in treated WD patients. The alterations of the NAA/Cr ratio in neurologically impaired patients, and of mI/Cr and Glx/Cr in patients with liver failure, could be sensitive markers of clinical recovery and deterioration in these WD patients. MR spectroscopy could be a useful complementary tool alongside other treatment monitoring methods in WD patients.
Acknowledgment
This work was supported by grant from the Polish Ministry of Education and Science (3P05B/119/23).
Link Reliability Based Greedy Perimeter Stateless Routing for Vehicular Ad Hoc Networks
We propose an enhancement for the well-known greedy perimeter stateless routing (GPSR) protocol for vehicular ad hoc networks (VANETs), which exploits information about link reliability when one-hop vehicles are chosen for forwarding a data packet. In the proposed modified routing scheme, a tagged vehicle will select its one-hop forwarding vehicle based on reliability of the corresponding communication link. We define link reliability as the probability that a direct link among a pair of neighbour vehicles will remain alive for a finite time interval. We present a model for computing link reliability and use this model for the design of reliability based GPSR. The proposed protocol ensures that links with reliability factor greater than a given threshold alone are selected when constructing a route from source to destination. The modified routing scheme shows significant improvement over the conventional GPSR protocol in terms of packet delivery ratio and throughput. We provide simulation results to justify the claim.
Introduction
Vehicular ad hoc networks (VANETs) are poised to be an integral part of intelligent transportation system (ITS) initiatives all over the world. Such intervehicle communication networks support two distinct communication scenarios: vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications [1][2][3][4]. The IEEE 802.11p is an approved amendment to the IEEE 802.11 standard for enabling vehicular communications [5]. It specifies the PHY and MAC protocols for wireless access in vehicular environment (WAVE), while higher layer protocols are based on IEEE 1609 standards [3, 5].
Ensuring reliable routing is a challenging task in VANETs since vehicles move with very high velocities, which results in dynamic network topology. The routes that are established between a source-destination pair may become invalid when at least one communication link along the route fails. The link lifetime is the time duration for which two vehicles are within the communication range of each other. In other words, it is the time period that starts when two vehicles move into the communication range of each other and that ends when they move out of their range (i.e., the signal-to-noise ratio perceived by the receiver vehicle becomes less than the minimum required). When a link on a routing path fails, network connectivity properties change rapidly. This results in temporary disruption of information flow and leads to initiation of yet another route discovery process. Route rediscovery is expensive in terms of required signaling and computation overheads. Hence, during the route discovery phase, it is very important and desirable for the routing algorithm to choose an optimal route connecting source and destination, consisting of the most reliable links in the network [6].
Greedy perimeter stateless routing (GPSR) [7] is a geographic routing protocol that relies on positions (coordinates) of the nodes and the destination address of the packet to make forwarding decisions in multihop wireless networks. In the GPSR protocol, greedy forwarding is employed to forward packets. Always, a node that is closer to the destination is selected as the forwarding node. When greedy forwarding fails, the GPSR algorithm will employ perimeter forwarding. Recently many variations of the conventional GPSR protocol have been proposed for VANETs [8][9][10]. Studies of GPSR conducted in [11] suggest that it suffers from many disadvantages, especially in VANETs. Due to rapidly changing network topology, a source vehicle may not receive updated position information from its neighbours periodically. Hence, it may make wrong forwarding decisions, resulting in failure of greedy forwarding. Perimeter mode forwarding can be used when greedy forwarding fails; however, it leads to a sharp increase in delay owing to the higher number of hops required to reach the destination.
In this paper, we propose a reliability based GPSR protocol (GPSR-R) for VANETs on highways. In the proposed routing protocol, a tagged vehicle will select its one-hop forwarding vehicle based on reliability of the corresponding communication link. To facilitate this, we use a metric known as link reliability, which is defined as the probability that a link will be alive for a finite time duration. The selection of forwarding nodes is executed based on this metric. Thus the proposed protocol ensures that links with reliability factor greater than a given threshold alone are selected when constructing a route from source to destination. Simulation results show that the modified scheme shows improvement over the conventional GPSR protocol. The major contributions of this paper are as follows.
(i) We propose a new analytical model for describing link reliability and derive an analytical expression for computing link reliability. The analysis takes into account the free flow uncongested traffic scenario and assumes the vehicle speed to have a uniform probability density function.
(ii) We modify the conventional GPSR protocol and design a reliability based GPSR algorithm. We perform detailed evaluation of the modified routing algorithm. Further, we compare the performance of GPSR-R against the conventional GPSR protocol and three representative reliable VANET routing protocols that exist in the literature and establish that GPSR-R provides more improvement in packet delivery ratio and network throughput.
The rest of this paper is organized as follows. Section 2 describes the related work. Section 3 describes the mathematical model for link reliability. Section 4 describes the reliability based GPSR protocol. The evaluation of the modified routing algorithm is presented in Section 5. The paper is concluded in Section 6.
Related Work
Several papers have addressed the design of reliable routing algorithms for mobile ad hoc networks (MANETs) [12][13][14][15]. Such designs are not applicable to VANETs because of the distinct mobility and topology characteristics of these networks.
Recently, several papers have appeared that deal with reliable routing in VANETs [16][17][18][19][20][21][22][23][24][25][26][27]. In [16], Taleb et al. describe a reliable routing protocol in which vehicles are grouped according to their velocity vectors and the routing algorithm dynamically searches for the most stable route that includes only hops from the same group. The performance of the algorithm depends on prediction of link failures prior to their occurrence. Wan et al. [17] propose a reliable routing protocol for V2I networks on rural highways based on prediction of link lifetime. Namboodiri and Gao [18] describe a routing algorithm that predicts how long a route will last and creates a new route before the failure of the existing route. In [19], Menouar et al. describe a routing algorithm that can predict the future coordinates of a vehicle and build new stable routes.
In [20], the same authors propose a movement prediction based routing (MOPR) in which each vehicle estimates the link stability, a measure of link lifetime, for each neighbouring vehicle before selecting the next hop for data forwarding. Authors of the abovementioned papers compute link lifetime by assuming both the intervehicle distance and the velocity to be deterministic quantities. However, as is widely known, both of these quantities are random variables. Sofra and Leung [21] propose an estimation method for link quality in terms of link residual lifetime. The same authors in [22] demonstrate that the estimation method proposed in [21] is capable of finding reliable routes in VANETs. However, calculation of residual lifetime requires removal of noise from the data and estimation of various parameters related to the model. In [23], the authors present a protocol called GPSR-L, an improved version of the GPSR protocol that takes into account the link lifetime for the selection of the next hop forwarding node. However, the authors present an oversimplified model for finding the link lifetime by assuming vehicle velocity to be a constant. In [24], Eiza et al. propose a reliable routing protocol known as AODV-R by incorporating a link reliability metric in the original AODV routing protocol. In [25], Niu et al. describe a QoS routing algorithm based on the AODV protocol and a criterion for link reliability. In [26], Yu et al. present a routing procedure, AODV-VANET, that uses vehicles' movement information in the route discovery process.
Notice that the link reliability model employed in [24, 25] does not consider the stochastic nature of the intervehicle distance. Further, several studies have reported that topology based routing schemes such as AODV perform badly in VANETs, as compared to geographic routing protocols [6].
In [27], Eiza and Ni propose a routing algorithm that exploits the evolving characteristics of VANETs on highways. Naumov and Gross in [28] propose connectivity aware routing (CAR) in VANETs, which adapts to current network conditions to find a route with sufficient connectivity, so as to maximize the chance of successful packet delivery. Related approaches are also described in [29]. Different from the aforementioned category of link stability-based routing protocols, where the principal objective is to find a reliable packet delivery route between the source and the destination nodes for improving the packet delivery ratio, trajectory-based routing [33][34][35][36] relies on the construction of a predefined trajectory between the source and the destination nodes based on the knowledge of the network topology. The source nodes are required to encode a geographical trajectory into the packet header and each intermediate node uses a geographical greedy forwarding strategy along the trajectory. However, encoding and storing of trajectory information can limit the protocol scalability, because, for a longer path, the required header size would be very large.
Knowledge of link lifetime and reliability is essential for the design of link reliability based routing protocols. Recently, there have been certain attempts to analyse the link duration and link reliability in VANETs [37][38][39][40][41][42]. In [37], Sun et al. propose an analytical model for the PDF of link lifetime by assuming equidistant nodes and vehicle speed as Gaussian. However, it may be noted that the intervehicle distance is, in general, a random variable. Yan and Olariu [38] investigate the PDF of the link lifetime in a VANET assuming (i) the PDF of intervehicle headway distance to be log-normal and (ii) the vehicle speed to be deterministic. Rudack et al. [39] present an analytical framework for single-hop link duration in VANETs. Wang [40] presents a simulation study of link duration, route lifetime, and route repair frequency in VANETs. Abboud and Zhuang [41] present a probabilistic analysis of the communication link in VANETs for three distinct ranges of vehicle density. In [42], Shelly and Babu present an analysis of link duration in VANETs for the free flow traffic state.
One of the major disadvantages of the GPSR protocol is that, while the sender routes the packet to the node closest to the destination node, the selected forwarding node can be at the edge of the sender's communication range, which can lead to packet loss [43]. In VANETs, the abovementioned problem can be quite severe due to the dynamic characteristics of the network topology. Hence, for a VANET scenario, the conventional GPSR protocol should be modified to ensure that link reliability is also considered when the next hop forwarding vehicle is chosen. In this paper, we, first of all, present an accurate model for link duration and link reliability in VANETs by considering the stochastic characteristics of the intervehicle distance and the vehicle speed. Contrary to the link stability model used in [20, 23, 32] for the selection of one-hop neighbour, the basic approach followed in our paper is that a vehicle initially finds a continuous time period (T_p) in which a currently available link to one of its neighbours will be available from a time t. The vehicle then finds the probability that the link would actually be available for the duration (t, t + T_p). In our proposed GPSR enhancement protocol, the neighbour vehicle that satisfies the link reliability criterion alone would be eligible for selection as a forwarding node. Accordingly, the proposed reliability based GPSR protocol ensures that the most reliable nodes are chosen for forwarding and for building a route from source to destination. We implement the protocol using NS2 and our extensive simulation results show that the proposed protocol outperforms the conventional GPSR protocol.
Analytical Model for Link Reliability in VANETs
We now describe a model for link reliability in VANETs.
System Model.
For the analysis of link reliability, we consider the free flow traffic state and assume the vehicle arrival process to be Poisson [44][45][46]. Accordingly, the intervehicle distances are i.i.d. exponential with parameter ρ [44, 45].
In the uncongested free flow traffic state, vehicles move independently of each other in the network. Further, the probability distribution of vehicle speed can be approximated to be uniform [46][47][48]. Let V be the random variable representing the vehicle speed. Assume V to be uniform in the interval (v_min, v_max). The PDF of V is then given by f_V(v) = 1/(v_max − v_min) for v_min ≤ v ≤ v_max. Now the cumulative distribution function (CDF) of intervehicle distance X is given by F_X(x) = 1 − e^(−ρx), x ≥ 0, with ρ = λ E[1/V]. Here E[⋅] is the expectation operator and λ represents the arrival rate. When the vehicle speed follows the uniform PDF, the average vehicle density is computed as follows:
ρ = λ ln(v_max/v_min)/(v_max − v_min). (2)
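To make the traffic model concrete, the following short Python sketch (an illustration of ours, not code from this paper) computes the average vehicle density for uniform speeds via ρ = λ E[1/V] and draws exponential intervehicle spacings; the numerical values are arbitrary examples.

import numpy as np

def average_density(arrival_rate, v_min, v_max):
    # rho = lambda * E[1/V]; for V ~ Uniform(v_min, v_max), E[1/V] = ln(v_max/v_min)/(v_max - v_min)
    return arrival_rate * np.log(v_max / v_min) / (v_max - v_min)

# Example values only: arrival rate 0.1 vehicles/s, speeds uniform on 20-30 m/s
rho = average_density(0.1, 20.0, 30.0)

# Intervehicle distances are i.i.d. exponential with parameter rho (mean 1/rho metres)
rng = np.random.default_rng(1)
spacings = rng.exponential(scale=1.0 / rho, size=5)
print(rho, spacings)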
Probability Distribution of Link Duration.
Here, we determine the probability distribution of link duration in VANETs. Consider the one-dimensional VANET forming a single-lane highway shown in Figure 1, where all the vehicles move in the same direction. All the vehicles on the highway have the same mean velocities, but they are permitted to move with variable instantaneous velocities. We assume a fixed transmission range (R meters) and a fixed transmission power for all the vehicles. Consider two vehicles i and j moving in the network as shown in Figure 1. Even though they have the same speed statistics, their instantaneous velocities are different. Let V_i, V_j, and V_r, respectively, be the random variables that represent the velocities of vehicle i, vehicle j, and the relative velocity between the given pair of vehicles in the network. Since V_r = V_i − V_j, the dynamic range of V_r is limited to (−V_δ, +V_δ), where V_δ = v_max − v_min. Further, the PDF of V_r, f_{V_r}(v_r), can be determined by using the principle of random variable transformation (proof given in Appendix A) as f_{V_r}(v_r) = (V_δ − |v_r|)/V_δ^2 for −V_δ ≤ v_r ≤ V_δ. Let T be the link duration, that is, the time duration for which the communication link between vehicles i and j is active. Now T is computed as T = X_d/|V_r|, where X_d is a random variable that represents the active distance over which vehicles i and j communicate. As described in Section 3.1, since the intervehicle distances are i.i.d. and exponential with parameter ρ = λ E[1/V], the PDF of X_d is given by f_{X_d}(x) = ρ e^(−ρx), x ≥ 0. Assuming that X_d and V_r are independent, the CDF of T, F_T(t), can be written as F_T(t) = P(X_d ≤ t|V_r|). Using the principle of random variable transformation, F_T(t) can be determined in terms of two quantities G_1(t) and G_2(t) (proof given in Appendix B), which involve the joint PDF f_{X_d,V_r}(x, v_r) of X_d and V_r. The PDF of T, f_T(t), is obtained by differentiating F_T(t) with respect to t; the two terms that define f_T(t) are computed in Appendix B. The average link lifetime is then computed as E[T] = ∫_0^∞ t f_T(t) dt. Notice that E[T] should be determined by a numerical integration procedure.
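As a numerical cross-check of the analytical expressions above, the sketch below (our illustration, under the same assumptions of exponential active distance with parameter ρ and independent uniform speeds) estimates the distribution of T = X_d/|V_r| by Monte Carlo; the reported median and tail probability are example summaries, not results from the paper.

import numpy as np

def simulate_link_durations(rho, v_min, v_max, n=200_000, seed=2):
    # Monte Carlo samples of the link duration T = X_d / |V_r|
    rng = np.random.default_rng(seed)
    x_d = rng.exponential(scale=1.0 / rho, size=n)                       # active distance ~ Exp(rho)
    v_r = rng.uniform(v_min, v_max, n) - rng.uniform(v_min, v_max, n)    # relative velocity
    keep = np.abs(v_r) > 1e-9                                            # discard (rare) zero relative speed
    return x_d[keep] / np.abs(v_r[keep])

durations = simulate_link_durations(rho=0.004, v_min=20.0, v_max=30.0)
print("median link duration (s):", np.median(durations))
print("P(T > 60 s):", np.mean(durations > 60.0))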
An Analytical Model for Link Reliability.
In this section, we use the expression for the link duration PDF obtained in the previous section to determine the link reliability. We follow the probabilistic link reliability model of [27], in which the link reliability for a link l at time t is defined as follows:
r_l(T_p) = P{link l is still available at time t + T_p | link l is available at time t}, (10)
where T_p is the duration for which the given link should be available for communication. Given the link duration PDF f_T(t), the link reliability is determined as follows (for detailed analytical expressions, refer to Appendix C):
r_l(T_p) = ∫_t^(t+T_p) f_T(τ) dτ for T_p > 0, and 0 otherwise. (11)
The link reliability defined above is a measure of stability of the link, and hence a vehicle can use this as a metric for choosing its forwarding node. The most reliable forwarding node, which satisfies the reliability requirements, should be selected by the source node. In the next section we discuss the design of a reliable routing protocol based on this criterion. Notice that computation of the link reliability probability according to (11) requires a vehicle to find a continuous time interval T_p (the duration for which the vehicle will be connected to its neighbour from a reference time t), by assuming that both vehicles associated with the link maintain their current velocity unchanged during T_p. The vehicle then finds the probability that the link will really last till t + T_p. It may be noted that the quantity T_p in (10) can be defined as the duration for which the communication link between a given pair of vehicles i and j is continuously available. To find T_p, we make the following assumptions: (i) during T_p, the vehicles associated with the link do not change their velocities, and (ii) the highway width is negligible compared to the vehicle's communication range. Now T_p is computed as follows. If V_j ≥ V_i, that is, when vehicle j approaches vehicle i from behind, T_p is calculated as T_p = (R + D)/|V_j − V_i|, and if V_i ≥ V_j, that is, when vehicle i moves forward in front of vehicle j, T_p is calculated as T_p = (R − D)/|V_i − V_j|, as shown in Figure 1. Here D is the Euclidean distance between the two nodes and is computed from their coordinates as D = sqrt((x_i − x_j)^2 + (y_i − y_j)^2). Further, we assume that all the vehicles possess the GPS facility to identify their location and velocity. Each node will receive the velocity and position information of its neighbour nodes from the modified beacon structure, which will be explained in Section 4.2. Once these values are obtained, the value of T_p can be computed, and the link reliability can be computed by (11).
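The computation of T_p and of a reliability value can be sketched as follows (again an illustration of ours; the variable names, the exceedance-probability reading of the reliability, and the example numbers are assumptions, and vehicle j is taken to be behind vehicle i, as in Figure 1).

import math

def t_p(pos_i, pos_j, v_i, v_j, comm_range):
    # continuous connection time T_p between vehicle i (ahead) and vehicle j (behind)
    d = math.hypot(pos_i[0] - pos_j[0], pos_i[1] - pos_j[1])   # Euclidean distance D
    dv = abs(v_i - v_j)
    if dv < 1e-9:
        return float("inf")            # equal speeds: link persists indefinitely in this model
    if v_j >= v_i:                     # j approaches i from behind
        return (comm_range + d) / dv
    return (comm_range - d) / dv       # i moves forward in front of j

def link_reliability(tp, duration_samples):
    # empirical probability that the sampled link duration is at least T_p
    return float((duration_samples >= tp).mean())

tp = t_p((120.0, 0.0), (80.0, 0.0), v_i=26.0, v_j=24.0, comm_range=250.0)
print("T_p (s):", tp)
# r = link_reliability(tp, durations)   # 'durations' as sampled in the previous sketch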
Reliability Based GPSR Protocol: GPSR-R
In this section, we first provide a brief overview of the conventional GPSR protocol and then describe the proposed reliability based GPSR protocol (GPSR-R).
4.1. GPSR: An Overview. Geographic greedy forwarding is one of the most promising routing approaches for VANETs [6]. GPSR [7] is a geographic routing protocol that relies on the location coordinates of the nodes and the destination address of the packet to find the next hop forwarding node. In GPSR, a packet is marked by its originator with the corresponding destination address. Assuming that the location coordinates are known, each node forwards the packet to the one-hop neighbour located closest to the destination. This is continued until the destination is reached. If such greedy forwarding is not possible, GPSR employs perimeter forwarding. The protocol assumes that all the nodes that participate in the data transfer process possess the GPS facility to identify their location coordinates. Nodes periodically exchange beacon messages among themselves that contain their ID (address) and their location coordinates.
In the context of one-dimensional VANETs, Figure 2 shows the greedy forwarding method while the perimeter forwarding strategy is shown in Figure 3.
However, in VANETs, GPSR suffers from the neighbour wireless link break problem [43]. Because of the dynamic network topology, a source vehicle may fail to receive updated position information from neighbours located at the edge of its communication range. Consequently, when the source vehicle uses greedy forwarding, there is a high probability that the selected one-hop forwarding vehicle has gone out of its range, even though this vehicle is still listed in the source vehicle's list of neighbours. Such wrong forwarding decisions lead to packet loss [43]. Hence, reliable one-hop neighbour nodes should be selected for greedy forwarding.
4.2. Design of Reliability Based GPSR-R Protocol.
As in GPSR, we assume that all the vehicles that participate in the data transfer process possess the GPS facility to identify their location coordinates. The vehicles periodically transmit beacon messages to all their one-hop neighbours. In the proposed protocol, we modify the GPSR beacon frame by adding the following fields: (i) speed, which contains the current velocity of the vehicle that generates the beacon; (ii) direction, which contains the direction of movement of the vehicle that generates the beacon. The modified beacon structure is shown in Figure 4. On receiving the beacons from its neighbours, a tagged vehicle learns the position of each neighbour as well as the velocity and direction with which it moves. Using these quantities and the results presented in Section 3, the tagged vehicle computes the reliability of the communication link formed with each of its neighbour nodes. The vehicle then forms the neighbour list by including all one-hop neighbours, their IDs, and the corresponding link reliability values. The tagged vehicle also sets a beacon timer for every vehicle in the neighbour list. Since the tagged vehicle receives beacon messages from its one-hop neighbours periodically, the neighbour list and the link reliability values are also updated periodically. If, at any point of time, the tagged vehicle does not receive a beacon message from a vehicle that is already included in the neighbour list, it assumes that this neighbour has gone out of its communication range and removes it from the list of neighbours.
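A minimal sketch of this neighbour bookkeeping is given below. The entry fields follow the modified beacon of Figure 4 (node address, coordinates, velocity, direction); the beacon interval value and the reliability_fn hook are placeholders, since the actual reliability computation comes from the model of Section 3.

```python
import time
from dataclasses import dataclass

BEACON_INTERVAL = 1.0                      # B, seconds (placeholder value)
NEIGHBOR_TIMEOUT = 4.5 * BEACON_INTERVAL   # entry expires if no beacon within 4.5B

@dataclass
class Neighbor:
    x: float
    y: float
    speed: float
    direction: int        # e.g. +1 / -1 along the highway
    reliability: float
    expires_at: float

class NeighborTable:
    """Per-vehicle neighbour list maintained from the modified beacons."""
    def __init__(self, reliability_fn):
        self.entries = {}                  # node address -> Neighbor
        self.reliability_fn = reliability_fn

    def on_beacon(self, addr, x, y, speed, direction, now=None):
        """Insert or refresh a neighbour entry and recompute its link reliability."""
        now = time.time() if now is None else now
        rel = self.reliability_fn(x, y, speed, direction)
        self.entries[addr] = Neighbor(x, y, speed, direction, rel,
                                      expires_at=now + NEIGHBOR_TIMEOUT)

    def purge_expired(self, now=None):
        """Drop neighbours whose beacon timer has expired."""
        now = time.time() if now is None else now
        self.entries = {a: n for a, n in self.entries.items()
                        if n.expires_at > now}
```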
Figure 4: Modified beacon frame (node address, X coordinate, Y coordinate, velocity, direction).

Whenever new vehicles enter the transmission range of the tagged vehicle, the neighbour list gets updated with the corresponding link reliability. Figure 5 shows how forwarding happens in the proposed reliability based GPSR protocol. Assume that the source vehicle has a data packet. In GPSR, the greedy forwarding algorithm would select the neighbour closest to the destination as the forwarding node. However, there is a high probability that this vehicle leaves the transmission range of the source vehicle even before it gets the data packet, which leads to packet loss. In the proposed scheme, a forwarding node is selected based on the reliability of the corresponding communication link. Given the set of vehicles that satisfies the requirement on link reliability, the vehicle in this set that is closest to the destination acts as the forwarding vehicle. Figure 6 shows the flowchart for the proposed reliability based GPSR. Upon receiving a data packet for forwarding, the tagged vehicle checks whether the packet is in greedy or in perimeter mode. If the packet is in greedy mode, the tagged vehicle searches its neighbour table to identify the set of vehicles that satisfies the link reliability criterion. The vehicle belonging to this set that is geographically closest to the packet's destination is selected as the forwarding node. When the set of neighbours that satisfies the link reliability criterion is empty, the tagged vehicle marks the packet to perimeter mode. For the performance evaluation of the proposed protocol, we keep the reliability threshold equal to 0.6. When the reliability threshold is too high, only a limited number of vehicles is available for forwarding, which increases the chances for the packet to enter perimeter forwarding mode and results in increased delay. Keeping very low values for the reliability threshold cannot significantly improve the protocol performance as compared to the conventional GPSR.
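The greedy step with the reliability filter could look like the following sketch, which reuses the hypothetical neighbour-table layout from the earlier sketch; the 0.6 threshold is the value quoted above, and the fallback to perimeter mode is only indicated by returning None.

```python
import math

RELIABILITY_THRESHOLD = 0.6    # value used in the performance evaluation above

def select_next_hop(neighbors, dest_pos):
    """GPSR-R greedy step: among neighbours that meet the link-reliability
    threshold, pick the one geographically closest to the destination.
    Returns None when the set is empty (the caller marks the packet as
    perimeter mode). `neighbors` maps address -> object with .x, .y,
    .reliability attributes, e.g. the NeighborTable entries sketched earlier."""
    candidates = [(addr, n) for addr, n in neighbors.items()
                  if n.reliability >= RELIABILITY_THRESHOLD]
    if not candidates:
        return None
    dx, dy = dest_pos
    return min(candidates,
               key=lambda item: math.hypot(item[1].x - dx, item[1].y - dy))[0]
```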
Simulation Results
In this section we present the results of our investigation. We evaluate the performance of the proposed routing protocol and compare it with that of the conventional GPSR protocol. We use Network Simulator 2.33 (NS2.33) to conduct the simulation experiments. Our simulation has two components: a mobility simulator and a wireless network simulator, which are connected by trace files that specify the vehicle mobility during the simulation. A realistic vehicular mobility scenario is generated by using MOVE (mobility model generator for vehicular networks) [49], which is built on top of SUMO (Simulation of Urban Mobility) [50], an open source microtraffic simulation package. We construct a simulation area that uses a 10 km long highway with vehicles moving in the same direction. As described in Section 3, in the free flow traffic state the vehicle speed and the traffic flow are independent, and hence there are no significant interactions between the individual vehicles. Each vehicle is assigned a random velocity chosen from a uniform distribution. In general, we select the vehicle velocity to be uniform over [36 kmph, 108 kmph] with average value 72 kmph. The mobility trace file from MOVE contains information about realistic vehicle movements (such as their location, speed, and direction), which can be fed into discrete event simulators for network simulation. We record the trace files corresponding to vehicle mobility from SUMO, convert these files to NS2-compatible files using MOVE, and use them for network simulation using NS2.33. Each node in the network simulation represents one vehicle of the mobility simulation, moving according to the movement history of the represented vehicle in the trace file. The IEEE 802.11 distributed coordination function is used as the MAC protocol. All the NS2 related settings are given in Table 1. For each simulation experiment, we perform ten runs to obtain the average results.
We assume that all the vehicles possess the GPS facility to identify their own location. As mentioned before, a tagged vehicle identifies the position of its neighbours through the exchange of one-hop beacon packets. In order to avoid synchronization of neighbour beacons, the beacons are transmitted at time intervals that are uniformly distributed over [0.5B, 1.5B], where B is the average interbeacon transmission time [7]. When a vehicle receives beacons from its neighbours, it sets the beacon timer for each of its neighbours so that a neighbour gets removed from the list when the corresponding beacon timer expires. In our experiment, we set the beacon timer equal to 4.5B [7]. If B is too small, the neighbour table will be accurate but the congestion in the network will be high. If B is too large, the accuracy of the neighbour positions in the table decreases. The correct value of B depends on the mobility of the nodes and their communication range. We consider the data traffic to be CBR, attached to each source vehicle to generate packets of fixed size. We further assume UDP as the transport layer protocol for the simulation studies. A total of 10 source-destination pairs is identified in the simulation, each generating packets of size 512 bytes every 0.25 seconds (we consider the case of variable packet size as well). The total time duration for the simulation is set to 200 seconds. The source vehicle starts generating data packets after the first 10 seconds of the simulation time and stops generating data packets at 150 seconds. For each simulation experiment, the sender/receiver node pairs are randomly selected. We consider the following performance metrics for the evaluation of the protocols.
Packet Delivery Ratio (PDR).This quantity is the average ratio of number of successfully received data packets at the destination vehicle to the number of packets generated by the source.
Average End-to-End (E2E) Delay. This is the time interval between the sending of a packet at the source and its reception at the destination, averaged over all source-destination pairs. Only the data packets that are successfully delivered to their destinations are considered in this calculation.
Average Throughput. This quantity represents the average amount of data bits successfully delivered at the destination vehicle for a given source-destination pair, averaged over all such pairs in the network.
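As a minimal illustration, the three metrics could be extracted from per-packet trace records as follows; the sent/received dictionaries are hypothetical stand-ins for the NS2 trace files.

```python
def compute_metrics(sent, received):
    """Per-flow metrics from simulation traces.
    `sent`     : {packet_id: (send_time_s, size_bits)} recorded at the source
    `received` : {packet_id: recv_time_s} recorded at the destination"""
    delivered = [pid for pid in sent if pid in received]
    pdr = len(delivered) / len(sent) if sent else 0.0
    delays = [received[pid] - sent[pid][0] for pid in delivered]
    avg_delay = sum(delays) / len(delays) if delays else float("nan")
    if delivered:
        span = (max(received[pid] for pid in delivered)
                - min(sent[pid][0] for pid in delivered))
        throughput = (sum(sent[pid][1] for pid in delivered) / span
                      if span > 0 else float("nan"))
    else:
        throughput = 0.0
    return pdr, avg_delay, throughput

# toy usage with made-up trace entries
sent = {1: (0.00, 4096), 2: (0.25, 4096), 3: (0.50, 4096)}
received = {1: 0.12, 3: 0.71}
print(compute_metrics(sent, received))   # -> (0.67, 0.165 s, ~11.5 kbit/s)
```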
We investigate the impact of average velocity on PDR for the proposed reliability based GPSR as well as for the conventional GPSR.In this case, we consider a total of 10 source-destination pairs which generate packets of size 512 bytes every 0.25 seconds.We set the vehicle's communication range to the default value equal to 250 meters.As shown in Figure 7, the average PDR reduces when the average velocity of the vehicles in the network increases.This reduction is due to the fact that the network topology gets changed frequently when the average velocity increases.In GPSR-R protocol, a forwarding vehicle is chosen if and only if the reliability of the communication link with the source vehicle exceeds the minimum required.This reduces the probability of link breakages, resulting in improved packet delivery ratio.Figure 8 shows the impact of average velocity on average end-to-end delay for the GPSR as well as for the proposed GPSR-R protocol.As average velocity increases, the network becomes more dynamic in nature and chances of occurrence of link breakages increase.This increases the end-to-end delay for both protocols.Further, the proposed GPSR-R protocol shows higher average end-to-end delay than the GPSR protocol.The GPSR protocol selects the next hop vehicle by greedy forwarding in which a neighbour vehicle closest to the destination is selected as the next hop.However, in GPSR-R protocol, vehicles with reliability factor greater than the threshold form the set of next hop forwarding vehicles.Accordingly, the next hop forwarding vehicle selected need not be the one-hop vehicle closest to the destination.This results in higher number of hops to reach the destination and hence longer end-to-end delay.As shown in Figure 9, the average throughput of the network gets reduced when average velocity increases.As mentioned before, when the average velocity increases, the network topology gets changed frequently.This decreases the throughput.
We now investigate the impact of packet size on the performance of the two routing algorithms in VANETs. We vary the packet size from 512 bytes to 3072 bytes and keep the mean velocity of vehicles equal to 72 kmph. The PDR is plotted against packet size in Figure 10, while Figure 11 shows the variation of throughput. As the packet size increases, there is a reduction in both the PDR and the throughput when the GPSR protocol is employed. Notice that larger packets may be fragmented. If a fragment of a data packet is lost during a link failure, then the whole data packet is lost. Accordingly, under GPSR, both the PDR and the throughput decrease when large packets are employed. However, the PDR and throughput performance of our proposed reliability based GPSR (GPSR-R) is not significantly affected by varying the packet size. This is because, in GPSR-R, one-hop forwarding vehicles are chosen based on the reliability of the corresponding communication link, so the probability of link breakage is much lower. Further, it can be observed that, in general, the reliability based GPSR algorithm shows an improvement in terms of PDR and throughput over the conventional GPSR. In Figure 12, the average end-to-end delay is plotted against the packet size. As explained earlier, the end-to-end delay for the reliable routing protocol GPSR-R is higher than that of the conventional GPSR since, in GPSR-R, the next hop forwarding vehicle need not be the one closest to the destination. When the packet size exceeds the fragmentation threshold, it gets fragmented into smaller packets. If there is a link failure while a fragment is being transmitted, the delivery of that fragment, and hence of the original packet, is affected. Hence, in conventional GPSR, the end-to-end delay increases as the packet size exceeds the fragmentation threshold. In the case of GPSR-R, since the forwarding nodes are selected based on the link reliability criterion, the link breakage probability is low and there is a high probability that all the fragments of a larger packet will be successfully delivered. Accordingly, the delay performance of GPSR-R is not affected significantly by varying the packet size.
Next, we examine the impact of varying the communication range of vehicles on the performance of the protocols. Figures 13 and 14, respectively, show the effect of the range on PDR and throughput. As shown in Figures 13 and 14, both PDR and throughput increase when the communication range is increased. This happens because, with larger values of the communication range and for a given vehicle density, there is a high probability of finding a larger number of vehicles in the neighbourhood of a tagged vehicle. Further, the PDR of reliability based GPSR is higher than that of the conventional GPSR owing to the fact that, in the former case, we consider link reliability as a metric for the selection of the forwarding node. As shown in Figure 15, the average end-to-end delay decreases as the range is increased. In the case of the conventional GPSR protocol, with larger values of the communication range there is a high probability of more vehicles being available in the neighbourhood of the tagged vehicle. Consequently, greedy forwarding is almost always possible and vehicles rarely have to use perimeter forwarding, which improves the delay performance. This is true in the case of GPSR-R as well. However, the selected forwarding node need not be the one closest to the destination. This may result in an increase in the number of hops and hence a longer end-to-end delay for the GPSR-R protocol.
In Figure 16, we plot the PDR for different values of the beacon interval time (B). Here, we keep the average velocity at 72 kmph and select the packet size as 512 bytes. As we increase B, the accuracy of the neighbour table decreases; that is, the recorded positions of the neighbour nodes become more obsolete, which increases the chances of link failures. Accordingly, the PDR decreases when the beacon interval is increased. However, for the proposed reliability based GPSR-R protocol, the rate of decrease of the PDR is observed to be smaller than that of the conventional GPSR protocol because, in the GPSR-R protocol, only vehicles with a reliability factor greater than the threshold form the set of next hop forwarding vehicles.
In Figures 17-19, we compare the performance of our proposed protocol GPSR-R with that of conventional GPSR [7], GPSR-L [23], AODV-R [24], and MOPR-GPSR [20].Figure 17 shows the comparison results for the packet delivery ratio of the network for all the abovementioned protocols.We select two distinct values for the average vehicle speed: 72 kmph and 90 kmph.The simulation results show that our proposed routing scheme GPSR-R has the highest packet delivery ratio.At the same time, AODV-R gives the lowest packet delivery ratio compared to all other protocols under consideration, since topology based routing protocols such as AODV that require the exchange of several route requests and route reply messages are not suitable for high mobility applications.Figure 18 shows the comparison results for the network throughput when the abovementioned protocols are employed.Compared to all the protocols, the throughput of GPSR-R is higher.In the case of AODV-R the data transmission is jammed by the transmission of RREQ and RREP, which will decrease the average amount of data bits successfully delivered at the destination vehicle.Figure 19 shows the results for the average end-to-end delay experienced in the network.AODV-R protocol suffers the highest delay compared to other protocols under consideration owing to the exchange of RREQ and RREP route request packets.Further, the results show that the end-to-end delay is least for conventional GPSR protocol, since the packet is forwarded in a greedy forwarding manner in which a neighbour vehicle closest to the destination is selected as the next hop.Perimeter based forwarding will be followed if and only if greedy forwarding fails.The average end-to-end delay for GPSR-R, GPSR-L, and MOPR-GPSR will be slightly higher as compared to that of conventional GPSR since these GPSR enhancements do not follow greedy forwarding; instead they rely on stability of the links for the selection of forwarding vehicle.Accordingly, the next hop forwarding vehicle selected need not be the one-hop vehicle closest to the destination.This can result in higher number of hops to reach the destination and hence longer end-to-end delay.Hence, it can be concluded that even though the GPSR-L and MOPR-GPSR protocols show better results than AODV-R, the proposed routing protocol, GPSR-R, achieves the best performance in terms of network packet delivery ratio and throughput.
Figure 20 shows the average PDR against the vehicle density for different values of the reliability threshold. When the density is low, selecting a larger threshold reduces the PDR. This happens because of the nonavailability of potential forwarding nodes that meet the reliability criterion. When the vehicle density is increased, selecting a larger threshold for the link reliability improves the PDR, since links with higher reliability are chosen for forwarding the data. Figure 21 shows that the end-to-end delay decreases as the vehicle density is increased, since more vehicles are available as forwarding nodes and the probability of a packet entering perimeter forwarding is lower. In this case, keeping larger values of the reliability threshold would increase the delay, since the next hop vehicle selected as the forwarding node need not be the one closest to the destination.
Conclusion
Designing reliable routing protocols for VANETs is quite a challenging task owing to the high velocity of vehicles and the mobility constraints on their movement in the network. In this paper, we have described a modification of the well-known GPSR protocol that exploits information about link reliability during the selection of one-hop forwarding vehicles. In the proposed modified routing scheme, the vehicle that is closest to the destination among those satisfying the link reliability criterion is selected as the forwarding vehicle. We have also presented a probabilistic analysis of communication link reliability for one-dimensional VANETs, and this model was used for the evaluation of the modified routing scheme. The proposed routing method ensures that the most reliable nodes are chosen for forwarding and for building a route from source to destination. Through extensive simulation results, we have shown that the proposed protocol achieves a performance improvement over the conventional GPSR protocol in terms of packet delivery ratio. Further, under the proposed scheme, the link failure rate is significantly reduced; however, the delay increases slightly as compared to the conventional GPSR.
Figure 2: Greedy forwarding in GPSR when employed in vehicular networks.
Figure 5: When reliability factor is considered in greedy forwarding.
Figure 7: Average packet delivery ratio versus average velocity of vehicles.
Figure 9: Average throughput versus different values of average velocity of the vehicles.
Figure 13: Average packet delivery ratio versus different communication range .
Figure 15: Average end-to-end delay versus different communication range .
Figure 19: End-to-end delay comparison between various protocols.
Figure 20: Average packet delivery ratio versus density of vehicles for the proposed link reliability based GPSR protocol.
Figure 21: Average end-to-end delay versus density of vehicles for the proposed link reliability based GPSR protocol.
Progress of Nanomaterials in Photodynamic Therapy Against Tumor
Photodynamic therapy (PDT) is an advanced therapeutic strategy that is light-triggered, minimally invasive, highly spatiotemporally selective and of low systemic toxicity, and it has been widely used in the clinical treatment of many solid tumors in recent years. Any strategy that improves the three elements of PDT (light, oxygen, and photosensitizers) can improve its efficacy. However, traditional PDT is confronted with challenges such as the poor solubility of photosensitizers and the suppressive tumor microenvironment. To overcome these obstacles, various strategies have been investigated for improving photosensitizer (PS) delivery, the penetration of excitation light sources, and the hypoxic tumor microenvironment. In addition, compared with a single treatment mode, the synergistic combination of multiple treatment modalities such as photothermal therapy, chemotherapy, and radiation therapy can improve the efficacy of PDT. This review summarizes recent advances in nanomaterials, including metal nanoparticles, liposomes, hydrogels and polymers, used to enhance the efficiency of PDT against malignant tumors.
INTRODUCTION
PDT mainly relies on PSs to generate 1 O 2 from O 2 under the induction of specific wavelengths of light, causing oxidative damage to tumor cells and killing them, and even triggering immunogenic cell death (ICD). However, the insufficient supply of key factors such as PSs, light, and O 2 in tumor tissue greatly reduces the therapeutic effect of PDT (Lee et al., 2022). Nanoparticles (NPs) as drug carriers have received extensive attention in the field of tumor therapy. They can achieve high-efficiency delivery of PSs to tumor tissues through physicochemically optimized passive targeting, ligand-modified active targeting, and stimulus-responsive release . Attempts have been made to overcome the unfavorable tumor microenvironment through photocatalytic oxygen production, Fenton reaction, and combination with other chemical drugs (Wan et al., 2021). In this review, we first describe the composition and tumor-promoting mechanisms of the tumor microenvironment, and then introduce metal NPs, nanoliposomes, mesoporous silica NPs, dendrimers, hydrogels, polymer micelles and these creative methods to solve the problems faced by tumor PDT. In particular, in this review, we focus on recent advances in diverse metal NPs including metal-organic frameworks (MOF), which provide a promising approach for the design of integrative therapeutics in clinical treatments.
TUMOR MICROENVIRONMENT
The TME refers to the non-cancerous cells and components present in the tumor, including blood vessels, the extracellular matrix (ECM), fibroblasts, the surrounding immune cells, and the molecules produced and released by them (Figure 1). The hypoxic and acidic microenvironment, caused by the disordered distribution and structural abnormality of the tumor vasculature, is the most important supporting component of the TME and of the immunosuppressive microenvironment, and it offers a favorable niche for tumor growth, proliferation and invasion (Yuan X. et al., 2022). The immune cells, including granulocytes, lymphocytes, and macrophages, are involved in various immune responses and activities orchestrated by the tumor to promote its survival. Among them, the macrophages abundantly infiltrating the TME are called tumor-associated macrophages (TAMs) and are the most prominent immune cell type in the TME. According to differences in phenotype and function, activated macrophages can be divided into M1 and M2, and their polarization direction is regulated by the microenvironment. For example, the tissue microenvironment, external factors and inflammatory response factors can activate macrophages in different forms (Cenowicz et al., 2021). The macrophages that promote tumor growth are of the M2 phenotype and, in normal tissues, have the function of repairing injury and inhibiting the inflammatory response. The macrophages that inhibit tumor growth are of the M1 phenotype, which can induce an inflammatory response and activate the immune response to kill tumor cells. In the process of tumorigenesis, TAMs often stimulate angiogenesis, promote tumor cell migration and invasion, and mediate tumor immune escape (De Lerma et al., 2021). At the site of tumor metastasis, TAMs promote tumor cell extravasation, survival and subsequent activity. At the same time, TAMs affect the clinical therapeutic outcome by enhancing the genetic instability of tumor cells, nourishing tumor stem cells, and promoting infiltration and metastasis, and they are a key driving force for tumor growth. The degree of macrophage infiltration in tumor tissues is related to poor patient prognosis, and the number of TAMs is negatively correlated with patient survival time (Munir et al., 2021). In addition, an increase in neutrophils in the blood is also a sign of poor prognosis in cancer. By analogy with the naming used for macrophages (M1-like and M2-like/TAM), tumor-associated neutrophils (TANs) can adopt at least two different phenotypes: N1 neutrophils, endowed with anti-tumor activities, and N2 neutrophils, endowed with immunosuppressive and pro-angiogenic properties that support tumor progression. The tumor-promoting activity of TANs covers a variety of mechanisms (Taucher et al., 2021). TANs can not only promote tumor angiogenesis by secreting matrix metalloproteinase-9 (MMP-9) and releasing vascular endothelial growth factor (VEGF) from the extracellular matrix (ECM), but also inhibit CD8+ T cells and produce an immunosuppressive environment by secreting arginase 1.
Cancer cells harbor a different metabolic profile from healthy cells (Chou and Yang, 2021). Cancer cells can maintain a high rate of glycolysis even in the presence of O 2 and consume large amounts of glucose, and significant areas in tumors exhibit lactic acidosis. This phenomenon is known as "aerobic glycolysis" or the "Warburg effect" (Yuan et al., 2021). The tumor-promoting mechanisms of the TME include the following. 1) HIF-1α nuclear transport pathway: hypoxia-inducible factor-1 (HIF-1) is a key transcriptional activator responsible for regulating target genes that contribute to the survival and growth of cells under hypoxic conditions (You et al., 2021). It consists of two subunits (HIF-1β and HIF-1α), of which HIF-1α is sensitive to hypoxia. Under hypoxic conditions, HIF-1 becomes a stable and transcriptionally active dimer and then induces transcriptional and post-transcriptional regulation of target genes. In addition, overexpression of HIF-1α promotes the transcription and activation of angiogenic factors such as angiogenin, platelet-derived growth factors, plasminogen activator inhibitors and VEGF (Sun et al., 2022). 2) STAT3 phosphorylation pathway: the acidic TME inhibits T cell activation and cytotoxicity by inducing STAT3 phosphorylation. 3) mTOR signaling pathway: the persistent stress of the hypoxic TME leads to excessive activation of mTOR signaling in NK cells, mitochondrial fission, and impaired metabolism, ultimately leading to NK cell exhaustion and reduced anti-tumor ability. 4) VEGF and nuclear transcription factor (NF-κB) pathways: cytokines such as VEGF and the eosinophil chemokine eotaxin are enriched in tumor tissue and induce macrophage polarization to the M2 type; M2 TAMs further produce factors such as CC motif chemokine 2 (CCL2), CC motif chemokine 5 (CCL5) or macrophage colony-stimulating factor 1 (CSF-1) to participate in immunosuppression. 5) The TME promotes the production of factors such as hyaluronic acid (HA) and VEGF, thereby maintaining tumor growth and migration (Niu et al., 2021).
PHOTODYNAMIC THERAPY
PSs do not generate ROS in the dark and are not toxic to cells and tissues in the absence of light. By finely controlling the illumination area, the PDT process can therefore be confined within the tumor tissue, achieving highly selective killing of tumor cells and reducing the side effects caused by normal cell death (Lee et al., 2020). According to the types of ROS produced and the way they are generated, PDT can be divided into two mechanisms, type I and type II (Algorri et al., 2021). In type I reactions, the excited PS reacts directly with biomolecules to produce radicals by transferring protons or electrons. In type II reactions, the excited PS transfers energy to oxygen molecules to produce singlet oxygen ( 1 O 2 ). Both oxygen-containing free radicals and 1 O 2 have extremely high reactivity and can damage a variety of biomolecules and kill tumor cells. Type II PDT requires light-excited PSs to generate ROS; however, PSs are often excited by visible light, which limits the efficacy of PDT for deep tumors. Furthermore, owing to the short intracellular half-life of ROS (≈10-320 ns), the light penetration depth and the distribution of PSs in tumor tissue limit the efficacy of PDT (Lee et al., 2022). Nanomaterials provide a powerful tool to overcome many drawbacks of PSs in cancer PDT, such as hydrophobicity and short blood circulation time after intravenous injection, which lead to insufficient accumulation, retention and internalization in tumor tissues (Silva et al., 2021). Furthermore, some multifunctional nanomaterials can increase the levels of O 2 and ROS in tissues by mediating photocatalytic oxygen production and the Fenton reaction. In addition, upconverting NPs can enhance light delivery in tumor tissues by converting the more penetrating NIR light to visible light, or they can be prepared as persistent luminescent NPs. In addition to enhancing PS accumulation in tumor tissues through physicochemically optimized passive targeting, ligand-modified active targeting, and stimulus-responsive release, nanomaterials can also be combined with chemotherapy, gene therapy, immunotherapy, photothermal therapy, hyperthermia/magnetothermal therapy, radiotherapy, and sonodynamic therapy to overcome the limitations of PDT (Yang G. et al., 2021; Zhang P. et al., 2022). This review focuses on commonly used PS nanomodification techniques and the application of different types of novel nanomaterials, such as nanoparticles, liposomes, hydrogels and polymers, for cancer treatment and diagnosis.
STRATEGIES FOR NANOPARTICLE-BASED PHOTOCONTROLLED DELIVERY

4.1 Metal Nanoparticles
Among the NPs, metal NPs have the advantages of high biocompatibility and stability, adjustable size, good optical properties, easy surface functionalization, and a long activity period (Desai et al., 2021). They can be used as PSs, delivery carriers and up-conversion tools to improve the delivery of chemotherapy, radionuclides and antibody drugs to tumor cells. Au NPs have surface plasmon resonance (SPR), chemical inertness and excellent biocompatibility, which are mainly used as passivation agents, drug delivery agents, imaging agents, and photothermal agents, which have different characteristic shapes, such as particles, rod-shaped, clustershaped, shell-shaped, spike-shaped and star, etc., (Younis et al., 2021). In PDT, gold nanoparticles can be used alone or as part of a multifunctional nanomaterial hybrid system for PSs delivery.
DNA is considered to be one of the best components for building nanomaterials due to its excellent sequence specificity and programmable supramolecular self-assembly. Among them, functional modification of nanomaterials can significantly improve the cellular stability of DNA nanomaterials. Based on this, Yu et al. utilized doxorubicin (Dox), antisense DNA (target Survivin mRNA) that could inhibit Survivin expression, photosensitizer (ZnPc), and Au NPs with excellent plasmonic properties, to develop a multifunctional nanotherapeutic platform with both diagnostic and therapeutic functions, termed Apt-DNA-Au nanomachines, for in situ imaging and targeted PDT/PTT/CDT synergistic therapy of breast cancer ( Figure 2) Yu et al. (2021). Moreover, the tightly packed DNA sequences on the surface of nanomaterials were used as highly specific aptamers, which not only resisted the enzymatic hydrolysis of DNA sequences, but also improved the tumor targeting of PDT (Lv et al., 2021). Su et al. took persistent luminescent nanoparticles (PLNPs) as the core, and formed a novel nanoprobe TCPP-gDNA-Au/PLNP for persistent luminescence imaging-guided photodynamic therapy by coupling DNA sequences containing AS1411 aptamers through AuNPs Su et al. (2021). PLNPs could emit long-term fluorescence under near-infrared light irradiation. The AS1411 aptamer could specifically recognize the overexpressed nucleolin in cancer cells and improve the tumor targeting of the photosensitizer TCPP. Furthermore, increasing the types of DNA modified on the surface of NPs can improve the accuracy of PDT tumor identification. Cai et al. designed and synthesized a nano-therapeutic platform for fully automatic diagnosis and treatment of Au/Pd nanomachines with the main marker miRNA-21 and two auxiliary markers miRNA-224 and TK-1 mRNA as the targeted detection unit (Au/Pd ONP-DNA nanomachine) Cai X. et al. (2021). Using ONPs as a carrier, when all specified targets were detected [logic system input was (1, 1, 1), output was (1, 1)], the 808 nm laser could be programmed to automatically irradiate the tumor and perform PDT and PTT. However, the stability of the Au-S bond is poor, and the DNA is non-specifically detached from the surface of AuNPs, resulting in false positive signals and severe side effects. Zhang et al. used a simple and highly stable amide bond (-CO-NH-) instead of the Au-S bond and combined the DNA probe on Au@graphene (AuG) to prepare Label-rcDNA-AuG Zhang J. et al. (2021). Label-rcDNA-AuG could improve the antiinterference ability of nanoprobes against nucleases, GSH and other biological agents. By accurately monitoring the level of intracellular miR-21, Label-rcDNA-AuG could identify positive cancer cells even in a mixture of cancer cells and normal cells, improving the precision of PDT treatment and reducing damage to normal cells.
Besides DNA modification, surface modification of biotin (Bt) is also an effective way to improve the tumor targeting of NPs . The targeting of Bt-modified Au-NPs (BT@Au-NPs) to C6 glioma cells was more than 2 times that of Au-NPs (He F. et al., 2021). Modification of arginine (R)-glycine (G)aspartic acid (D) (RGD) on the surface of NPs, which could bind to integrin α v β 3 integrin with high affinity, could not only improve the anticancer ability of NPs, but also reduced tumor migration rate. The HB-AuNRs@cRGD prepared by Liu et al. had a tumor inhibition rate of up to 77.04% in the ECA109 esophageal cancer model, and significantly reduced the migration and invasion of cancer cells in ECA109 cells .
Besides improving the PDT efficiency of Au-NPs in combination with other nanomaterials, oxygen production can also be optimized by changing the physical structure of Au-NPs.
By adjusting the ratio of silicon core radius and gold shell thickness, Sajid et al. enhanced the field strength of Au nanoshells with 40/20 core radius/shell thickness by 35 times and increased 1 O 2 yield by 320% compared to before optimization (Farooq and de Araujo, 2021). Furthermore, the introduction of ionic complexes assembled by heterometallic colloids (Mo 6 -Au 2 ) can affect the cytotoxicity, cellular internalization and PDT activity of NPs by regulating the order of their supramolecular stacking (Kirakci et al., 2021). For example, through the 1:1 binding of [Mo 6 I 8 (L′) 6 ] 2− (L′ = I-, CH 3 COO-) with [Au 2 L 2 ] 2+ (L was the ligand of cyclic double amines and phosphine), Faizullin et al. synthesized a heterometallic colloids composed of positively and negatively charged metal-organic complexes Faizullin et al. (2021). Among them, the cellular internalization of Mo 6 -Au 2 (L′ = CH 3 COO-) assembled with poly-DL-lysine (PL) exhibited a three-fold enhancement when L′ = CH 3 COO-and could accumulates in the cytoplasm by fast endo-lysosomal escape. In addition, the photodynamic effect of [Mo 6 I 8 (L′) 6 ] 2− clusters was much higher at L′ = CH 3 COO-than at L′ = I-. In recent years, bimetallic NPs are also improving the therapeutic efficiency of PSs due to the inherent properties of the introduced metal elements and the interaction between two metal atoms (Park et al., 2021). He et al. synthesized Au 1 Bi 1 -SR NPs by introducing Bi into captoprilcoated Au NPs (Au-SR NPs) by utilizing the X-ray CT signal enhancement effect of Bi He G. et al. (2021). Au 1 Bi 1 -SR NPs not only exhibited higher ROS yield than Au-SR NPs, but also enabled CT imaging-guided and light-mediated PDT for synergistic tumor therapy. In addition, Jia et al. further modified Au-Bi bimetallic nanoparticles with IR808 fuel and prepared Au-BiGSH@IR808 to obtain higher NIR photon capture ability, which effectively solved the problem of low absorption rate of Au-Bi nanoparticles in the near-infrared region Jia et al. (2021). Noble metal Pt nanozymes have catalase-like activity . Truncated octahedral Au (ToHAu) can enhance LSPR by increasing the spatial separation and realizing the simultaneous participation of holes and electrons in the reaction due to the special structure of twin planes and stacking faults (Yoon et al., 2017). Accordingly, Bu and his team designed and synthesized a comprehensive phototherapy nanomodulator (ToHAu@Pt-PEG-Ce6/HA) based on the enhanced LSPR effect Bu et al. (2021). Pt was deposited on ToHAu to form a spatially separated structure, which enabled the reactive molecules to freely enter the hot holes and electron fluxes, and had strong photothermal and photodynamic properties. The antitumor effect of NPs is also closely related to their size. For example, large NPs (>100 nm), despite their enhanced permeability and penetration (EPR) effect, cannot fully infiltrate into tumor tissue due to dense extracellular matrix and elevated interstitial fluid pressure (Zmerli et al., 2021). In contrast, small NPs (<20 nm) exhibited better tumor penetration but were easily cleared by the blood circulation. Therefore, it is particularly important to design a TME-responsive size-tunable NPs. To this end, Liu et al. designed and prepared Au-MB-PEG NPs by exploiting the efficient polymerization ability of AuNPs and the excellent performance of TME ROS-triggered HOCl-responsive platform . 
Small-sized Au-MB-PEG NPs responded to highly expressed HOCl in the tumor region through a HOCl-sensitive molecule (FDOCl-24). After reaching the tumor tissue, Au-MB-PEG NPs were cleaved by HOCI to release Au NPs that rapidly aggregated into larger aggregates in the tumor through electrostatic interactions, and simultaneously released methylene blue as a photosensitizer for photodynamic therapy (PDT). The aggregated AuNPs redshifted the light absorption to the NIR region, resulting in enhanced photoacoustic imaging (PAI) and PTT under laser irradiation.
Ag Based Nanoparticles
Ag NPs have a higher 1 O 2 yield than Au NPs and Pt NPs due to their stronger SPR (Younis et al., 2021). Inspired by the ability of ICD to transform tumors from cold to hot by inducing tumor cells to release tumor antigens and damage-associated molecular patterns (DAMPs), thereby enhancing T cell proliferation and infiltration, Jin et al. developed a corn-like Au/Ag nanorod (Au/Ag NR) that could induce ICD in tumor cells under NIR-II (1064 nm) light irradiation (Figure 3) Jin F. et al. (2021). They coated an Ag shell onto Au NRs, and by changing the amount of Ag + , the SPR band in the vertical region, which has a higher 1 O 2 yield, was red-shifted to the NIR-II window. In addition, Au/Ag NRs could maintain the immune memory effect for up to 40 days in animal experiments by enhancing the therapeutic effect of immune checkpoint blockade (ICB). Wang and his team further used DNA probe technology to prepare the DNA-functionalized nanoprobe Au-AgNP-Ag-HM loaded with the photosensitizer hematoporphyrin monomethyl ether (HMME) Wang et al. (2021b). Au-AgNP-Ag-HM not only produced ROS mediated by the LSPR signal change of Au-Ag-HM and the photosensitizer HMME, but also incorporated a fluorescent probe based on the caspase-3-specific recognition sequence (DEVD), which effectively integrated the functions of promoting apoptosis and detection and can be used for the treatment and efficacy evaluation of tumor cells.
Cu Based Nanoparticles
Cu-based Fenton reagents have superior ROS yields to Fe-based systems (Zhu F. et al., 2021). In addition, Cu-doped layered double hydroxide (Cu-LDH) nanosheets can not only further enhance the yield of ROS, but also take advantage of the small size and positive charge to actively infiltrate cancer cells for deep tumor therapy . But positively charged NPs have a shorter residence time in the blood circulation than negatively charged NPs (Smith et al., 2019). To this end, Wu and his team used negatively charged liposomes to encapsulate Cu-LDH (Cu-LDH@Lips) and embedded HMME into the bilayer of CuLDH@ Lips, forming a dual size/charge switchable reactive oxygen species generator (Cu-LDH/HMME@Lips) ( Figure 4) Wu et al. (2021). Liposomes could prolong the residence time in the circulatory system by reducing the clearance of Cu-LDH/ HMME@Lips by immune cells, and HMME could disintegrate Cu-LDH/HMME@Lips in response to ultrasound to release positively charged Cu-LDH, which could penetrate deep into tumor cells and then caused oxidative stress damage to tumor cells through Fenton-like reaction. Similarly, Wang et al. prepared ICG/CAC-LDH nanosheets by intercalating indocyanine green (ICG) into a hydrophobic bilayer of Cedoped Cu-Al layered double hydroxide (CAC-LDH) . ICG/CAC-LDH could not only induce the depletion of intracellular GSH, but also decompose to generate Cu + and Ce 3+ to stimulate the Fenton-like reaction to generate OH.
Ruthenium Based Nanoparticles
Ru complexes are commonly used as PDT PSs due to their high water solubility, photostability, and high ROS yield (Smithen et al., 2020). However, they have the disadvantages of dark toxicity, DNA mutation and excitation only by short-wavelength visible light, which limit their clinical application. To this end, He et al. synthesized a new red-light-responsive Ru complex PS (Ru-I) without two-photon activation, by using large conjugated indolepyridine benzopyrans as ligands He Y. et al. (2021). Positively charged Ru-I could effectively target cancer cell lysosomes and be activated under 660 nm red light to induce apoptosis. In addition, Karges et al. proposed to wrap Ru (II) polypyridine complexes with the amphiphilic polymer DSPE-PEG 2000 -folate to prepare nanoparticles without dark toxicity, which could target cancer cells overexpressing the folate receptor Karges et al. (2021). The nanoparticles exhibited significant phototoxicity under irradiation at 480 or 595 nm and induced tumor cell apoptosis through the caspase-3/7 pathway. Furthermore, this complex not only showed the highest 1- and 2-photon absorption reported to date, but also exhibited the ability to inhibit multidrug-resistant tumors in a rat model. In addition to the introduction of folate groups, bio-orthogonal labels such as copper-catalyzed azide-alkyne cycloaddition (CuAAC) are also effective methods to improve tumor cell recognition (Kappenberg et al., 2022). Lin et al. prepared the first bio-orthogonal two-photon PSs based on Ru (II) complexes by bio-orthogonally labeling Ru-alkynyl-2 Lin C. -C. et al. (2021). In addition to generating ROS to exert cytotoxic effects, it could also exert anti-tumor effects by specifically binding to cancer cell membranes and inducing membrane damage.

FIGURE 3 | Illustration of corn-like Au/Ag NR-mediated antitumor immune responses. Corn-like Au/Ag NR-mediated NIR-II PTT/PDT significantly increased the expression of calreticulin, high-mobility group box 1, and adenosine triphosphate in tumor cells, reprogramming the immunosuppressive cold tumor microenvironment to an immunogenic hot tumor, which achieves combined anti-cancer activity with the ICB antibody and effectively inhibits the growth of distant tumors and prevents tumor recurrence. Reproduced with permission from (Jin L. et al., 2021). (B) Schematic diagram of the working principle of Cu-LDH/HMME@Lips. Dual-size/charge-switchable Cu-LDH/HMME@Lips utilizes negatively charged liposomes to prolong the circulation residence time in low-permeability solid tumor models; HMME then decomposes Cu-LDH/HMME@Lips in response to ultrasound to release positively charged Cu-LDH, which can penetrate deep into tumor cells. HMME generates 1 O 2 under ultrasound irradiation, while the Cu-LDH that infiltrates deep into the tumor generates ROS through a Fenton-like reaction. Reproduced with permission.
Iridium Based Nanoparticles
In recent years, new photoredox catalyst systems based on Ir compounds have been widely used in catalysis and PDT, and can also be used to prepare new PSs (Raza et al., 2021). Liu and his team prepared two Ir (III) complex dimers that could selfassemble into NPs in aqueous media, named Ir1 and Ir2 . Ir1 and Ir2 not only enhance cancer cell uptake through positive charges on the surface, but also exhibit type I and type II PDT activity in the 350-500 nm (UV-Vis spectrum) range, even in hypoxic microenvironments. Ir NPs might be promising alternatives to traditional organic PSs. Ir (III) complexes show great potential in the construction of oxygen-sensitive sensing probes due to their unique oxygen quenching pathway (Yasukagawa et al., 2021). Xiao et al. designed and synthesized a red light-excited Ir (III) complex encapsulated in the hydrophobic pocket of Cyanine7-modified β-cyclodextrin (β-CD) Xiao and Yu (2021). Ir (III) complexes achieved different degrees of oxygen quenching according to the change of oxygen concentration in the environment. β-CD could not only be used to improve the water solubility of Ir (III) complexes, but also be used to carry Cyanine7 to establish a proportional oxygen fluorescent probe. The results of in vitro and in vivo experiments shown that the prepared probe had remarkable oxygen sensitivity and could be used for quantitative determination of the oxygen level in the hypoxic microenvironment of solid tumors. Mitochondria are not only the decisive regulators of cellular metabolic function and apoptosis, but also the organelles with the highest intracellular oxygen concentration (Tabish and Narayan, 2021). Therefore, the preparation of PSs that target the mitochondria of cancer cells will be the key to maximize the therapeutic potential of PDT. Recently, Redrado et al. prepared four novel mitochondria-selective trackable PSs, which are called bifunctional Ir (III) complexes of the type [Ir (C^N) 2 (N^N-R)]+, where N^C is either phenylpyridine (PPY) or benzoquinoline (BZQ), N^N is 2,20-dipyridylamine (Dpa), and R either anthracene (1 and 3) or acridine (2 and 4) (Redrado et al., 2021). Only complex 4 ([Ir (Bzq) 2 (dpaacr)]+) clearly shown a dual emission mode. The organic luminescent chromophore (acridine) could display the cellular localization of the complexes under irradiation at 407-450 nm. Ir (III) had more than 110-fold higher photosensitivity values under 521-547 nm irradiation than under dark conditions, and promoted apoptotic cell death and a possible apoptotic pathway by generating ROS. Although two-photon near-infrared photoactivation can partially overcome the lack of absorption or weak absorption of Ir (III) complexes in the red to near-infrared region, the requirements of ultrafast femtosecond laser source and small irradiation area limits its application in solid tumor therapy. To this end, Liu and his team designed and synthesized bifunctional micelles (Micelle-Ir) for synergistic PDT and PTT therapy in vivo ( Figure 5) Liu N. et al. (2021). They were prepared by micellization of a neutral Ir (III) complex (BODIPY-Ir) containing a distyryl boron-dipyrrole methylene group (BODIPY-Ir). BODIPY enabled BODIPY-Ir to acquire the ability to absorb in the far-red/near-infrared region, and micellization enabled BODIPY-Ir to acquire the ability to synergize PDT and PTT therapy, not only destroying primary 4T1-Luc tumors, but also preventing lung metastases.
Metal Oxide-Based Nanoparticles
Metal oxide nanomaterials have found promising biomedical applications for fluorescent labeling due to the advantages of high photostability, large extinction coefficient, high emission quantum yield and easy surface modification (Younis et al., 2021).
In view of the fact that nano-heterostructures can promote photoinduced electron-hole separation and the generation of ROS, 2D nano-heterostructure-based PSs can provide a major advance in PDT. Qiu et al. designed a bismuthene/bismuth oxide (Bi/BiO x )-based lateral nano-heterostructure synthesized by a regioselective oxidation process Qiu et al. (2021). Upon irradiation at 660 nm, the heterostructure could effectively generate 1 O 2 under normoxic conditions but produced cytotoxic ·OH and H 2 under hypoxic conditions, which synergistically improved the intensity of PDT. In addition, this Bi/BiO x nano-heterostructure was biocompatible and biodegradable; with the surface molecular engineering used here, it improved penetration of tumor tissue and increased cellular uptake, and thus produced an excellent oxygen-independent tumor ablation effect. Iridium dioxide (IrO 2 ), which shows semiconductor behavior, has high catalytic activity for the oxygen evolution reaction (OER) over a wide pH range and also has excellent photocatalytic efficiency (Arias-Egido et al., 2021). On this basis, Yuan et al. synthesized IrO 2 -GOx@HA NPs that could target the TME by combining glucose oxidase (GOx) and IrO 2 NPs on hyaluronic acid (HA) Yuan Y. et al. (2022). First, GOx converted the high levels of glucose in tumors to H 2 O 2 , and then the IrO 2 NPs converted H 2 O 2 to O 2 , thereby enhancing type II PDT and effectively alleviating hypoxia in tumor tissues. The relevant literature points out that the larger the specific surface area of a sheet-like structure, the higher the zeta potential and the greater the drug-carrying capacity of the nanomaterial (Duo et al., 2021). Dai et al. synthesized a radiosensitizer based on 4-layer O-Ti 7 O 13 nanosheets by using two-dimensional titanium peroxide nanomaterials with a mesoporous structure to support DOX Dai Y. et al. (2021). O-Ti 7 O 13 could load the drug within 5 min, while other nanomaterials need 24 h, and release the drug continuously in an acidic microenvironment. In addition, under the action of high-energy X-rays, titanium dioxide could absorb radiation energy to generate ROS and kill tumor cells. In another study, Zeng et al. (2021) developed ICG@PEI−PBA−HA/CeO 2 . After ICG@PEI−PBA−HA/CeO 2 targeted cancer cells via HA, the CeO 2 released by the pH-triggered cleavage of the phenylboronic acid linkage catalyzed H 2 O 2 to generate O 2 through the Ce 3+ /Ce 4+ valence cycle. The regenerable CAT-like nanozyme activity of CeO 2 increased the bioavailability of ICG and promoted tumor cell apoptosis by improving the hypoxic tumor microenvironment.
The efficacy of PDT/PTT combination therapy was better than that of PDT or PTT monotherapy. Traditional PDT/PTT synergistic therapy requires two light sources to excite the PSs of PDT and the thermosensitive agent of PTT respectively, which increases the difficulty of nanoparticle preparation. Guo and his team used B-TiO 2 with oxygen vacancies and narrow band gaps to prepare a nanothermosensitive system (B-TiO 2 @SiO 2 -HA) with full-spectrum response to light stimulation Guo et al. (2021). Under NIR-II laser irradiation, B-TiO 2 @SiO 2 -HA could not only provide PDT/PTT synergistic therapy for tumors, but also perform high-resolution photoacoustic imaging (PAI) to achieve precise nanothermothermal effects. Similarly, Gao and his team used SnO 2-x with oxygen vacancies to prepare a multifunctional nano-thermosensitive material (SnO 2-x @SiO 2 -HA) with a target-specific synergistic PDT/PTT with full spectrum response Gao et al. (2021a). In addition to exerting precise PDT/PTT synergistic antitumor therapeutic effect through PAI, SnO 2-x @SiO 2 -HA also had antibacterial effect, which effectively promotes the healing of skin wounds. But most metal or carbon NPs are toxic to normal cells or tissues. Sengupta et al. used the biosafety and magnetic properties of magnetite (Fe 3 O 4 ) nanoparticles to synthesize a new PS E-NP with anti-inflammatory and immunoprotective effects Sengupta et al. (2022). E-NP could not only upregulate the expression of cyclin kinase inhibitory protein p21, but also inhibit cancer cell cycle arrest in sub-G0G1 phase. It also acted as an antiinflammatory by reducing macrophage myeloperoxidase (MPO) and nitric oxide (NO) release, thereby minimizing collateral damage to healthy cells.
Upconversion Nanoparticles
UCNPs have nonlinear anti-Stokes properties and can emit highenergy photons under low-energy NIR light excitation through lanthanide ion doping. UCNPs also have the advantages of low toxicity, narrow emission bandwidth, large decay time, resistance to photobleaching, and no autofluorescence background (Liu et al., 2022). The emission wavelength of UCNPs can be controllably adjusted from ultraviolet light to near-infrared light to match PSs with different absorption wavelengths, which provides a new method to solve the problem of PDT light penetration depth . The photosystem-I/photosystem-II (PS-I/PS-II) PDT system can only be excited by red light to generate O 2 , so that it can quickly supply its own O 2 consumption in 1 O 2 production, showing a spatiotemporal synchronous system of O 2 self-supply and ROS production. However, the tissue penetration ability of red light is unsatisfactory, so it is unsuitable for the removal of deep tissue tumors (Fakurnejad et al., 2019). Recently, Cheng et al. used the ability of UCNPs can emit red light to activate PS-I and PS-II under NIR light, decorated thylakoid membrane of chloroplasts on UCNPs to form UCTM NPs, and developed a new photosynthesis-based PDT strategy for realizing spatiotemporally synchronous O 2 self-supply and ROS production ( Figure 6) Cheng X. et al. (2021). Both in vitro and in vivo assessments prove that UCTM NPs can effectively relieve hypoxia, induce cell apoptosis, and eliminate tumors with NIR light irradiation. A large amount of APT produced during the ICD process can be hydrolyzed by the extracellular enzyme CD73 into ADO (immunosuppressant), which prevents the cytotoxic T-cell immune response (Workenhe et al., 2021). To this end, Jin and his team prepared cancer-cell-biomimetic UCNPs (CM@UCNP-Rb/PTD) by exploiting the properties of anti-CD73 antibody to block the adenosine pathway Jin F. et al. (2021). CM@UCNP-Rb/PTD utilized the cancer cell membrane (CM) to target cancer cells and avoid macrophage uptake. After reaching the tumor tissue, UCNPs converted NIR into visible light to generate ROS, and released DOX to achieve a Chemo-PDT synergistic combination therapy. CD73-blocked CM@ UCNP-Rb/PTD enhanced spontaneous antitumor immunity through the combined effect of chemotherapeutic drugs, PDTtriggered ICD and CD73 blockade. In immunotherapy, ligands that block PD-L1 can also directly prevent PD-1/PD-L1 immune blockade (Lee et al., 2021). Liu and his team modified UCNPs with MIPs formed by Pd-L1 peptide phase transfer imprinting, and prepared MC540/MNPs@MIPs/UCNP composite imprinted particles . MC540/MNPs@MIPs/UCNP utilized MIPs to target tumor cells, which improved the binding rate to tumor cells and achieved targeted PDT. The nucleus is the control center of cellular biological activities, and targeting PDT to the nucleus can lead to severe DNA damage and inactivation of nuclear enzymes (Teng et al., 2021). Base on this, Chen and his team proposed a PDT strategy of "one treatment, multiple irradiation" . They modified hollow mesoporous silica nanoparticles with amine group with acidification effect and loaded RB and UCNPs to synthesize UCNP/RB@mSiO 2 -NH based on the lysosome-nucleus pathway. After UCNP/RB@mSiO 2 -NH entered the lysosome, it entered the nucleus through the nuclear pore by generating ROS and destroying the lysosome under the first 980 nm (3 min) NIR irradiation. Subsequently, efficient nuclear-targeted PDT was achieved under a second 980 nm NIR irradiation. 
Dual PSs PDT is also one of the effective strategies to improve ROS yield. Pham and his team utilized SiO 2 -coated core-shell UCNPs to support two PSs (Rb and Ce6) to form UCNP/RB, Ce6 Pham et al. (2021). UCNP/RB, Ce6 produced a large amount of 1 O 2 under 1550 nm NIR-IIb irradiation, which had higher PDT efficiency than single PS. However, the application of UCNPs in the biomedical field still faces challenges due to the low quantum yield and superheating effect of the 980 nm light source, and the low drug loading capacity (Lee et al., 2020).
Carbon-Based Nanoparticles
Graphitic carbon nitride nanosheets (g-C3N4 NSs) are a recently reported, promising carbon-based nanomaterial with the advantages of high biocompatibility, high photoluminescence quantum yield and surface modifiability. However, g-C3N4 NSs have poor PDT efficacy due to their wide band gap and low utilization of visible light (He F. et al., 2021; Younis et al., 2021). It has been proposed that the PDT efficiency of g-C3N4 NSs can be enhanced by doping with metal ions or UCNPs. Although Ru(II) polypyridine complexes have high 1O2 generation and stability, the hypoxic microenvironment limits their PDT efficacy (Karges et al., 2020). Therefore, Wei's group synthesized a novel oxygen self-sufficient PS (Ru-g-C3N4) by grafting [Ru(Bpy)2]2+ onto g-C3N4 nanosheets through Ru-N bonds Wei et al. (2021). The incorporation of [Ru(Bpy)2]2+ enabled Ru-g-C3N4 to obtain a high loading capacity, narrow band gap and high stability, thereby greatly improving the PDT efficiency. In addition, Ru-g-C3N4 not only had catalase-like activity in a hypoxic environment, but could also react effectively with H2O2 to generate free radicals, causing oxidative stress damage to tumor cells. Molybdenum carbide (MoxC) has an electronic structure similar to that of noble metals; among these materials, Mo2C nanospheres can simultaneously induce PTT and PDT under illumination across the entire near-infrared region owing to their metallic properties and interband/intraband transitions, exhibiting highly efficient redox capacity (Zhang et al., 2019). Based on this, Hou and his team prepared a novel nanocomposite, Mo2C@N-Carbon-3@PEG, by combining Mo2C nanospheres with N-carbon. The N-carbon improved the photogenerated charge separation of Mo2C@N-Carbon-3@PEG, roughly doubling its photocatalytic performance Hou et al. (2022). However, the doping of metal ions and integration with other nanomaterials increase the risk of biological toxicity and side effects on the one hand, and increase the complexity of the multistep synthetic protocols of the PSs on the other. Based on this, Liu's group designed and synthesized a nitrogen-rich graphitic carbon nitride nanomaterial, 3-amino-1,2,4-triazole (3-AT)-derived g-C3N5 NSs. The photocatalytic activity of g-C3N5 NSs was 9.5 times higher than that of g-C3N4 NSs. Compared with g-C3N4, the nitrogen-rich triazole group narrows the band gap of the conjugated g-C3N5 NSs and enables lower-energy transitions, thereby improving the utilization of visible light.
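The link between band gap and visible-light utilization noted above can be made concrete with the standard absorption-edge estimate (the 2.7 eV and 2.0 eV values below are illustrative; reported band gaps of g-C3N4 and g-C3N5 vary with the preparation method):
\[
\lambda_{\mathrm{edge}} \approx \frac{hc}{E_g} \approx \frac{1240\ \mathrm{nm\,eV}}{E_g}.
\]
A material with E_g of about 2.7 eV therefore absorbs only below roughly 460 nm, whereas narrowing the gap to about 2.0 eV extends the absorption edge to roughly 620 nm, i.e. over much more of the visible spectrum, which is the effect the triazole-rich g-C3N5 framework exploits.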
Sulfur-Based Nanoparticles
Metal sulfides are often used as photothermal agents for photothermal therapy, among which Ni 3 S 2 , CuS and Co 3 S 4 nanoparticles can be used to synthesize semiconductor PSs, which is an effective way to combine PTT/CDT/PDT (Shukla et al., 2021).
Co3S4 NPs are degraded in the acidic microenvironment and trigger a Fenton-like reaction to generate hydroxyl radicals (·OH), which underlies the efficacy of CDT. Based on this, Jiang and his group prepared a nanocomposite, Co3S4-ICG, which could responsively release PSs in an acidic microenvironment and realize the synergistic antitumor effect of CDT/PTT/PDT under NIR light by loading indocyanine green (ICG) into hollow Co3S4. The combination of the Fenton-like reaction and PDT enhanced ROS production and the antitumor effect. Similarly, Feng and his team designed and prepared a nanocomposite, FeS2@SRF@BSA, with bovine serum albumin (BSA) as a carrier to encapsulate FeS2 NPs and SRF, combining CDT, PTT and PDT to achieve trimodal synergistic tumor therapy Feng et al. (2021). In addition, the Z-scheme heterostructure possesses both a narrow band gap and highly oxidative holes, which are helpful for inducing near-infrared photocatalytic oxygen production (Mafa et al., 2021). Based on this, Sang's group designed and synthesized Z-scheme nanoheterostructures, Ni3S2/Cu1.8S@HA Sang et al. (2021). The doping of Cu improves the near-infrared absorption and increases the photothermal conversion efficiency by 7.7% compared with Ni3S2. Ni3S2/Cu1.8S@HA not only had catalase/peroxidase-like activity to generate endogenous O2 and relieve the hypoxic internal environment, but also had a targeted recognition effect on cancer cells with high CD44 receptor expression, showing a good anticancer effect.
Phosphorus-Based Nanoparticles
Black phosphorus (BP) is a kind of post-graphene two-dimensional (2D) nanomaterial with special structural in-plane anisotropy, and it has better optical and electrical properties than carbon-based and sulfur-based metal NPs (Younis et al., 2021). In recent years, microorganisms with photosynthetic properties and biocompatibility, such as cyanobacteria, have been rapidly developed and used in cancer therapy and other diseases related to oxygen tension (Dahiya et al., 2021; Qamar et al., 2021). Qi and his team hybridized cyanobacterial cells with 2D BP nanosheets to form a microbe-based nanoplatform, Cyan@BPNSs Qi et al. (2021). In in vivo experiments, Cyan@BPNSs could effectively increase the oxygenation level in the tumor and maintain a tumor inhibition rate of more than 100%.
Although black phosphorus quantum dots (BPQDs) have a larger surface area and a higher 1O2 quantum yield than BP (up to 0.91 in oxygen-saturated solution), the instability and poor tumor targeting of BPQDs in physiological environments limit their further research and clinical application (Ding S. et al., 2021). To this end, Liu and his team used the cationic polymer polyethyleneimine (PEI) to modify BPQDs, and modified RGD peptides targeting tumor cells onto their surfaces to prepare BPQDs@PEI + RGD-PEG + DMMA, which had pH-responsive charge-switching and tumor-targeting properties Liu et al. (2021i). The results showed that BPQDs@PEI + RGD-PEG + DMMA had good stability in vivo and could increase the uptake of BPQDs by tumor cells through charge conversion under light, achieving enrichment at the tumor target. Most tumor microenvironment-responsive nanoparticles improve the targeting of nanomaterials by responding to the hypoxic microenvironment, but many tumor tissues do not exhibit hypoxia (Huang C. et al., 2020). The hypoxic sites of tumors are mostly located deep in the core of metabolically active tumor tissue, but traditional nanomaterials have the disadvantage of weak tissue penetration. To this end, Ding and his team were the first to combine BPQDs and genetically engineered E. coli expressing catalase by electrostatic adsorption to form hybrid engineered E. coli/BPQDs (EB) Ding D. et al. (2021). Under light exposure, EB could dissolve the cell membrane of the catalase-carrying E. coli, and the released catalase could then generate oxygen to relieve hypoxia in the tumor.
Besides BP, red phosphorus (RP) can also generate reactive oxygen species under visible light irradiation. Among them, Z-scheme RP/BP nanosheets, prepared by exploiting the high separation efficiency of electron-hole pairs, not only have photocatalytic activity but also generate ROS with higher efficiency. To this end, Kang and his team prepared M-RP/BP@ZnFe2O4 NS hybrid nanomaterials with higher stability and tumor targeting by loading Z-scheme RP/BP nanosheets with ZnFe2O4 and tumor cell membranes Kang et al. (2022). Among the components, ZnFe2O4 could not only increase the productivity of ROS by catalyzing the Fenton reaction, but also induced apoptosis of MB-231 cells through oxidative stress.
Metal-Organic Frameworks
MOFs are a new class of molecular crystal materials composed of metal ions or clusters bridged by organic connectors. MOFs can hierarchically integrate NPs and/or biomolecules into a single framework by taking advantage of their synthetic tunability and structural regularity, and can serve as highly active sites for multifunctional therapy. Focusing on the key factor of O2 production, Ren et al. designed and assembled a novel two-stage intelligent oxygen-generation nanoplatform based on a metal-organic framework core modified by Pt and CaO2 NPs (UIO@Ca-Pt) Ren et al. (2021). It was based on the porphyrin metal-organic framework (UIO), loaded with CaO2 NPs via polydopamine (PDA), and then further modified with Pt to improve biocompatibility and efficiency. In the TME, CaO2 could react with water to increase the H2O2 content. This H2O2 was further decomposed into O2 by the Pt NPs, thereby allowing the TCPP in the nanoparticle core to convert the surrounding O2 into 1O2 under laser irradiation. On the other hand, as the most toxic ROS, ·OH has a stronger oxidizing ability and better therapeutic properties than 1O2 (Manivasagan et al., 2022). Recently, a strategy to induce in situ generation of ·OH has been proposed, in which Fenton-type agents are introduced into tumor cells to exploit the H2O2 over-expressed there. Chen et al.'s research focused on designing nanocarriers for the transport of Fenton catalysts or metal ions such as iron ions. They synthesized MIL-101(Fe)@TCPP using the nanoscale iron-based metal-organic framework MIL-101(Fe) loaded with the 5,10,15,20-tetrakis(4-carboxyphenyl)porphyrin (TCPP) PS Chen et al. (2021d). MIL-101(Fe) catalyzed the conversion of H2O2 to ·OH through the Fenton reaction in the acidic TME, and served as a nanocarrier to deliver the TCPP PS to generate photoactivated 1O2 for tumor-specific therapy without serious side effects, showing great potential for antitumor applications.
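The oxygen- and radical-generating steps invoked for these two MOF systems can be summarized by well-known reactions (written in simplified form; the calcium peroxide hydration step is our reading of how CaO2 raises the H2O2 level in UIO@Ca-Pt):
\[ \mathrm{CaO_2 + 2\,H_2O \longrightarrow Ca(OH)_2 + H_2O_2} \]
\[ \mathrm{2\,H_2O_2 \xrightarrow{\;Pt\ (catalase\text{-}like)\;} 2\,H_2O + O_2} \]
\[ \mathrm{Fe^{2+} + H_2O_2 \longrightarrow Fe^{3+} + {\cdot}OH + OH^-} \qquad (\text{Fenton reaction}) \]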
Iron-based MOFs have the advantages of high drug loading capacity, adjustable degradability and flexible structure. Porphyrin-based MOFs can be used as PSs for high-efficiency PDT with high stability (Ren et al., 2021). The hypoxic TME and endogenous antioxidant defense (AOD), such as the high expression of glutathione (GSH), can weaken the therapeutic effect of PDT. Therefore, breaking the cellular redox balance through ROS enrichment and AOD inactivation may lead to effective tumor suppression with high clinical significance (Zhao et al., 2020). By combining iron with porphyrin-based MOFs, the maximum antitumor efficacy can be exerted by intelligently responding to exogenous and endogenous stimuli. Inspired by all this, Yu et al. first prepared a metal-organic framework nanosystem (NMOF) based on coordination between Fe(III) and TCPP by a one-pot method (Fe-TCPP NMOF). Then, after capping the surface with silk fibroin (SF) to form NMOF@SF NPs, this nanoplatform was loaded with the hypoxia-activated prodrug tirapazamine (TPZ) to form NMOF@SF/TPZ (NST) Yu et al. (2022). Utilizing the Fe(III) in Fe-TCPP, NST could effectively react with tumorous GSH to generate glutathione disulfide (GSSG) and Fe(II), thereby disabling the AOD system. On the other hand, the Fenton-like activity of Fe(II) and TCPP-mediated PDT promoted the accumulation of ·OH and ROS, and aggravated intracellular oxidative stress under laser irradiation. The redox metabolism disorder caused by the disabled AOD and the enrichment of ROS could cause irreversible tumor cell damage. In addition, the deoxygenation caused by PDT increased hypoxia and thereby activated TPZ, transforming it into the cytotoxic benzotriazinyl radical (BTZ) for tumor-specific chemotherapy. NST could achieve complete tumor elimination in vitro and in vivo. Similarly, Wang et al. also prepared the nanocarrier system Zr-MOF@PPa/AF@PEG based on the consideration that PDT oxygen consumption could activate chemotherapy (CT) drugs, thereby addressing the hypoxia challenge and improving the anti-tumor effect. The Zr-MOF@PPa/AF@PEG nanocarrier comprised a zirconium metal-organic framework (UIO-66, carrier), pyropheophorbide-a (PPa, PS) and 6-aminoflavone (AF, a hypoxia-sensitive drug). Under ultraviolet light, Zr-MOF@PPa/AF@PEG absorbed part of the excitation energy intended for PPa, so its 1O2 yield (40%) was lower than that of free PPa (60%); nevertheless, it solved the self-aggregation problem caused by the hydrophobicity of PPa, thereby improving the overall PDT effect. In addition, Zr-MOF@PPa/AF@PEG produced 1O2 under light stimulation with spatiotemporal selectivity. Therefore, Zr-MOF@PPa/AF@PEG took advantage of the PDT-induced hypoxia to activate the HIF-1 inhibitor AF, enhancing the anti-tumor effect and achieving synergistic PDT-chemotherapy (PDT-CT) therapeutic effects. In addition, tuning the PDT efficiency by adjusting the thickness of the MOF shell is also a good approach. The Au@MOF core-shell hybrids prepared by Cai et al. could tune the thickness of the MOF shell by controlling the interlayer coordination reaction Cai Z. et al. (2021). As the MOF shell thickness increased, the percentage of TCPP and the efficiency of PDT also increased.
The use of solar energy to drive photocatalytic reactions has long been considered an ideal way to obtain O2 (Liu et al., 2021j). Compared with previous oxygen-generating substances, water-splitting materials have the unique advantage of O2 self-supply together with generation of ROS, because water is abundant in the organism. At present, ultraviolet (UV) light, visible light (Nguyen et al., 2021) and a limited region of the first near-infrared window (NIR-I, 650-950 nm) (Huang L. et al., 2020) can activate water-splitting materials to achieve light-driven endogenous water oxidation (the photocatalytic water-splitting reaction) to obtain O2 and ROS, but water-splitting materials acting as NIR-II-light-triggered molecular O2 generators had not been reported for tumor therapeutics, even though NIR-II light provides deeper tissue penetration. For this, Liu et al. synthesized a new type of plasmonic Ag-AgCl@Au core-shell nanomushrooms (NMs) by selectively photodepositing plasmonic Au at the bulge sites of Ag-AgCl nanocubes (NCs) (Figure 7). Under NIR-II light irradiation, the plasmonic effect of the Au nanostructure could overcome hypoxia by oxidizing endogenous H2O to produce O2, thereby alleviating the hypoxic microenvironment. Almost at the same time, O2 could react with the electrons in the AgCl conduction band to generate superoxide anion radicals (O2-) for photodynamic therapy. In addition, the combination index value of PDT for Ag-AgCl@Au NMs was 0.92, indicating that Ag-AgCl@Au NMs have excellent PDT properties and can promote the PDT effect in deep, O2-deprived tumor tissues. Functionalizing NMOFs with stimuli-responsive gating units to develop signal-controlled drug delivery systems for biomedical applications has also been a research hotspot in recent years. One important subcategory of gated drug-loaded NMOFs comprises stimulus-responsive, nucleic acid-locked drug-loaded NMOFs, benefiting from the remarkable versatility of nucleic acid sequences to generate recognition elements and structural elements (Zhao et al., 2020). For this, Zhang et al. designed and synthesized UIO-66 metal-organic framework nanoparticles (NMOFs) gated with DNA tetrahedra functionalized by an ATP aptamer or a VEGF aptamer; the loaded DOX was released in response to ATP or VEGF. They then utilized VEGF-responsive tetrahedra-gated NMOFs to load the photosensitizer Zn(II) protoporphyrin IX (Zn(II)-PPIX) to synthesize Zn(II)-PPIX/G-quadruplex VEGF aptamer-tetrahedral nanostructures. VEGF triggered the release of Zn(II)-PPIX from the complex. Association of the released Zn(II)-PPIX with the G-quadruplex structure yielded a highly fluorescent supramolecular Zn(II)-PPIX/G-quadruplex VEGF aptamer-tetrahedral structure, enabling efficient PDT treatment of malignant cells.
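The photocatalytic chemistry described for the Ag-AgCl@Au NMs amounts to two half-reactions, shown here as a simplified summary of the mechanism stated above:
\[ \mathrm{2\,H_2O \longrightarrow O_2 + 4\,H^+ + 4\,e^-} \qquad (\text{hole-driven water oxidation}) \]
\[ \mathrm{O_2 + e^- \longrightarrow O_2^{\cdot -}} \qquad (\text{conduction-band electron giving superoxide}) \]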
Nanoliposomes
Nanoliposomes are single-lamellar or multilamellar nanosystems formed spontaneously when phospholipids are dispersed in an aqueous medium. Besides biocompatibility and biodegradability, liposomes have high structural flexibility and can incorporate a variety of hydrophilic and hydrophobic drugs to improve their solubility and pharmacokinetics (Cheng X. et al., 2021). In addition, liposome nanotechnology can co-package a variety of PSs and/or drugs and provide sufficient binding sites for conjugation with a variety of functional ligands.
Several mechanisms have demonstrated that high concentrations of soluble NKG2DLs derived from tumor cells may inhibit tumor immunity and NK cell-mediated target cell lysis by downregulating the expression of NKG2DL, thereby contributing to tumor immune escape (Curio et al., 2021). Based on this, Wang's group designed and synthesized a chlorin-based photoactivatable galectin-3-inhibitor nanoliposome (PGIL), which combines the photosensitizer chlorin e6 (Ce6) and low-molecular-weight citrus pectin (LCP) (Figure 8) Wang et al. (2019). The intracellular release of LCP inhibits the activity of galectin-3, which increases the affinity of major histocompatibility complex (MHC) proteins on the tumor cell membrane for NKG2D on the NK cell membrane, and thereby increases tumor cell apoptosis, inhibits their invasive ability, and enhances the recognition of tumor cells by NK cells in melanoma cells after PDT. In addition, pharmacological inhibition of MMPs reduces the level of shed NKG2DLs, which could increase the tumor cell surface expression of NKG2DLs, reverse their immunosurveillance escape properties, and make them easier to be cleared by immune cells (mainly NK cells) (Tampa et al., 2021). Based on this, Liu's group designed and synthesized a PS-MMP inhibitor nanoliposome (Ce6-SB-3CT@Liposome, Lip-SC), which combines the PS Ce6 and a matrix metalloproteinase (MMP) inhibitor (SB-3CT). The nanoliposomes showed significant anti-proliferation and anti-metastasis efficacy in A375 cells after laser irradiation. Lip-SC was internalized relatively quickly, accumulated in the tumor area and, under 660 nm light irradiation, induced apoptosis in cancer cells, which could trigger an immune response. In addition, it could also induce the expression of NK group 2 member D ligands (NKG2DLs) while activating NKG2D, so that NK cells could better recognize and kill tumor cells. The subsequent release of SB-3CT could further activate NK cells effectively and strengthen the immune response by inhibiting the shedding of soluble NKG2D ligands. As a result, Lip-SC induced apoptosis in cancer cells regardless of the presence or absence of irradiation. In the process of PDT agents inducing tumor cell necrosis and apoptosis, local inflammation occurs and a variety of tumor cell antigens are exposed, thereby inducing local immune responses. Therefore, supplementing immunomodulators to enhance immune activity can increase the anti-tumor effect of PDT. Dual-ligand-modified NPs increase the number of liposomes bound to cancer cells and have a higher tumor enrichment capacity and PDT efficiency than single-ligand-modified NPs. For example, Li and his team prepared Fru-Bio-Lip by co-modifying liposomes with fructose (targeting the fructose transporter) and biotin (targeting the multivitamin transporter) ligands. Besides targeted aptamer modification, combination with near-infrared light-activated photon thermodynamic therapy (PTDT), PTT and other phototherapy methods is also an effective strategy to improve the tumor-killing effect of PDT.
Dai et al. incorporated a thiadiazoloquinoxaline semiconducting polymer (for PTT), 2,2'-azobis[2-(2-imidazolin-2-yl)propane] dihydrochloride (AIPH, a PTDT prodrug) and γ-aminoacetic acid (GA, a heat shock protein inhibitor) into thermosensitive liposomes, which were then modified with targeting aptamers to form Lip(PTQ/GA/AIPH) Dai Z. et al. (2021). Under NIR-II laser irradiation, Lip(PTQ/GA/AIPH) could achieve precise diagnosis and effective suppression of deep triple-negative breast cancer. However, the clinical application of nanoliposomes in PDT still faces challenges due to poor stability in vivo and low drug loading (Cheng Y. et al., 2021).
Mesoporous Silica Nanoparticles
Among various nanomaterials, mesoporous silica nanoparticles (MSNs) have become an important nano-delivery system for PDT and multimodal combination therapy due to their unique physical and chemical advantages, such as high loading capacity, controllable pore size and morphology, versatile surface chemistry, and satisfactory biocompatibility and biodegradability, which can ensure stable and efficient loading of PSs, targeted delivery of PSs, and regulation of drug release and cellular uptake behavior. Hypoxia is a typical feature of the tumor TME and seriously affects the efficacy of PDT. The development of nanozymes with the ability to produce oxygen is a promising strategy to overcome the oxygen dependence of PDT. In this regard, Chen's group designed and synthesized a dual-nanozyme cascade reactor, HMSN@Au@MnO2-Fluorescein Derivative (HAMF), which was composed of hollow mesoporous silica nanoparticles (HMSN), the high-efficiency photosensitizer 4-DCF-MPYM (4-FM), ultrasmall Au NPs and MnO2 Chen et al. (2021e). 4-FM is a thermally activated delayed fluorescence (TADF) fluorescein derivative with high fluorescence quantum yield, photostability, two-photon excitation and low biological toxicity. The Au NPs exhibited glucose oxidase (GOx)-mimicking activity that catalyzed glucose into gluconic acid and H2O2; at the same time, the consumption of glucose cut off the energy supply of tumor cells (Zhang Y. et al., 2021). In response to the hypoxic microenvironment, MnO2 catalyzed the conversion of H2O2 into O2 and accelerated the oxidation of glucose by the Au NPs to generate additional H2O2, which in turn served as a substrate for the MnO2-catalyzed reaction, thereby constantly producing 1O2 for enhanced PDT upon light irradiation. HAMF could alleviate tumor hypoxia and achieved effective tumor inhibition in in vitro and in vivo studies. Yin's team also designed and synthesized an H2O2-responsive and oxygen-producing nanozyme by loading a large number of gold nanoclusters (AuNCs) into MSNs to form a nanoassembly, which was wrapped with MnO2 nanosheets acting as a switchable shielding shell (denoted AuNCs@mSiO2@MnO2) Yin et al. (2021). In a neutral physiological environment, the stable MnO2 shell switched off PDT by preventing the generation of 1O2. In an acidic TME, however, the MnO2 shell reacted with H2O2, and the resulting O2 generation guaranteed a high 1O2 yield of 74%, giving strong PDT performance. In addition, multifunctional NPs loaded with chemotherapeutic drugs and PSs are also a promising approach for effective tumor combination therapy. Zhang's group designed and synthesized a novel pH-sensitive and bubble-generating mesoporous silica-based drug delivery system (denoted M(A)D@PI-PEG-RGD) (Figure 9). DOX and NH4HCO3 were loaded into the MSN pores, the MSNs were coated with a polydopamine (PDA) layer, ICG was then loaded onto the PDA surface as a photothermal and photodynamic agent, and finally the nanoparticles were modified with polyethylene glycol (PEG) and RGD. RGD is a ligand for the recognition site of integrins and mediates strong adhesion between cells and the extracellular matrix, which improves the targeting accuracy of M(A)D@PI-PEG-RGD. Under NIR irradiation, M(A)D@PI-PEG-RGD could generate ROS while the ICG induced a temperature rise. In addition, the acidic environment and the elevated temperature decomposed NH4HCO3, thereby accelerating the release of DOX.
In summary, the multifunctional pH-sensitive and bubble-producing M(A)D@PI-PEG-RGD combines chemotherapy, PTT and PDT to improve the therapeutic effect against tumors.
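For reference, the enzyme-mimetic cascade exploited by HAMF and AuNCs@mSiO2@MnO2 above can be written as simplified overall reactions:
\[ \mathrm{glucose + O_2 + H_2O \xrightarrow{\;Au\ (GOx\text{-}like)\;} gluconic\ acid + H_2O_2} \]
\[ \mathrm{2\,H_2O_2 \xrightarrow{\;MnO_2\ (catalase\text{-}like)\;} 2\,H_2O + O_2} \]
\[ \mathrm{O_2 \xrightarrow{\;PS,\ h\nu\;} {}^1O_2} \]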
Dendrimers
Dendrimers are synthetic macromolecular polymers with a tree-like, highly branched structure. They are composed of a small initial core, an internal cavity formed by multiple branches, and peripheral functional groups. Due to this special structure, dendrimers have the advantages of easy surface modification with PSs along with other functional moieties, and of structural combination with other components or nanoformulations via the hyperbranched units for enhanced tumor accumulation and penetration (Ouyang et al., 2021). The combination of molecular targeted therapy with epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs) and photodynamic therapy (PDT) can combat non-small cell lung cancer (NSCLC) with effective synergistic results (Qiao et al., 2021). However, the hypoxic TME not only affects the efficacy of PDT, but also induces EGFR-TKI resistance. In this regard, Zhu and co-workers designed a nanocomplex, APFHG, by loading gefitinib (Gef) and the PS hematoporphyrin (Hp) into an aptamer-modified fluorinated dendrimer (APF) Zhu L. et al. (2021). Due to the targeting effect of EGFR-TKIs and the good oxygen-carrying capacity of APF, APFHG could specifically recognize EGFR-positive NSCLC cells and release Gef and Hp in response to the hypoxic, acidic microenvironment. Under laser irradiation, APFHG could significantly increase the production of intracellular ROS, effectively improve the hypoxic tumor microenvironment, and overcome hypoxia-related drug resistance. In other work, X-ray-induced photodynamic therapy (XPDT) is overwhelmingly superior in treating deep-seated cancers (Chuang et al., 2020). However, the low energy transfer efficiency of the therapeutic nanoplatform and the hypoxic environment present in tumor tissue limit the therapeutic effect of XPDT. In order to improve the therapeutic effect of XPDT in deep-seated cancers, it is necessary to develop a delicate architecture that supports the organization of a nanoscintillator and multiple agents (Ahmad et al., 2019). In this regard, Zhao and co-workers developed a dual-core-satellite architecture nanosmart system (CCT-DPRS), in which a polyamidoamine (PAMAM) dendrimer was used as an intermediate framework to organize the nanoscintillator, PSs, and sunitinib (SU) (Figure 10). It had high XPDT and antiangiogenic capabilities achieved by systematically optimizing the scintillation efficiency and the nanoplatform structure. After exposure to ultralow-dose radiation, the codoped CaF2 NPs converted the trapped energy into green emission, which enabled further excitation of Rb to produce 1O2 to kill malignant tumor cells. At the same time, the antiangiogenic drug SU effectively blocked the tumor vascularization aggravated by XPDT-mediated hypoxia, rendering a pronounced synergy effect. However, PAMAM dendrimers and indocyanine green (ICG) have unavoidable interactions with proteins and cells, which induce biological toxicity and reduce therapeutic efficacy in vivo (Mindt et al., 2018). To overcome these shortcomings, Cui and co-workers designed a new drug delivery system, G5MEK7C(n)-ICG, with a "stealth" layer Cui et al. (2021). The surface of G5MEK7C(n)-ICG was modified with the p(EK) peptide, a double-layer superhydrophilic zwitterionic material. When the pH was lower than 6.5, the surface of G5MEK7C(n)-ICG showed a positive charge, which made it more likely to interact with the cell membrane in tumor tissue. Therefore, under laser irradiation in vitro and in vivo, owing to its good targeting effect, G5MEK7C(70)-ICG was more effective in killing tumors than free ICG, while the damage to the liver was less than with free ICG. The combination of chemotherapy with other therapeutic modalities can overcome chemoresistance through different mechanisms of action and thereby enhance anti-tumor efficacy. Furthermore, adding chemicals during mitosis to block cell division may be a promising approach to promote nuclear uptake of PSs. Recently, based on dendrimers and phenylboronic acid-sialic acid interactions, Zhong and co-workers modified the surface of dendrimers with phenylboronic acid (PBA), which can selectively recognize sialic acids, and meanwhile conjugated lipoic acid-modified PSs onto the dendrimer core Zhong D. et al. (2021). In this work, novel tumor-targeting and -penetrating, GSH/ROS heterogeneity-responsive, PTX-loaded dendrimeric nanoparticles (P-NPs) were developed for mutually synergistic chemo-photodynamic therapy of PTX-resistant tumors. Lentinan was coated on the periphery of the dendrimers through boronate bonds, which could avoid non-specific binding of P-NPs with normal cells during blood circulation. P-NPs penetrated into tumor tissues and actively entered the cells through the PBA-SA interactions, showing enhanced cellular uptake and tumor penetration. Subsequently, P-NPs released PTX in response to the high concentrations of glutathione and H2O2 in tumor cells, arresting the cells in the G2/M phase and exerting anti-tumor effects. At the same time, the prolonged nuclear membrane disintegration enhanced intranuclear photosensitizer accumulation, thereby increasing the efficiency of PDT through greater nuclear DNA damage.
Hydrogels
Hydrogels are a type of nano-delivery carrier system formed by hydrophilic polymer materials with a three-dimensional network structure created through chemical or physical cross-linking. Hydrogels are biocompatible and have physical properties similar to those of living tissues, and some drugs are easily dispersed in the hydrogel matrix (Hu et al., 2022). Therefore, hydrogels have been widely used for the delivery of hydrophilic drugs. Among delivery routes, local injection of hydrogels has received special attention in tumor treatment and the prevention of tumor recurrence, because this injection method can achieve the desired drug accumulation in tumors. Compared with oxygen-generating nanomaterials, prodrugs that can be activated by external light show the unique advantages of highly consistent responsiveness and high temporal and spatial selectivity (Rapp and DeForest, 2021). Recently, Liu's group constructed an original oxygen-generating hydrogel (OPeH) with photoactivated enzyme activity by loading oxygen-generating MnO2 nanoparticles conjugated with protoporphyrin IX (PpIX), together with proenzyme nanoparticles (PeN) cross-linked by a 1O2-cleavable linker, into alginate hydrogels (Figure 11). Under NIR laser irradiation, the MnO2 NPs converted H2O2 into O2, which further promoted the production of 1O2 from PpIX and improved the efficiency of 1O2 generation. In addition, after PeN was cross-linked with the 1O2-cleavable linker, it induced cell death and suppressed metastasis by inhibiting the formation of neutrophil extracellular traps (NETs). In animal experiments, it was found that OPeH integrates PDT and NIR-light-activated enzymes to achieve the combined therapeutic effect of inhibiting tumor growth and lung metastasis.
(Table: advantages and disadvantages of metal-based NPs, UCNPs, carbon-based NPs, sulfur-based NPs, phosphorus-based NPs and MOFs as nanoplatforms for PDT; see Tables 1, 2.)
In addition, phototherapy against deep tumors may be greatly limited by the lack of light flux, and chemotherapy drugs against tumor cells may be limited by insufficient residence time. Therefore, Zhong's group developed a dual drug-carrying system, DOX-CA4P@Gel, which achieved the best curative effect, efficacy and safety through local sequential delivery of drugs Zhong Y. et al. (2021). DOX-CA4P@Gel was constructed with oxidized dextran, chitosan, porphyrin and hollow mesoporous silica (HMSN), in which combretastatin A4 phosphate (CA4P) and DOX were both loaded. Under weakly acidic conditions, the degradation rate of the hydrogels increased significantly. CA4P was released rapidly at the early stage and its release was relatively stable after 48 h, while DOX was released slowly at first and then quickly after 48 h, showing an obvious sequential release behavior. The porphyrin in the hydrogel could trigger the formation of ROS, DOX could kill cancer cells at different stages of proliferation, and CA4P could inhibit the establishment of blood vessels around the tumor and increase the sensitivity of cancer cells to DOX.
Polymers
Due to the designability and diversity of their composition, structure and function, polymers have become one of the preferred carrier materials for nanomedicine. Conjugated polymers can control tissue penetration by adjusting the conjugation length. Among them, integrating PSs into multichromophoric conjugated-polymer-based NPs enhances ROS generation by improving PS solubility, permeability, and targeting (Jiang and McNeill, 2017). Exploiting the good biocompatibility and high fluorescence yield of carbon-based fluorescent nanomaterials such as carbon dots (CDs) and carbon-based polymer dots (CPDs), Sajjad et al. conjugated green-emitting CPDs to PPa to enhance the photocatalytic performance of the PS through covalent and π-π interactions Sajjad et al. (2022). Semiconducting polymer NPs (PSBTBT NPs) can not only load PSs but also act as photothermal agents for PDT/PTT synergistic therapy. Inspired by "biomarker-triggered, image-guided" therapy, Bao et al. loaded the fluorescence quencher Rhodamine B (Rhod B) and Ce6 onto PSBTBT NPs to prepare a smart "sensing and healing" nanoplatform for PTT/PDT combination therapy (PSBTBT-Ce6@Rhod NPs) (Figure 12) Bao et al. (2021). Similarly, IR780 iodide can generate heat and reactive oxygen species for PDT/PTT synergistic therapy. In addition, it has strong fluorescence intensity and inherent specificity for various tumor cells, which makes it suitable for near-infrared imaging of tumor cells. In view of this, Potara et al. used the temperature-sensitive block copolymer Pluronic F127 to wrap IR780 iodide, and coupled folic acid (FA) to the Plu-IR780 micelles through chitosan, to design novel FA-targeted near-infrared phototherapy NPs (Plu-IR780-chit-FA) with a thermoresponsive and thermally reversible size distribution and spectral properties Potara et al. (2021). Plu-IR780-chit-FA not only retained the PTT, PDT and NIR imaging properties of IR780 iodide, but also adjusted the size of the nanocapsules to a minimum (30 nm) at physiological temperature to ensure cellular uptake, while also retaining maximum absorption and fluorescence emission intensity. Polymeric micelles, which are constructed from amphiphilic polymers, have been regarded as ideal carriers for nanomedicine due to their good biocompatibility, degradability, easy structural modification, and special "core-shell" structure (Tian and Mao, 2012).
(Table 3: recent nanoparticle-based PDT systems and the targeting or response strategies they employ.)
Using perfluorocarbons (PFCs) as oxygen carriers to directly deliver oxygen to tumors is a common way to relieve tumor hypoxia and enhance the PDT effect. Recently, Tseng et al. developed a folate-conjugated fluorinated polymeric micelle (PFFA)-Ce6 micellar system, which exhibited higher ROS production, good long-term stability, a higher oxygen-carrying capacity and improved PDT efficacy in inhibiting tumor growth compared with a non-PFC system Tseng et al. (2021). The fluorinated segment in PFFA-Ce6 could not only maintain the oxygen-carrying capacity of the polymer micelle without the problem of PFC molecule leakage, but also acted as a reservoir to accommodate the hydrophobic Ce6 and enhance its solubility. The folate moiety in PFFA-Ce6 served as a specific targeting ligand for cancer cells.
In the in vitro cell study, the growth inhibition of HeLa cells after irradiation was higher owing to the selective internalization of PFFA-Ce6. Cancer cells adapt to the oxidative stress caused by PDT by overexpressing glutathione (GSH) and other antioxidants. Therefore, preferentially amplifying the oxidative stress of tumor cells by consuming GSH or producing ROS is a rational treatment strategy to enhance the efficacy of PDT. To this end, Xia et al. designed a GSH-scavenging and ROS-generating polymeric micelle, mPEG-S-S-PCL-Por (MSLP), composed of a methoxy polyethylene glycol (mPEG)-S-S-poly(ε-caprolactone)-protoporphyrin (Por) amphiphilic polymer and the anticancer drug DOX, for amplifying oxidative stress and enhancing the anticancer efficacy of PDT Xia et al. (2021). MSLP-based chemotherapy-photodynamic (Chemo-PDT) synergistic therapy exhibited significant antitumor activity in vitro (IC50 = 0.041 μg/ml) and much better antitumor efficacy than mPEG-PCL-Por (MLP) micelles in vivo.
In addition, Yang et al. also designed a GSH-responsive dual receptor targeting nanomicelle system, which could be used for precise fluorescent bioimaging and superior synergistic chemophototherapy of tumors . IR780/PTX/FHSV micelles were composed of amphiphilic hyaluronic acid derivative (FHSV), paclitaxel (PTX) and photosensitizer IR780 iodide (IR780). Once they accumulated at the tumor site through enhanced permeability and retention (EPR) effects, IR780/PTX/FHSV micelles could effectively enter tumor cells through receptor-mediated endocytosis, and then rapidly release PTX and IR780 in the GSH-rich tumor microenvironment. Under near-infrared laser irradiation, IR780 generated local high temperature and sufficient reactive oxygen species to promote tumor cell apoptosis and necrosis. The results of in vivo and in vitro experiments consistently shown that compared with single chemotherapy and phototherapy, IR780/PTX/FHSV micelle-mediated chemophototherapy could more effectively synergize anti-tumor effects to kill tumor cells. However, further clinical applications of polymers are limited due to the disadvantages of poor storage stability, potential tissue toxicity, limited loading capacity for hydrophilic drugs, and complicated preparation processes . Co 3 S 4 -ICG Promoting Fenton reaction to generate ROS through PTT 2021 Bovine serum albumin (BSA) NP FeS 2 @SRF@BSA (Feng et al., 2021) The combination of Fenton-like reaction and PDT enhanced ROS production and antitumor effect 2021 Metal-Organic Framework Zr-MOF@PPa/AF@PEG Zr-MOF@PPA/AF@PEG take advantage of the PDTinduced hypoxia to activate HIF-1 inhibitor AF to enhance the anti-tumor effect and achieve the synergistic PDTchemotherapy (PDT-CT) therapeutic effects
THE APPLICATIONS AND THE PHOTOSENSITIZERS FOR CLINICAL TUMOR TREATMENT
PDT has been used to treat a variety of cancers, including lung, head and neck, brain, pancreatic, peritoneal cavity, breast, prostate, skin, and liver cancer (Kubrak et al., 2022). The first-generation PSs are hematoporphyrin derivatives; they have long plasma half-lives and limited sensitivity. For example, Photofrin® has been licensed for use in the oesophagus, lung, stomach, cervix and bladder. Since the absorption spectrum of porfimer sodium peaks at 405 nm, its depth of action is limited to 0.5 cm (Ramsay et al., 2021).
Third-generation PSs are characterized by combining second-generation PSs with targeting entities or moieties, such as antibodies, amino acids and polypeptides, or by encapsulating them in highly biocompatible nanocarriers, to improve the accumulation of PSs at the targeted tumor sites (Baskaran et al., 2018). Third-generation PSs that can significantly improve cancer-targeting efficiency through chemical modification, nano-delivery systems, or antibody conjugation are now being widely studied preclinically, and if satisfactory results are obtained, more third-generation PSs will advance to the clinical research stage (Mfouo-Tynga et al., 2021).
CONCLUSION AND PERSPECTIVE
In summary, we have introduced the advantages and disadvantages (Tables 1, 2) and recent studies (Table 3) of metal NPs, nanoliposomes, MSNs, dendrimers, hydrogels and polymers designed to overcome the obstacles that PDT faces in tumor tissues, such as poor biocompatibility and low delivery efficiency of PSs, poor light transmittance of deep tissues, and hypoxia, reactive oxygen species scavenging and immunosuppression in the tumor TME (Lee et al., 2022). Insufficient supply of the pivotal factors, namely PSs, light, and O2, strongly reduces the therapeutic efficacy of PDT. Therefore, the primary inputs of photodynamic therapy should be optimized by setting parameters such as input dose, intratumoral drug level, light source, and tissue oxygen conditions. Based on recent progress in the combination of nano- and biotechnology, various kinds of NPs have been developed for PDT, and they show promising potential to overcome these obstacles. PSs are an indispensable ingredient in photodynamic therapy. In most cases, PSs are aggregated inside the nanoparticles, and 1O2 has a short half-life (<40 ns) and a short intracellular diffusion distance (<20 nm), so 1O2 needs time and space to diffuse out of the NPs to attack biomolecules in cells, which further impairs the efficiency of PDT. Therefore, it is often necessary to release free PSs from nanoparticles before irradiating cancer cells or tumor tissue with light. In addition to optimizing the physical structure and chemical composition of passively targeted PSs and modifying them with active targeting ligands for efficient, safe delivery and better tissue penetration, improving drug delivery and combination therapy are also effective strategies. In recent years, stimuli-responsive nanomaterials have received increasing attention because they can react to endogenous stimuli (low pH, enzymes, redox agents, hypoxia) and exogenous stimuli (light, temperature, magnetic fields, ultrasound) to change their physicochemical properties and release PSs. However, the release of PSs from endogenous stimuli-responsive nanocarriers is slow because the stimulation intensity is weak. In contrast, exogenous stimuli are easily modulated remotely in intensity and timing to release drugs on demand in diseased tissues or cells. Nevertheless, the poor penetration of drugs into deep tumor tissue and the poor penetration of the excitation light source into deep tumor tissue are important issues that remain to be solved for NP delivery systems. The use of UCNPs, XPDT, and fluorescence imaging techniques provides clues for efficient light delivery to deep tissues. Among these approaches, the potential toxicity of heavy metals is a serious limitation that needs to be carefully considered, and favorable distribution, degradation and excretion should be achieved (Medici et al., 2021). Attempts to overcome hypoxia in tumor tissues include artificial oxygen production, the Fenton reaction, and the combined application of hypoxia-related chemical drugs. Beyond these promising results, combining PDT with the latest methods to normalize tumor blood vessels and reduce hypoxia itself is also expected to become an effective approach for treating tumors. Although many of these ideas are still at the level of cell and animal experiments, these studies represent significant progress over traditional PDT research. Therefore, we hope that the results of these trials can maximize the clinical efficacy of PDT in the future.
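To see why such a short 1O2 lifetime translates into a nanometre-scale action radius, a rough three-dimensional diffusion estimate can be used; the diffusion coefficient below is an assumed, order-of-magnitude value for the crowded intracellular milieu:
\[
L \approx \sqrt{6\,D\,\tau} \approx \sqrt{6 \times (2\times10^{-10}\ \mathrm{m^2\,s^{-1}}) \times (40\times10^{-9}\ \mathrm{s})} \approx 7\ \mathrm{nm},
\]
which is consistent with the sub-20 nm intracellular range quoted above and explains why PSs buried deep inside a nanoparticle waste much of the 1O2 they generate.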
In addition, future research and development of new nanomaterials should focus on targeted therapy and personalized medicine. In order to improve the specificity and safety of drugs, targeted delivery and on-demand controlled/triggerable release remain the focus of drug delivery platform development.
AUTHOR CONTRIBUTIONS
The manuscript was written through the contributions of all authors. LC, JH, and SL conceived this work. LC and JH mainly wrote the manuscript. All authors have given approval to the final version of the manuscript.
Kinetic analysis of the accumulation of a half-sandwich organo-osmium pro-drug in cancer cells
The organo-osmium half-sandwich complex [(η6-p-cymene)Os(Ph-azopyridine-NMe2)I]+ (FY26) exhibits potent antiproliferative activity towards cancer cells and is active in vivo. The complex is relatively inert, but is rapidly activated in cells by displacement of coordinated iodide. Here, we study the time-dependent accumulation of FY26 in A2780 human ovarian cancer cells at various temperatures in comparison with the chlorido metabolite [(η6-p-cymene)Os(Ph-azopyridine-NMe2)Cl]+ (FY25). Mathematical models described the time evolution of FY26 and FY25 intracellular and extracellular concentrations, taking into account both cellular transport (influx and efflux) and the intracellular conversion of FY26 to FY25. Uptake of the iodide complex FY26 at 37 °C was 17× faster than that of the chloride complex FY25, and efflux 1.4× faster. Osmium accumulation decreased markedly after 24 h of exposure. Modelling revealed that this phenomenon could be explained by complex-induced reduction of osmium uptake, rather than by a model involving enhanced osmium efflux. The intracellular osmium concentration threshold above which the reduction in drug uptake was triggered was estimated as 20.8 μM (95% confidence interval [16.5, 30]). These studies provide important new insight into the dynamics of transport of this organometallic anticancer drug candidate.
FY26 is a potent prodrug which is active in vitro and in vivo and is not cross-resistant with platinum drugs. The cellular transport and metabolism of FY26 were investigated in human ovarian cancer cells through combined mathematical and experimental tools. Such data are likely to be important for furthering the clinical development of FY26. The presented interdisciplinary methodology guides the design of experiments, thus decreasing the costs associated with drug development.
Introduction
Around half of all cancer chemotherapy treatments currently use platinum compounds, including cisplatin, carboplatin and oxaliplatin, introduced over 30 years ago. Despite the success of such therapies, resistance to platinum is now a clinical problem, together with drug-induced side-effects.1,2 Complexes of other precious metals may provide anticancer drugs with new mechanisms of action to overcome such resistance, and fewer side-effects. Organometallic complexes, and osmium(II) arene complexes in particular, have recently shown promising results.3-7 In general, metallodrugs are pro-drugs which undergo ligand exchange or redox reactions before they reach the target site. For example, cisplatin contains square-planar 5d8 Pt(II) and is activated in cells by hydrolysis (aquation), in which chlorido ligands are substituted by water. The aqua adducts, and the resulting positively-charged complexes, are much more reactive towards target DNA. Octahedral platinum(IV) anticancer complexes are even more unreactive, as is common for low-spin 5d6 complexes, and are activated in vivo by reduction to Pt(II). Such inertness allows some low-spin d6 complexes to reach their target sites intact and bind by outer-sphere interactions, e.g. between the ligands and amino acid side-chains in an enzyme binding site, for example potent kinase inhibitors based on staurosporine conjugated to Ru(II) and Ir(III).8 However, the inertness of such metal ions is dependent not only on the electronic configuration of the metal, but also on the nature of the ligands to which it is bound.
Here we study the inert iodido complex [(η6-p-cymene)OsII(N,N-dimethyl-phenylazopyridine)I]+ (FY26) which is ca. 49× more active than cisplatin, in a panel of over 800 cancer cell lines screened by the Sanger Institute, and also active in vivo.9,10 FY26 is not cross-resistant with cisplatin, nor the second generation Pt(II) drug oxaliplatin, retaining potency in resistant/non-sensitive cell lines, which suggests a different mechanism of action.10-12 The complex appears to act as a redox modulator, disrupting mitochondria and inducing generation of reactive oxygen species (ROS), especially superoxide, which has been observed in both cancer cells and in vivo.10-12 Such a mechanism exploits an inherent weakness in cancer cells, as their malfunctioning mitochondria cannot cope with overproduction of ROS. This likely contributes to the increased selectivity of FY26 towards cancer cells versus normal cells compared to cisplatin.13 Surprisingly, FY26 does not readily undergo hydrolysis in the extracellular medium. However, we discovered by 131I radiolabelling that FY26 is rapidly activated inside cancer cells by displacement of its iodide ligand.14 Intriguingly, this activation appears to be mediated by attack of the intracellular tripeptide glutathione (GSH) on the azo bond of FY26 in a catalytic mechanism which weakens the Os-I bond.15 Under intracellular cytoplasmic conditions where the chloride concentration is ca. 25 mM, HPLC studies show that this reaction can lead to the generation of the chlorido complex FY25.11 We have detected the chlorido complex FY25 as a metabolite in liver microsomes after reactions with FY26 (unpublished data).
The importance of pharmacodynamic (PD) and pharmacokinetic (PK) studies in preclinical drug development based on mathematical modelling has recently been emphasised. 16 As a key part of such studies, we have investigated the kinetics of the influx and efflux of both the iodido organo-osmium prodrug FY26 and its chlorido metabolite FY25 in human ovarian cancer cells. We have constructed a mathematical model which describes the time evolution of each chemical species, taking into account both cellular transport, and intracellular conversion of FY26 into FY25 (Fig. 1).
Experimental section
Ovarian cell line
A2780 human ovarian carcinoma cells were obtained from the European Collection of Cell Cultures (ECACC) and grown in Roswell Park Memorial Institute medium (RPMI-1640) supplemented with 10% of foetal calf serum, 1% of 2 mM glutamine and 1% penicillin/streptomycin using a 5% CO2 humidified atmosphere and passaged at approximately 70-80% confluence.
Drugs and reagents
FY25 and FY26 were synthesized and characterized with purities >95% according to reported methods.11 Osmium stock solution standardization: stock solutions of FY25 and FY26 were freshly prepared in RPMI-1640 cell culture medium using 5% DMSO to aid solubilisation. An aliquot was taken and diluted using 3.6% nitric acid (containing thiourea (10 mM) and ascorbic acid (100 mg L-1) to stabilize Os in nitric acid solution).17 The resulting solution was analysed using a PerkinElmer Optima 5300DV ICP-OES. Calibration standards for Os (50-700 ppb) were freshly prepared in 3.6% nitric acid (containing 10 mM thiourea and 100 mg L-1 ascorbic acid).17 Data were acquired and processed using WinLab32 for Windows.
Cellular osmium accumulation (without recovery time)
Briefly, 4 × 10^6 A2780 ovarian cancer cells were seeded on a Petri dish. After 24 h of pre-incubation time in drug-free medium at either 4 °C, 23 °C or 37 °C, the osmium complexes were added to give final concentrations equal to their IC50 (1.8 ± 0.1 μM and 0.15 ± 0.01 μM for FY25 and FY26, respectively). Cells were exposed to the Os(II) complexes with variable exposure time, but without recovery time in drug-free medium. Cells were then washed, treated with trypsin/EDTA and counted. Cell pellets were collected and washed again with PBS. Each pellet was digested overnight in 200 μL freshly-distilled concentrated nitric acid (72%) at 353 K. Experiments were carried out in triplicate and the standard deviations were calculated. Statistical significances were determined using Welch's unpaired t-test.
Cellular osmium accumulation (with recovery time)
These experiments were carried out as above, using fixed 24 h drug exposure time and equipotent IC50 concentrations. After drug exposure, supernatants were removed, cells were washed with PBS and fresh medium was added to each plate. Multiple recovery times in drug-free medium were allowed before collecting cell pellets that were again processed as for the previous experiment.
Osmium cell pellet quantification
The resulting solutions after cell digestion were diluted using doubly-distilled (MilliQ) water containing thiourea (10 mM) and ascorbic acid (100 mg L⁻¹) 17 to achieve a final working acid concentration of 3.6% v/v HNO₃. Osmium (¹⁸⁹Os) was quantified using an Agilent 7500 series ICP-MS in no-gas mode with an internal standard of ¹⁶⁶Er (50 ppb). Calibration standards (0.1-1000 ppb) were freshly prepared in 3.6% nitric acid supplemented with thiourea (10 mM) and ascorbic acid (100 mg L⁻¹) to stabilize osmium in nitric acid solution. 7,17 Osmium concentrations were first expressed in ng per million cells using cell counts from biological experiments. Experiments were carried out as triplicates and standard deviations were calculated. Data were acquired and processed using ICP-MS-TOP and Offline Data Analysis (ChemStation version B.03.05, Agilent Technologies, Inc.). The osmium content of cells was converted to intracellular molar concentration assuming a single cell volume of 1 pL. 18,19 See ESI, † for full numerical data.
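The conversion from the reported mass of osmium per million cells to an approximate intracellular molar concentration is a straightforward unit calculation under the stated single-cell volume of 1 pL. A minimal sketch is given below; the input value is illustrative, not a measured datum.

```python
# Convert an ICP-MS reading (ng Os per million cells) into an approximate
# intracellular molar concentration, assuming each cell occupies 1 pL.
OS_MOLAR_MASS = 190.23          # g/mol, atomic mass of osmium
CELL_VOLUME_L = 1e-12           # 1 pL per cell, as assumed in the text

def intracellular_concentration_uM(ng_per_million_cells: float) -> float:
    grams = ng_per_million_cells * 1e-9      # ng -> g (per 10^6 cells)
    moles = grams / OS_MOLAR_MASS            # g  -> mol
    volume_l = 1e6 * CELL_VOLUME_L           # total volume of 10^6 cells (1 uL)
    return moles / volume_l * 1e6            # mol/L -> umol/L

# Illustrative value only: 2 ng Os per million cells is roughly 10.5 uM intracellular.
print(intracellular_concentration_uM(2.0))
```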
Mathematical modeling
Mathematical models are based on ordinary differential equations computing the time evolution of intracellular and extracellular FY25 and FY26 concentrations (see ESI, † for details). All models include two physiological compartments corresponding to the extracellular medium and the intracellular medium of one million cells. The volume of the culture medium was set to 6 mL and that of the intracellular compartment was estimated as 1 μL, assuming a single cell volume of 1 pL. 18,19 Parameter estimation consisted of a weighted least-squares approach using the CMAES algorithm for the minimization task. All individual data points were fitted by iteratively running the estimation procedure and updating initial conditions with the current best-fit parameters until reaching convergence. Model identifiability was assessed using the profile likelihood method 20 (see ESI †). Parameter 95% confidence intervals were derived from the likelihood profiles.
All computations were carried out in Matlab (Mathworks, USA).
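The full equations and best-fit parameters are given in the ESI, and the original computations were performed in Matlab with the CMAES optimiser. As a rough illustration of the overall workflow (simulate a two-compartment uptake/efflux/activation model, then minimise a least-squares distance to the measured intracellular osmium), a minimal Python sketch is shown below. The rate constants, data points, and the use of a Nelder-Mead optimiser in place of CMA-ES are placeholder assumptions, not the published model or values.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

V_EXT, V_INT = 6e-3, 1e-6   # litres: culture medium and 10^6 cells (1 pL each)

def model(t, y, k_in, k_out, k_act):
    """y = [FY26_ext, FY26_int, FY25_int] concentrations (uM); linear transport
    plus intracellular activation of FY26 into FY25 (illustrative structure)."""
    fy26_ext, fy26_int, fy25_int = y
    uptake = k_in * fy26_ext
    efflux = k_out * fy26_int
    d_ext = (-uptake + efflux) * V_INT / V_EXT        # mass balance on the medium
    d_int26 = uptake - efflux - k_act * fy26_int
    d_int25 = k_act * fy26_int
    return [d_ext, d_int26, d_int25]

def simulate(params, t_obs, dose_uM):
    sol = solve_ivp(model, (0.0, t_obs[-1]), [dose_uM, 0.0, 0.0],
                    args=tuple(params), t_eval=t_obs)
    return sol.y[1] + sol.y[2]          # total intracellular Os (uM)

def loss(log_params, t_obs, os_obs, dose_uM):
    pred = simulate(np.exp(log_params), t_obs, dose_uM)   # optimise log-parameters
    return np.sum((pred - os_obs) ** 2)

# Illustrative data points (h, uM); these are not the published measurements.
t_obs = np.array([2, 4, 8, 12, 24], float)
os_obs = np.array([3, 6, 10, 14, 18], float)
fit = minimize(loss, x0=np.log([0.5, 0.1, 0.05]),
               args=(t_obs, os_obs, 0.15), method="Nelder-Mead")
print(np.exp(fit.x))   # estimated (k_in, k_out, k_act)
```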
FY25 and FY26 cellular transport
First, A2780 human ovarian cancer cells were treated with chlorido complex FY25 or iodido complex FY26 for 72 h at physiological temperature (37 °C) using equipotent IC₅₀ concentrations in this cell line (1.5 μM and 0.15 μM, respectively). The intracellular concentration of Os in treated cells was subsequently determined using ICP-MS after acidic digestion of cell pellets. During exposure to FY25, Os accumulated in A2780 cells in the course of the first 24 h. After that time, the intracellular Os concentration decreased gradually for the next 48 h (Fig. 2A). Similarly, treatment of ovarian cancer cells with FY26 resulted in an intracellular accumulation of Os over the first 12 h and then an overall decrease in Os concentration over the following 48 h of exposure. Intracellular Os concentration versus time profiles are very similar for exposure to either FY25 or FY26 (Fig. 2A).
To further investigate the cellular transport of FY25 and FY26, the decrease in intracellular accumulation of Os was determined after exposing the cells to these complexes for 24 h and then replacing the culture medium with fresh complex-free culture medium (so called 'recovery time' in complex-free medium). During this experiment, cells were washed with PBS before recovery time to ensure that the Os measured was indeed in the intracellular space and not on the outer cellular surface of the monolayer nor in the leftover medium in the well. After such careful removal of either complex from the extracellular medium, the intracellular concentration of Os decreased over the duration of the experiment, with similar kinetics for either FY25 or FY26 exposure (Fig. 2B).
To assess whether energy-dependent mechanisms are involved in the cellular transport of the complexes, ovarian cells were exposed to FY25 or FY26 at 4 °C, 23 °C or 37 °C over 8 h. For all of these time points, intracellular accumulation of Os increased with temperature. After 8 h of exposure to either FY25 or FY26, the Os intracellular concentration decreased by 3.6- and 4-fold respectively at 23 °C, and by 20.3- and 31.1-fold respectively at 4 °C, compared to those determined at 37 °C (Fig. 2C).
Mathematical modeling of FY25 and FY26 cellular pharmacokinetics
The designed models of the cellular pharmacokinetics (PK) of FY25 and FY26 aimed to investigate quantitatively the cell uptake and efflux of each complex together with the activation of FY26 into FY25 (Fig. S1 in ESI †). The first accumulation study without recovery time showed that Os intracellular concentration decreased after 24 h of exposure to either complex (Fig. 2). The hypothesis that FY26 and FY25 concentrations in the culture medium decreased over time was ruled out since (i) the complexes are stable in culture medium 11 and (ii) the extracellular volume (10 mL) was much greater than the total intracellular volume of all cells (ca. 4 μL, calculated as 4 million cells × 1 pL each), so the quantity of Os accumulating in the cells was negligible with respect to extracellular concentrations of complexes. Thus, FY26 or FY25 were assumed not to disappear spontaneously from the extracellular compartment. The Os concentration versus time profiles, which first increased and then decreased towards zero values, could not be accounted for using a simple model incorporating linear uptake and efflux terms with constant reaction rates, which allowed only for increasing profiles of Os intracellular concentration ultimately reaching a non-zero steady state level. Thus, two mathematical models were designed to investigate the intracellular mechanisms that might operate during exposure to complexes and affect their cellular transport. Such mechanisms could enable cancer cells to evade damage caused by the metal complex, and so, their molecular understanding might be important in further preclinical studies. Two hypotheses were investigated: the presence of either (i) enhanced efflux or (ii) reduced uptake, both as a result of exposure to the organo-osmium complexes (Fig. 3). The ''enhanced efflux'' model assumes that FY25 activates a Nuclear Factor (NF) that in turn enhances the transcription of efflux transporters, hence increasing efflux of the complex from the cells. Such assumptions rely on experimental studies demonstrating that the expression of the ATP-Binding Cassette transporters Abcb1 and Abcg2 is mediated by the nuclear factors NF-κB and Nrf2, respectively. 21 In contrast, the ''reduced uptake'' model assumes that the decrease in intracellular Os accumulation results from the activation of an unspecified chemical species that increases the degradation of influx transporters, leading to a decrease in cellular uptake of the complex. For both models, complex-induced activation leading to modified transport was assumed to occur when the intracellular concentration of the complex reaches a critical activation threshold. 21 The models for FY25 exposure account only for the uptake and efflux of the complex, and the activation threshold depends on the intracellular concentration of FY25. For modeling FY26 exposure, an additional step corresponding to the transformation of FY26 into FY25 in the intracellular compartment was included. The transport of both FY25 and FY26 was included, and the activation threshold was assumed to depend on the sum of FY25 and FY26 intracellular concentrations. Equations for the models and their associated parameters are presented in the ESI, † (Tables S1-S3).
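As a schematic reading of the ''reduced uptake'' hypothesis described above, the fragment below shows one way such a mechanism can be written as a differential system: intracellular osmium above a threshold drives, via a Hill-type switch, the accumulation of an unspecified species that degrades the influx transporter. The state variables and parameter names are our own illustrative choices; the actual model equations are those given in the ESI (Tables S1-S3).

```python
# Schematic right-hand side for the ''reduced uptake'' hypothesis: intracellular
# osmium above a threshold activates a species A that degrades the influx
# transporter T, throttling further uptake. Parameter values are invented.
def reduced_uptake_rhs(t, y, k_in, k_out, k_act, k_on, k_deg, threshold, hill_n):
    os_ext, fy26_int, fy25_int, transporter, activated = y
    os_int = fy26_int + fy25_int
    # Hill switch: close to 0 below the threshold, close to 1 above it.
    switch = os_int ** hill_n / (threshold ** hill_n + os_int ** hill_n)
    uptake = k_in * transporter * os_ext
    efflux = k_out * fy26_int
    return [
        -uptake + efflux,                     # extracellular Os (volume ratio omitted)
        uptake - efflux - k_act * fy26_int,   # intracellular FY26
        k_act * fy26_int,                     # intracellular FY25 (its efflux omitted)
        -k_deg * activated * transporter,     # transporter degraded by A
        k_on * switch,                        # A produced once the threshold is crossed
    ]
```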
Model fit to data provides molecular insights into cellular PK of complexes
Parameters of either the ''enhanced efflux'' or the ''reduced uptake'' models were fitted to the two available datasets for Os accumulation experiments, with and without recovery time ( Fig. 4 and 5). The ''enhanced efflux'' model was not able to reproduce both datasets at the same time, the best-fit model achieving a poor fit for the experiment without recovery time and a good fit for the experiment with recovery time. That result suggested that this model did not correspond to a reasonable molecular mechanism applicable to these experiments (Fig. 4). In contrast, the ''reduced uptake'' model achieved a very good fit to the data (Fig. 5). The model-to-data distance evaluated by the Sum of Squared Residuals (SSR) was equal to 2826 for the ''enhanced efflux'' model and 291 for the ''reduced uptake'' model confirming the superiority of the second model (Table 1).
To further investigate the validity of the ''reduced uptake'' model, parameter practical identifiability was investigated (see Methods and ESI, † Fig. S6). All model parameters except two were identifiable, which supported the reliability of the model. The first parameter which was not identifiable corresponded to the rate constant for conversion of FY26 into FY25, whose optimal value was estimated as zero. However, this was actually an artefact arising from the datasets used in the estimation procedure, as setting this parameter to zero made the estimation of transport parameters for FY25 and FY26 independent. The second parameter which was not identifiable was the Hill power, which set the steepness of the Hill kinetics assumed for compound-induced uptake decrease. This parameter was estimated at the highest investigated value, that is 14, which corresponded to the highest steepness. However, any value of this parameter above 10 achieved the optimal value of the likelihood, hence the best fit to the data (Fig. 6).
[Fig. 6: Likelihood profiles for parameters of the ''reduced uptake'' model. Stars represent the optimal parameter value; red lines mark the 95% confidence threshold. A profile that crosses the threshold on both sides of the optimum indicates that the parameter is identifiable, and the crossing points delimit its 95% confidence interval.]
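The profile-likelihood procedure used for this identifiability check can be sketched generically: each parameter in turn is fixed on a grid of values, the remaining parameters are re-optimised, and the resulting loss profile is compared with a confidence threshold. The helper below assumes a generic `loss(params)` function such as the fitting sketch above; it is illustrative only, not the analysis code behind Fig. S6.

```python
import numpy as np
from scipy.optimize import minimize

def profile_likelihood(loss, best_params, index, grid, threshold):
    """Fix parameter `index` at each value of `grid`, re-optimise the remaining
    parameters, and check whether the profile exceeds `threshold` on both sides
    of the optimum (the identifiability criterion described above)."""
    best_params = np.asarray(best_params, float)
    grid = np.asarray(grid, float)
    profile = []
    for value in grid:
        def objective(free):
            full = best_params.copy()
            full[index] = value
            full[np.arange(full.size) != index] = free
            return loss(full)
        start = np.delete(best_params, index)
        profile.append(minimize(objective, start, method="Nelder-Mead").fun)
    profile = np.array(profile)
    left = profile[grid < best_params[index]]
    right = profile[grid > best_params[index]]
    identifiable = (np.max(left, initial=-np.inf) > threshold and
                    np.max(right, initial=-np.inf) > threshold)
    return profile, identifiable
```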
In the ''reduced uptake'' model, the cellular transport rate for FY26 was estimated to be faster than that of FY25, by 1.5-fold for efflux and 17.7-fold for uptake (Table 1). This appears to correlate with the lower cytotoxicity of FY25 compared to that of FY26, 22 and structurally related ruthenium complexes, 23 thus providing a partial validation of the model. The activation threshold, corresponding to the intracellular osmium concentration above which alteration in the rate of complex transport was triggered, was estimated as 20.8 μM (95% confidence interval [16.5, 30]), and was reached 4 to 5 h after the start of FY25 or FY26 exposure. After that time, the unknown species was then activated in the mathematical model and enhanced the degradation of uptake transporters with similar kinetics for FY25 and FY26 exposure (Fig. 5E and F).
Next, we investigated whether these mathematical results could depend on the choice of kinetics used in the mathematical modelling to represent the PK of the complexes. Linear kinetics of FY25 and FY26 transport and of FY26 activation were changed to Michaelis-Menten terms. We observed that Michaelis-Menten kinetics did not enable the ''enhanced efflux'' model to fit the experimental data (SSR = 2650), but still allowed for a very good fit of the ''reduced uptake'' model (SSR = 291), thus indicating that the superiority of the second model did not depend on the choice of kinetics. Altogether, these results advocated decreased uptake of the complexes rather than increased efflux as an explanation for the overall decrease in osmium accumulation after 24 h of FY25 or FY26 exposure.
Temperature dependence of cellular transport parameters
We further investigated the temperature-dependence of Os accumulation during the first 8 h of exposure. Intracellular accumulation of Os in ovarian cancer cells exposed to either FY25 or FY26 decreased as a function of temperature (Fig. 7). The model developed above was used to analyse the temperature dependence of transport parameters. For the sake of simplicity and in the absence of detailed data, the influence of the complexes on their own cellular uptake was neglected, as the analysis above predicted that the transport of the compounds was altered only after 4 to 5 hours of exposure. FY25 and FY26 uptake parameters were unchanged for the experiment performed at 37 °C and were estimated from the osmium accumulation datasets for experiments at 23 °C and 4 °C. All other model parameters were assumed not to be affected by temperature. The model calibrated from the datasets of Fig. 4 and 5 achieved a good fit to this new dataset obtained at 37 °C (Fig. 7). Furthermore, the model predicted that rate constants for FY25 uptake at 23 °C and 4 °C were equal to 36% and 3.1%, respectively, of those at 37 °C. Similarly, the rate constants of FY26 uptake at 23 °C and 4 °C were equal to 29% and 2.8%, respectively, of those at 37 °C (Fig. 7).
Discussion
Knowledge of the kinetics of drug uptake, efflux and accumulation in cancer cells is important for optimisation not only of drug design, but also of treatment regimens, including the selection of combination therapies. In general, our understanding of the uptake and efflux mechanisms for metallodrugs is poor. The most widely used drug in cancer chemotherapy is cisplatin and even now, 40 years after its introduction into clinical use, its transport mechanisms have not been fully elucidated.
Here we have studied the time- and temperature-dependent accumulation of a promising iodido organo-osmium anticancer drug candidate [(η⁶-p-cymene)Os(Ph-azopyridine-NMe₂)I]⁺ (FY26) by A2780 human ovarian cancer cells. This complex is inert and a pro-drug which is activated inside cancer cells to form the more reactive chlorido complex [(η⁶-p-cymene)Os(Ph-azopyridine-NMe₂)Cl]⁺ (FY25) as a major metabolite. Other species that are likely to be formed include the hydroxido complex, the glutathione adduct, and oxidised glutathione sulfenate adduct. 14 The ability of chloride and hydroxide to bind strongly to Os(II) even in the presence of a large molar excess of glutathione is notable and contrasts with the behaviour of Pt(II) in cisplatin.
We have compared the time-dependent accumulation of the chlorido complex FY25 and iodido complex FY26 in human ovarian cancer cells and constructed models which account for the extent and time dependence of the influx and efflux of the complexes and the intracellular conversion of FY26 into FY25. The marked decrease in the accumulation of Os after 24 h of exposure to the complexes is well described by mathematical modelling, which reveals that this can be explained by complex-induced decrease in osmium uptake, but not by enhanced osmium efflux (Fig. 4 and 5). The validity of the ''reduced uptake'' model was further strengthened by its accurate prediction of the 17× faster uptake of FY26 at 37 °C and 1.4× faster efflux of FY26 as compared to those of FY25, as demonstrated in previous experiments which were not used for model design or calibration. Drug-induced uptake reduction could constitute a defence mechanism to slow down the accumulation of antiproliferative drugs into cancer cells, thus leading to a decreased cytotoxicity. This combined experimental and modelling result suggests that specific active transporters are involved in the uptake of these organo-osmium complexes. Indeed, there are reported examples of transport proteins involved in the uptake and efflux of metallodrugs. Early on it was thought that the main route of cellular accumulation of cisplatin, present in extracellular media as the neutral molecule cis-[PtCl₂(NH₃)₂], relied only on passive diffusion. However, more recent studies have suggested that active transport, including specific membrane proteins, might also be involved. These include the copper transporters Ctr1 and Ctr2, 24,25 although the relevance of these to platinum treatment is currently in doubt. 26 Alternatively, the p-type copper transporters ATP7A and ATP7B have also been linked to cisplatin resistance. However, the mechanism by which copper transporters might transport cisplatin (or other platinum drugs) is unclear. These proteins tend to be rich in sulfur ligands (methionine and cysteine side chains), and the strong trans effect of sulfur can readily lead to trans-ligand displacement from Pt(II). Organic cation transporters and multidrug resistance transporters (e.g. MDR1, p-glycoprotein) may also play a role in platinum transport and resistance. 26 In comparison, the arsenic drug, arsenic trioxide, is transported into cells via aquaglyceroporin membrane transporters, perhaps understandable due to its structure in solution, [As(OH)₃], which is a glycerol mimic. 27 We can expect that the transporters used by metallodrugs will depend intimately on the structures of the complexes, including the metal and its oxidation state, the types and number of bound ligands, and the coordination geometry. Previously, we studied the A2780 cellular uptake and accumulation of the Ru(II) arene iminopyridine complexes [Ru(η⁶-p-cymene)(N,N-dimethyl-N′-[(E)-pyridin-2-ylmethylidene]benzene-1,4-diamine)X]⁺ where X = Cl or I.
Remarkably, whereas the chlorido complex was taken up largely by active transport, passive transport dominated for the iodido complex. Cellular efflux of Ru was slow, and was partially inhibited by the efflux pump inhibitor verapamil, which suggested a role for P-glycoprotein in the efflux of the drug. 23 Furthermore, for the diamino arene Ru(II) complex [Ru(biphenyl)(ethylenediamine)Cl]⁺, which is cross-resistant with Adriamycin, drug resistance in A2780 cells was almost completely reversed by verapamil, suggesting that this protein plays a major role in the efflux of the ruthenium anticancer complex. 28
Conclusions
Most metallodrugs are pro-drugs so understanding their transport and activation becomes important for understanding their mechanisms of action. Experimental work on the mechanisms of action of metallodrugs is challenging since it requires determination of not only the metal and its oxidation state, but also the nature of the bound ligands, both monodentate and chelated.
Here we are fortunate that osmium is not a biologically essential metal, and thus its concentrations can be readily quantified using ICP-MS at pharmacologically-relevant doses. Although both cisplatin and the organo-osmium complexes studied here are activated by exchange of monodentate ligands, the mechanisms of exchange are quite different. In the case of the azopyridine complexes, it appears to involve attack on the azo bond of the chelated ligand by intracellular glutathione (GSH). 14,15 Moreover the organo-osmium complexes appear to localise in the cytoplasm of cancer cells, 29 whereas cisplatin exerts its effect in the nucleus.
This combined experimental and mathematical study allowed us to identify a probable effect of FY25 and FY26 on their own cellular uptake leading to decreased intracellular accumulation. Future work will involve investigations of possible specific membrane transporter proteins for FY26. This complex induces redox changes in cancer cells, 10 especially the production of reactive oxygen species (ROS). Whether this leads to protein damage and loss of function of a transporter, or whether attack on mRNA could be involved, requires further investigation. ROS-induced attack on intracellular proteins may be related to the apparent block of osmium uptake after an intracellular threshold osmium concentration has been reached. Further work is required to investigate whether this is due to degradation of a specific transporter protein or another mechanism. Such considerations are important for the preclinical development of FY26 and will affect the choice of treatable cancers and the choice of therapeutic combinations with other drugs.
Unsupervised Induction of Modern Standard Arabic Verb Classes Using Syntactic Frames and LSA
We exploit the resources in the Arabic Treebank (ATB) and Arabic Gigaword (AG) to determine the best features for the novel task of automatically creating lexical semantic verb classes for Modern Standard Arabic (MSA). The verbs are classified into groups that share semantic elements of meaning as they exhibit similar syntactic behavior. The results of the clustering experiments are compared with a gold standard set of classes, which is approximated by using the noisy English translations provided in the ATB to create Levin-like classes for MSA. The quality of the clusters is found to be sensitive to the inclusion of syntactic frames, LSA vectors, morphological pattern, and subject animacy. The best set of parameters yields an F_β=1 score of 0.456, compared to a random baseline F_β=1 score of 0.205.
Introduction
The creation of the Arabic Treebank (ATB) and Arabic Gigaword (AG) facilitates corpus-based studies of many interesting linguistic phenomena in Modern Standard Arabic (MSA). 1 The ATB comprises manually annotated morphological and syntactic analyses of newswire text from different Arabic sources, while the AG is simply a huge collection of raw Arabic newswire text. In our ongoing project, we exploit the ATB and AG to determine the best features for the novel task of automatically creating lexical semantic verb classes for MSA. We are interested in the problem of classifying verbs in MSA into groups that share semantic elements of meaning as they exhibit similar syntactic behavior. This manner of classifying verbs in a language is mainly advocated by Levin (1993). The Levin Hypothesis (LH) contends that verbs that exhibit similar syntactic behavior share element(s) of meaning. There exists a relatively extensive classification of English verbs according to different syntactic alternations. Numerous linguistic studies of other languages illustrate that LH holds cross linguistically, in spite of variations in the verb class assignment. For example, in a wide cross-linguistic study, Guerssel et al. (1985) found that the Conative Alternation exists in the Australian language Warlpiri. As in English, the alternation is found with hit- and cut-type verbs, but not with touch- and break-type verbs.
[Footnote 1: http://www.ldc.upenn.edu/]
A strong version of the LH claims that comparable syntactic alternations hold cross-linguistically. Evidence against this strong version of LH is presented by Jones et al (1994). For the purposes of this paper, we maintain that although the syntactic alternations will differ across languages, the semantic similarities that they signal will hold cross linguistically. For Arabic, a significant test of LH has been the work of Fareh and Hamdan (2000), who argue the existence of the Locative Alternation in Jordanian Arabic. However, to date no general study of MSA verbs and alternations exists. We address this problem by automatically inducing such classes, exploiting explicit syntactic and morphological information in the ATB using unsupervised clustering techniques.
This paper is an extension of our previous work in Snider and Diab (2006), which found a preliminary effect of syntactic frames on the precision of MSA verb clustering. In this work, we find effects of three more features, and report results using both precision and recall. This project is inspired by previous approaches to automatically induce lexical semantic classes for English verbs, which have met with success (Merlo and Stevenson, 2001; Schulte im Walde, 2000), comparing their results with manually created Levin verb classes. However, Arabic morphology has well-known correlations with the kind of event structure that forms the basis of the Levin classification (Fassi-Fehri, 2003). This characteristic of the language makes this a particularly interesting task to perform in MSA. Thus, the scientific goal of this project is to determine the features that best aid verb clustering, particularly the language-specific features that are unique to MSA and related languages.
Inducing such classes automatically allows for a large-scale study of different linguistic phenomena within the MSA verb system, as well as crosslinguistic comparison with their English counterparts. Moreover, drawing on generalizations yielded by such a classification could potentially be useful in several NLP problems such as Information Extraction, Event Detection, Information Retrieval and Word Sense Disambiguation, not to mention the facilitation of lexical resource creation such as MSA WordNets and ontologies.
Unfortunately, a gold standard resource comparable to Levin's English classification for evaluation does not exist in MSA. Therefore, in this paper, as before, we evaluate the quality of the automatically induced MSA verb classes both qualitatively and quantitatively against a noisy MSA translation of Levin classes in an attempt to create such classes for MSA verbs.
The paper is organized as follows: Section 2 describes Levin classes for English; Section 3 describes some relevant previous work; In Section 4 we discuss relevant phenomena of MSA morphology and syntax; In Section 5, we briefly describe the clustering algorithm; Section 6 gives a detailed account of the features we use to induce the verb clusters; Then, Section 7, describes our evaluation data, metric, gold standard and results; In Section 8, we discuss the results and draw on some quantitative and qualitative observations of the data; Finally, we conclude this paper in Section 9 with concluding remarks and a look into future directions.
Levin Classes
The idea that verbs form lexical semantic clusters based on their syntactic frames and argument selection preferences is inspired by the work of Levin, who defined classes of verbs based on their syntactic alternation behavior. For example, the class Vehicle Names (e.g. bicycle, canoe, skate, ski) is defined by the following syntactic alternations (among others):
1. INTRANSITIVE USE, optionally followed by a path: They skated (along the river bank).
2. INDUCED ACTION (some verbs): Pat skated (Kim) around the rink.
Levin lists 184 manually created classes for English, which is not intended as an exhaustive classification. Many verbs are in multiple classes both due to the inherent polysemy of the verbs as well as other aspectual variations such as argument structure preferences. As an example of the latter, a verb such as eat occurs in two different classes; one defined by the Unspecified Object Alternation where it can appear both with and without an explicit direct object, and another defined by the Conative Alternation where its second argument appears either as a direct object or the object of the preposition at. It is important to note that the Levin classes aim to group verbs based on their event structure, reflecting aspectual and manner similarities rather than similarity due to their describing the same or similar events. Therefore, the semantic class similarity in Levin classes is coarser grained than what one would expect resulting from a semantic classification based on distributional similarity such as Latent Semantic Analysis (LSA) algorithms. For illustration, one would expect an LSA algorithm to group skate, rollerblade in one class and bicycle, motorbike, scooter in another; yet Levin puts them all in the same class based on their syntactic behavior, which reflects their common event structure: an activity with a possible causative participant. One of the purposes of this work is to test this hypothesis by examining the relative contributions of LSA and syntactic frames to verb clustering.
Related Work
Based on the Levin classes, many researchers attempt to induce such classes automatically. Notably, the work of Merlo and Stevenson (2001) attempts to induce three main English verb classes on a large scale from parsed corpora: the class of Ergative, Unaccusative, and Object-drop verbs. They report results of 69.8% accuracy on a task whose baseline is 34%, and whose expert-based upper bound is 86.5%. In a task similar to ours except for its use of English, Schulte im Walde clusters English verbs semantically by using their alternation behavior, using frames from a statistical parser combined with WordNet classes. She evaluates against the published Levin classes, and reports that 61% of all verbs are clustered into correct classes, with a baseline of 5%.
Arabic Linguistic Phenomena
In this paper, the language of interest is MSA. Arabic verbal morphology provides an interesting piece of explicit lexical semantic information in the lexical form of the verb. Arabic verbs have two basic parts, the root and pattern/template, which combine to form the basic derivational form of a verb. Typically a root consists of three or four consonants, referred to as radicals. A pattern, on the other hand, is a distribution of vowel and consonant affixes on the root resulting in Arabic derivational lexical morphology. As an example, the root k t b, 2 if interspersed with the pattern 1a2a3 -the numbers correspond to the positions of the first, second and third radicals in the root, respectively -yields katab meaning write. However, if the pattern were ma1A2i3, resulting in the word makAtib, it would mean offices/desks or correspondences. There are fifteen pattern forms for MSA verbs, of which ten are commonly used. Not all verbs occur with all ten patterns. These root-pattern combinations tend to indicate a particular lexical semantic event structure in the verb.
Clustering
Taking the linguistic phenomena of MSA as features, we apply clustering techniques to the problem of inducing verb classes. We showed in Snider & Diab (2006) that soft clustering performs best on this task compared to hard clustering, therefore we employ soft clustering techniques to induce the verb classes here. Clustering algorithms partition a set of data into groups, or clusters based on a similarity metric. Soft clustering allows elements to be members of multiple clusters simultaneously, and have degrees of membership in all clusters. This membership is sometimes represented in a probabilistic framework by a distribution P (x i , c), which characterizes the probability that a verb x i is a member of cluster c.
Features
Syntactic frames The syntactic frames are defined as the sister constituents of the verb in a Verb Phrase (VP) constituent, namely, Noun Phrases (NP), Prepositional Phrases (PP), and Sentential Complements (SBARs and Ss). Not all of these constituents are necessarily arguments of the verb, so we take advantage of functional tag annotations in the ATB. Hence, we only include NPs with function annotation: subjects (NP-SBJ), topicalized subjects (NP-TPC), 3 objects (NP-OBJ), and second objects in dative constructions (NP-DTV). The PPs deemed relevant to the particular sense of the verb are tagged by the ATB annotators as PP-CLR. We assume that these are argument PPs, and include them in our frames. Finally, we include sentential complements (SBAR and S). While some of these will no doubt be adjuncts (i.e. purpose clauses and the like), we assume that those that are arguments will occur in greater numbers with particular verbs, while adjuncts will be randomly distributed with all verbs.
Given Arabic's somewhat free constituent order, frames are counted as the same when they contain the same constituents, regardless of order. Also, for each constituent that is headed by a function word (PPs and SBARs) such as prepositions and complementizers, the headword is extracted to include syntactic alternations that are sensitive to preposition or complementizer type. It is worth noting that this corresponds to the FRAME1 configuration described in our previous study (Snider and Diab, 2006). Finally, only active verbs are included in this study, rather than attempting to reconstruct the argument structure of passives. [Footnote 3: These are displaced NP-SBJ, marked differently in the ATB to indicate SVO order rather than the canonical VSO order in MSA; NP-TPC occurs in 35% of the ATB. Footnote 4: http://www.ldc.upenn.edu]
Verb pattern The ATB includes morphological analyses for each verb resulting from the Buckwalter Analyzer (BAMA). 4 For each verb, one of the analyses resulting from BAMA is chosen manually by the treebankers. The analyses are matched with the root and pattern information derived manually in a study by Nizar Habash (personal communication). This feature is of particular scientific interest because it is unique to Semitic languages and, as mentioned above, has an interesting potential correlation with argument structure.
Subject animacy In an attempt to allow the clustering algorithm to use information closer to actual argument structure than mere syntactic frames, we add a feature that indicates whether a verb requires an animate subject. Merlo and Stevenson (2001) found that this feature improved their English verb clusters, but in Snider & Diab (2006), we found that this feature did not contribute significantly to Arabic verb clustering quality. However, upon further inspection of the data, we discovered we could improve the quality of this feature extraction in this study. Automatically determining animacy is difficult because it requires extensive manual annotation or access to an external resource such as WordNet, neither of which currently exist for Arabic. Instead we rely on an approximation that takes advantage of two generalizations from linguistics: the animacy hierarchy and zero-anaphora. According to the animacy hierarchy, as described in Silverstein (1976), pronouns tend to describe animate entities. Following a technique suggested by Merlo and Stevenson (2001), we take advantage of this tendency by adding a feature that is the number of times each verb occurs with a pronominal subject. We also take advantage of the phenomenon of zero-anaphora, or pro-drop, in Arabic as an additional indicator of subject animacy. Pro-drop is a common phenomenon in Romance languages, as well as Semitic languages, where the subject is implicit and the only indicator of a subject is incorporated in the conjugation of the verb. According to work on information structure in discourse (Vallduví, 1992), pro-drop tends to occur with more given and animate subjects. To capture this generalization, we add a feature for the frequency with which a given verb occurs without an explicit subject. We further hypothesize that proper names are more likely to describe animates (humans, or organizations which metonymically often behave like animates), adding a feature for the frequency with which a given verb occurs with a proper name. With these three features, we provide the clustering algorithm with subject animacy indicators.
LSA semantic vector This feature is the semantic vector for each verb, as derived by Latent Semantic Analysis (LSA) of the AG. LSA is a dimensionality reduction technique that relies on Singular Value Decomposition (SVD) (Landauer and Dumais, 1997). The main strength in applying LSA to large quantities of text is that it discovers the latent similarities between concepts. It may be viewed as a form of clustering in conceptual space.
Data Preparation
The four sets of features are cast as the column dimensions of a matrix, with the MSA lemmatized verbs constituting the row entries. The data used for the syntactic frames is obtained from the ATB corresponding to ATB1v3, ATB2v2 and ATB3v2. The ATB is a collection of 1800 stories of newswire text from three different press agencies, comprising a total of 800,000 Arabic tokens after clitic segmentation. The domain of the corpus covers mostly politics, economics and sports journalism. To extract data sets for the frames, the treebank is first lemmatized by looking up lemma information for each word in its manually chosen (information provided in the Treebank files) corresponding output of BAMA. Next, each active verb is extracted along with its sister constituents under the VP in addition to NP-TPC. As mentioned above, the only constituents kept as the frame are those labeled NP-TPC, NP-SBJ, NP-OBJ, NP-DTV, PP-CLR, and SBAR. For PP-CLRs and SBARs, the head preposition or complementizer, which is assumed to be the left-most daughter of the phrase, is extracted. The verbs and frames are put into a matrix where the row entries are the verbs and the column entries are the frames. The elements of the matrix are the frequency of the row verb occurring in a given frame column entry. There are 2401 verb types and 320 frame types, corresponding to 52167 total verb frame tokens. For the LSA feature, we apply LSA to the AG corpus. AG (GIGAWORD 2) comprises 481 million words of newswire text. The AG corpus is morphologically disambiguated using MADA. 5 MADA is an SVM-based system that disambiguates among different morphological analyses produced by BAMA (Habash and Rambow, 2005). We extract the lemma forms of all the words in AG and use them for the LSA algorithm. To extract the LSA vectors, first the lemmatized AG data is split into 100-sentence-long pseudo-documents. Next, an LSA model is trained using the Infomap software 6 on half of the AG (due to size limitations of Infomap). Infomap constructs a word similarity matrix in document space, then reduces the dimensionality of the data using SVD. LSA reduces AG to 44 dimensions. The 44-dimensional vector is extracted for each verb, which forms the LSA data set for clustering.
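The LSA vectors here were built with the Infomap package; the sketch below reproduces the same idea (a word-by-document count matrix over lemmatized pseudo-documents, reduced by truncated SVD) with standard Python tooling. The toy pseudo-documents and the two-dimensional reduction stand in for the half of AG and the 44 dimensions described above.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy lemmatized pseudo-documents standing in for the 100-sentence AG chunks.
pseudo_docs = [
    "katab risAlap qaraO kitAb",
    "rajul qaraO jariydap",
    "katab jariydap nashar maqAl",
]

# Word-by-document counts (the transpose of the usual document-term matrix).
vectorizer = CountVectorizer()
doc_term = vectorizer.fit_transform(pseudo_docs)      # docs x words
word_doc = doc_term.T                                 # words x docs

# SVD-based dimensionality reduction; the paper used 44 dimensions on AG.
svd = TruncatedSVD(n_components=2, random_state=0)    # 2 dims for the toy data
word_vectors = svd.fit_transform(word_doc)

for word, vec in zip(vectorizer.get_feature_names_out(), word_vectors):
    print(word, vec)
```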
Subject animacy information is represented as three feature columns in our matrix. One column entry represents the frequency a verb co-occurs with an empty subject (represented as an NP-SBJ dominating the NONE tag, 21586 tokens). Another column has the frequency the NP-SBJ/NP-TPC dominates a pronoun (represented in the corpus as the tag PRON 3715 tokens). Finally, the last subject animacy column entry represents the frequency an NP-SBJ/NP-TPC dominates a proper name (tagged NOUN PROP, 4221 tokens).
The morphological pattern associated with each verb is extracted by looking up the lemma in the output of BAMA. The pattern information is added as a feature column to our matrix of verbs by features.
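Assembling the clustering input then amounts to bookkeeping: one row per verb lemma and one column per feature (frame frequencies, LSA dimensions, the three subject-animacy counts, and the pattern). The sketch below, with invented counts and transliterated lemma names, shows that assembly for a pair of verbs.

```python
from collections import Counter
import pandas as pd

# (verb_lemma, frame) observations as they might be read off the treebank.
observations = [
    ("katab", "NP-SBJ_NP-OBJ"), ("katab", "NP-SBJ_PP-CLR:li"),
    ("qaraO", "NP-SBJ_NP-OBJ"), ("qaraO", "NP-SBJ_NP-OBJ"),
]
frame_counts = Counter(observations)

verbs = sorted({v for v, _ in frame_counts})
frames = sorted({f for _, f in frame_counts})
matrix = pd.DataFrame(0, index=verbs, columns=frames)
for (verb, frame), count in frame_counts.items():
    matrix.loc[verb, frame] = count

# Extra feature columns (illustrative values): the three animacy indicators.
matrix["subj_empty"] = [3, 1]
matrix["subj_pron"] = [1, 0]
matrix["subj_propn"] = [0, 2]
print(matrix)
```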
Gold Standard Data
The gold standard data is created automatically by taking the English translations corresponding to the MSA verb entries provided with the ATB distributions. We use these English translations to locate the lemmatized MSA verbs in the Levin English classes represented in the Levin Verb Index (Levin, 1993), thereby creating an approximated MSA set of verb classes corresponding to the English Levin classes. Admittedly, this is a crude manner to create a gold standard set. Given lack of a pre-existing classification for MSA verbs, and the novelty of the task, we consider it a first approximation step towards the creation of a real gold standard classification set in the near future. Since the translations are assigned manually to the verb entries in the ATB, we assume that they are a faithful representation of the MSA language used. Moreover, we contend that lexical semantic meanings, if they hold cross linguistically, would be defined by distributions of syntactic alternations. Unfortunately, this gold standard set is more noisy than expected due to several factors: each MSA morphological analysis in the ATB has several associated translations, which include both polysemy and homonymy. Moreover, some of these translations are adjectives and nouns as well as phrasal expressions. Such divergences occur naturally but they are rampant in this data set. Hence, the resulting Arabic classes are at a finer level of granularity than their English counterparts because of missing verbs in each cluster. There are also many gaps -unclassified verbs -when the translation is not a verb, or a verb that is not in the Levin classification. Of the 480 most frequent verb types used in this study, 74 are not in the translated Levin classification.
Clustering Algorithms
We use the clustering algorithms implemented in the library cluster (Kaufman and Rousseeuw, 1990) in the R statistical computing language. The soft clustering algorithm, called FANNY, is a type of fuzzy clustering, where each observation is "spread out" over various clusters. Thus, the output is a membership function P (x i , c), the membership of element x i to cluster c. The memberships are nonnegative and sum to 1 for each fixed observation. The algorithm takes k, the number of clusters, as a parameter and uses a Euclidean distance measure. We determine k empirically, as explained below.
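FANNY minimises its own dissimilarity-based objective in R; as a readily reproducible stand-in for soft clustering with row-normalised memberships, the sketch below implements ordinary fuzzy c-means in Python. It illustrates the kind of membership matrix P(x_i, c) used here, not the R routine actually employed.

```python
import numpy as np

def fuzzy_cmeans(X, k, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns an (n_samples x k) membership matrix
    whose rows are non-negative and sum to 1, analogous to the FANNY output."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), k))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return u

# Toy verb-by-feature matrix (rows: verbs, columns: features).
X = np.array([[5, 0, 1], [4, 1, 0], [0, 6, 2], [1, 5, 3]], float)
memberships = fuzzy_cmeans(X, k=2)
print(memberships.round(2))   # soft membership of each verb in each cluster
```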
Evaluation Metric
The evaluation metric used here is a variation on an F-score derived for hard clustering (Chklovski and Mihalcea, 2003). The result is an F_β measure, where β is the coefficient of the relative strengths of precision and recall; β = 1 for all results we report. The score measures the maximum overlap between a hypothesized cluster (HYP) and a corresponding gold standard cluster (GOLD), and computes a weighted average across all the GOLD clusters: score = Σ_{C ∈ GOLD} (|C| / V_tot) · max_{A ∈ HYP} F_β(A, C), where A ranges over the set of HYP clusters, C over the set of GOLD clusters, and V_tot = Σ_{C ∈ GOLD} |C| is the total number of verbs to be clustered. This is the measure that we report, which weights precision and recall equally.
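Under the reading of the metric given above (for each GOLD class, take the best-matching HYP cluster by F_β and average the scores weighted by class size), a direct implementation looks as follows. Clusters are represented as sets of verb lemmas; this is our sketch of the Chklovski-and-Mihalcea-style score, not the exact evaluation script.

```python
def f_beta(hyp, gold, beta=1.0):
    """F-score of one HYP cluster against one GOLD class (sets of verbs)."""
    overlap = len(hyp & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(hyp)
    recall = overlap / len(gold)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def clustering_score(hyp_clusters, gold_classes, beta=1.0):
    """Weighted average over GOLD classes of the best-matching HYP F-score."""
    total_verbs = sum(len(g) for g in gold_classes)
    return sum(
        (len(g) / total_verbs) * max(f_beta(h, g, beta) for h in hyp_clusters)
        for g in gold_classes
    )

# Tiny illustrative clusters (transliterated lemma placeholders).
hyp = [{"katab", "qaraO"}, {"rakib", "safar", "Tar"}]
gold = [{"katab", "qaraO", "nashar"}, {"safar", "Tar"}]
print(round(clustering_score(hyp, gold), 3))
```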
Results
To determine the features that yield the best clustering of the extracted verbs, we run tests comparing seven different factors of the model, in a 2x2x2x2x3x3x5 design, with the first four parameters being the substantive informational factors, and the last three being parameters of the clustering algorithm. For the feature selection experiments, the informational factors all have two conditions, which encode the presence or absence of the information associated with them. The first factor represents the syntactic frame vectors, the second the LSA semantic vectors, the third the subject animacy, and the fourth the morphological pattern of the verb.
The fifth through seventh factors are parameters of the clustering algorithm: The fifth factor is three different numbers of verbs clustered: the 115, 268, and 406 most frequent verb types, respectively. The sixth factor represents numbers of clusters (k). These values are dependent on the number of verbs tested at a time. Therefore, this factor is represented as a fraction of the number of verbs. Hence, the chosen values are 1/6, 1/3, and 1/2 of the number of verbs. The seventh and last factor is a threshold probability used to derive discrete members for each cluster from the cluster probability distribution as rendered by the soft clustering algorithm. In order to get a good range of the variation in the effect of the threshold, we empirically choose five different threshold values: 0.03, 0.06, 0.09, 0.16, and 0.21. The purpose of the last three factors is to control for the amount of variation introduced by the parameters of the clustering algorithm, in order to determine the effect of the informational factors. Evaluation scores are obtained for all combinations of all seven factors (minus the no-information condition, since the algorithm must have some input), resulting in 704 conditions.
We compare our best results to a random baseline. In the baseline, verbs are randomly assigned to clusters that are on average the same size as each other and as the GOLD clusters. 7 The highest overall F_β=1 score is 0.456 and results from using syntactic frames, LSA vectors, subject animacy, 406 verbs, 202 clusters, and a threshold of 0.16. The average cluster size is 3, because this is a soft clustering. The random baseline achieves an overall F_β=1 of 0.205 with comparable settings of 406 verbs randomly assigned to 202 clusters of approximately equal size.
To determine which features contribute significantly to clustering quality, a statistical analysis of the clustering experiments is undertaken in the next section.
Discussion
For further quantitative error analysis of the data and feature selection, we perform an ANOVA to test the significance of the differences among information factors and the various parameter settings of the clustering algorithm. This error analysis uses the error metric from Snider & Diab (2006) that allows us to test just the HYP verbs that match the GOLD set. The emphasis on precision in the feature selection serves the purpose of countering the large underestimation of recall that is due to a noisy gold standard. We believe that the features that are found to be significant by this metric stand the best chance of being useful once a better gold standard is available.
The ANOVA analyzes the effects of syntactic frame, LSA vectors, subject animacy, verb pattern, verb number, cluster number, and threshold. Syntactic frame information contributes positively to clustering quality (p < .03), as does LSA (p < .001). Contrary to the result in Snider & Diab (2006), subject animacy has a significant positive contribution (p < .002). Interestingly, the morphological pattern contributes negatively to clustering quality (p < .001). As expected, the control parameters all have a significant effect: number of verbs (p < .001), number of clusters (p < .001), and threshold (p < .001).
As evident from the results of the statistical analysis, the various informational factors have an interesting effect on the quality of the clusters. Both syntactic frames and LSA vectors contribute independently to clustering quality. This indicates that successfully clustering verbs requires information at the relatively coarse level of event structure, as well as the finer grained semantics provided by word co-occurrence techniques such as LSA.
Subject animacy is found to improve clustering, which is consistent with the results for English found by Merlo and Stevenson. This is definite improvement over our previous study, and indicates that the extraction of the feature has been much improved.
Most interesting from a linguistic perspective is the finding that morphological pattern information about the verb actually worsens clustering quality. This could be explained by the fact that the morphological patterns are productive, so that two different verb lemmas actually describe the same event structure. This would worsen the clustering because these morphological alternations that are represented by the different patterns actually change the lemma form of the verb, unlike syntactic alternations. If only syntactic alternation features are taken into account, the different pattern forms of the same root could still be clustered together; however, our design of the pattern feature does not allow for variation in the lemma form, therefore, we are in effect preventing the useful exploitation of the pattern information. Further evidence comes from the positive effect of the LSA feature, which effectively clusters together these productive patterns hence yielding the significant impact on the clustering.
Overall, the scores that we report use the evaluation metric that equally weights precision and recall. This metric disfavors clusters that are too large or too small. Models perform better when the average size of HYP is the same as that of GOLD. It is worth noting that comparing our current results to those obtained in Snider & Diab (2006), we show a significant improvement given the same precision oriented metric. In the same condition settings, our previous results are an F β score of 0.51 and in this study, a precision oriented metric yields a significant improvement of 17 absolute points, at an F β score of 0.68. Even though we do not report this number as the main result of our study, we tend to have more confidence in it due to the noise associated with the GOLD set.
The score of the best parameter settings with respect to the baseline is considerable given the novelty of the task and lack of good quality resources for evaluation. Moreover, there is no reason to expect that there would be perfect alignment between the Arabic clusters and the corresponding translated Levin clusters, primarily because of the quality of the translation, but also because there is unlikely to be an isomorphism between English and Arabic lexical semantics, as assumed here as a means of approximating the problem. In fact, it would be quite noteworthy if we did find a high level of agreement.
In an attempt at a qualitative analysis of the resulting clusters, we manually examine four HYP clusters. In summary, we observe very interesting clusters of verbs which indeed require more in depth lexical semantic study as MSA verbs in their own right.
Conclusions
We found new features that help us successfully perform the novel task of applying clustering techniques to verbs acquired from the ATB and AG to induce lexical semantic classes for MSA verbs. In doing this, we find that the quality of the clusters is sensitive to the inclusion of information about the syntactic frames, word co-occurrence (LSA), and animacy of the subject, as well as parameters of the clustering algorithm such as the number of clusters and the number of verbs clustered. Our classification performs well with respect to gold standard clusters produced by noisy translations of English verbs in the Levin classes. Our best clustering condition, which uses all frame information, the most frequent verbs in the ATB, and a high number of clusters, outperforms a random baseline by an F_β=1 difference of 0.251. This analysis leads us to conclude that the clusters are induced from the structure in the data. Our results are reported with a caveat on the gold standard data. We are in the process of manually cleaning the English translations corresponding to the MSA verbs. Moreover, we are exploring the possibility of improving the gold standard clusters by examining the lexical semantic attributes of the MSA verbs. We also plan to add semantic word co-occurrence information via other sources besides LSA, to determine if having semantic components in addition to the argument structure component improves the quality of the clusters. Further semantic information will be acquired from a WordNet similarity of the cleaned translated English verbs. In the long term, we envision a series of psycholinguistic experiments with native speakers to determine which Arabic verbs group together based on their argument structure.
Modern Machine-Learning Predictive Models for Diagnosing Infectious Diseases
Controlling infectious diseases is a major health priority because they can spread and infect humans, thus evolving into epidemics or pandemics. Therefore, early detection of infectious diseases is a significant need, and many researchers have developed models to diagnose them in the early stages. This paper reviewed research articles for recent machine-learning (ML) algorithms applied to infectious disease diagnosis. We searched the Web of Science, ScienceDirect, PubMed, Springer, and IEEE databases from 2015 to 2022, identified the pros and cons of the reviewed ML models, and discussed the possible recommendations to advance the studies in this field. We found that most of the articles used small datasets, and few of them used real-time data. Our results demonstrated that a suitable ML technique depends on the nature of the dataset and the desired goal. Moreover, heterogeneous data could ensure the model's generalization, while big data, many features, and a hybrid model will increase the resulting performance. Furthermore, using other techniques such as deep learning and NLP to extract vast features from unstructured data is a powerful approach to enhancing the performance of ML diagnostic models.
Introduction
Infectious diseases constitute major health issues. Viruses, fungi, bacteria, and parasites cause infectious diseases, which are transmitted from infected humans or a pathogen of an animal to other humans. The World Health Organization (WHO) has reported that infectious diseases, including lower respiratory infections, malaria, tuberculosis, and HIV/AIDS, were among the top ten leading causes of death worldwide in 2019 [1].
Several applied techniques focused on developing models for classifying different infectious diseases. In recent years, infectious disease outbreaks were still a common problem worldwide. Therefore, early detection and diagnosis of infectious diseases are critical to preventing and controlling them efficiently. According to Goodman et al. [2], timely and accurate diagnosis of infectious diseases is vital to managing the disease correctly.
Artificial intelligence (AI) is used to classify and predict the spread of infectious diseases [3]. AI assists computers in performing tasks that require human intelligence. John McCarthy introduced the term "AI" in 1956. Still, studies on machine ways of thinking were published before this time, with Vannevar Bush's seminal work and Alan Turing's paper in 1950 about machines' ability to think and simulate human work [4].
Machine-learning (ML) is a branch of AI that learns from data and makes predictions. ML algorithms come in three main types: supervised, unsupervised, and reinforcement learning. In supervised learning, a model is trained on labeled data, using independent variables to predict the dependent variable and achieve the desired accuracy; examples include decision trees, random forest (RF), logistic regression (LR), and k-nearest neighbors. In contrast, unsupervised learning identifies patterns in training data that are not classified or labeled and then categorizes them based on the extracted features; examples include the Apriori algorithm and k-means. Finally, a reinforcement learning model trains a machine to learn from experience and make accurate decisions through trial and error.
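As a small, generic illustration of the supervised setting described above (patient features in, diagnostic label out), the snippet below trains a random forest on a synthetic symptoms-style dataset and reports accuracy and sensitivity. It is not drawn from any of the reviewed studies.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a signs-and-symptoms table: 500 patients, 10 features,
# binary label (infected / not infected).
X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))
print("sensitivity (recall):", recall_score(y_test, pred))
```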
This review paper is interested in using ML to assist in the diagnosis of several infectious diseases and answers the following research questions: (i) What ML techniques are used for diagnosing infectious diseases?
(ii) What are the types of datasets used in the diagnostic models?
(iii) What are the performance metrics used for infectious disease diagnosis models?
The large number of techniques that are emerging makes it necessary to provide a precise overview of diagnosing infectious diseases. To the best of our knowledge, this is the first review to investigate existing works on diagnosing, detecting, and classifying infectious diseases to answer these questions and thereby assist with new and precise ML techniques for detecting infectious diseases early. Furthermore, early and accurate detection of infectious diseases plays an important role in future possible outbreak prevention. In addition, we provide trends in diagnosing infectious diseases using ML algorithms and future research directions.
Most of the previous works demonstrated that ML approaches were adopted by many researchers, but there is no review to examine them deeply. To that end, we propose a systematic review dealing with the ML models applied to the diagnosis of infectious diseases. We discuss and describe the dataset, utilized technique, and performance for each model. Based on the obtained explanations and discussion, we provide recommendations for future research and trends in the area to assist in creating an accurate and perhaps generalized model for detecting infectious diseases early to avoid possible outbreaks.
We organized the structure of this review as follows. First, the related work section covers reviews and systematic review papers for infectious disease classification and diagnosis models. Then, in the search strategy section, we define the criteria for the included articles. After that, we provide a detailed explanation of every article in the result section. Then, we present a discussion and future research trends. The final section is the conclusion.
Other papers are concerned with several infectious diseases, but those reviews focused specifically on applications of digital technologies, artificial intelligence, and machine-learning in infectious disease laboratory testing to overcome human analytical limitations [13,14]. Other reviews covered both infectious and noninfectious diseases [15,16]. Table 1 summarizes the main characteristics of related works to differentiate between them and our systematic review paper.
Search Strategy
We follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) process [17] to collect and review articles and answer the research questions. It has three main stages: identification, the screening process, and included studies, as shown in Figure 1.
Identification and Search
Sources. This paper explored five databases: Web of Science, PubMed, Springer, IEEE, and ScienceDirect, to find related research articles from 2015 to 2022. The combination of keywords used for search queries is as follows: "classification model for infectious disease," "ML for diagnosing infectious diseases," and "artificial intelligence for infectious diseases' classification." The search yielded 9490 English-language records: 4241 from the Web of Science, 1771 from PubMed, 3299 from ScienceDirect, 147 from IEEE, and 32 from Springer. In addition, we used an automation tool to remove 2168 duplicated records and 56 non-English-language articles.
Screening Process.
This study focuses only on ML algorithms for human studies. It categorizes retrieved diagnosis models into six categories based on the applied dataset. We have models that used signs and symptoms, models that used image processing, models that used clinical tests, models that used clinical reports, models that used electronic health records, and models that used a combination of several predictors.
This study excluded 2364 ineligible articles: 2148 were irrelevant, and 216 were literature and systematic reviews, leaving 4902 articles. Of these, a further 185 articles were eliminated because they provide diagnosis models covering noninfectious diseases in addition to infectious ones, and 1456 because they concern transmission and spread models of infectious disease; neither group is related to this review. The final number of retrieved articles was 3261.
Included Studies.
The authors of this study screened each of the 3261 retrieved full-text articles through the systematic review accelerator tool. We used the checklist for critical appraisal and data extraction for systematic reviews of prediction modeling studies (CHARMS) [18] to assess these retrieved articles. The abstracts were reviewed against the CHARMS checklist, as illustrated in Table 2. We found 1091 articles based on other techniques such as graph models, infrared spectroscopy, risk detection models, and severity assessment models. In addition, 115 articles focused on deep learning techniques, which we did not cover in this review. We also excluded animal studies (356 articles), models for predictive therapy and drugs (819 articles), and studies about infectious diseases' genes, genome, sequencing, and other analysis methods (866 articles). Finally, we found only 14 articles that matched our inclusion criteria.
Search Results
This paper reviews each selected article descriptively. In the upcoming subsections, we cover the datasets, the ML techniques used in each study, and the models' performance in terms of metrics such as precision, accuracy, specificity, and sensitivity.
Dataset Specification.
The most retrieved ML diagnosis models are based on signs and symptoms [19-24], followed by models based on laboratory tests [25,26]. Moreover, we found ML models based on clinical and electronic health records (EHRs) [27,28], ML models based on clinical reports [29], ML models based on image processing with other techniques [30,31], and ML models based on a combination of predictors including abnormal lab test results, the incidence rates, and signs and symptoms, as well as epidemiological features [32]. Figure 2 summarizes these different acquisition methods. Moreover, Table 3 shows the sizes, acquisition methods, and type of infectious disease of each article.
Dataset Features.
Tetanus and HFMD cause autonomic nervous system dysfunction (ANSD) and lead to death in complicated stages. Moreover, physiological waveforms collected from sensors are susceptible to noise and should be filtered using a pass filter followed by a Gaussian filter [19]. In contrast, the biosensors in [20] collected five vital signs directly from patients to a hub that quantifies and multiplexes the signals and sends them to a mobile application through Bluetooth technology.
On the other hand, a smartphone application was adopted [21] to collect data available in questionnaire format from registered users. However, the study [22] compared real-time polymerase chain reaction (RT-PCR) results with the diagnosis results of vocal inputs at the time of recordings and the required dataset available in WAV formats.
Twitter application program interfaces (APIs) are used in [23] to extract data from users' messages and then implement preprocessing to enhance quality and remove noise. Moreover, real electronic health records for 104 individuals diagnosed with influenza are used to validate the obtained results.
Domain ontology is constructed in [24] to diagnose 507 different infectious diseases based on the signs and symptoms of patients. This ontology-based model is more complete than the previous work in the field and performs data-driven decision-making at the point of primary care. On the other hand, the emergency department (ED) freetext reports were implemented to detect influenza cases based on 31 clinical findings adopted for ML classifiers [25].
Moreover, combining routine laboratory tests with ML models [26,27] provides precious opportunities to assist in infectious disease diagnosis. The selected tests in [27] include six clinical chemistry (CC) parameters and 14 complete blood count (CBC) parameters. Moreover, 35 features were used to diagnose COVID-19 in [26], and random sampling was done for 22,385 patients with at least 33 out of 35 total features.

Table 1: Comparison of related systematic reviews and our systematic review (Reference; Focus; Differences).
[5] Focus: diagnosing COVID-19 and predicting severity and mortality risks. Differences: (i) the review is based only on clinical and laboratory data.
[6] Focus: diagnosis and prognosis of COVID-19 from prediction models. Differences: (i) the review focuses more on preprints; (ii) it covers all types of models including risk prediction, diagnosis of severity, and diagnosis from images.
[7] Focus: diagnosing hepatitis. Differences: (i) it covers only clinical tests.
[8] Focus: detecting pneumonia. Differences: (i) it is based only on signs and symptoms; (ii) performance measures are not covered.
[9] Focus: diagnosing tuberculosis. Differences: (i) it covers diverse AI approaches using clinical signs and symptoms and radiological images.
[10] Focus: diagnosing pulmonary tuberculosis. Differences: (i) it covers AI methods based on chest X-ray images.
[11] Focus: diagnosing tuberculous meningitis. Differences: (i) it is based only on clinical and laboratory data.
[12] Focus: predicting phenotypic characteristics of influenza virus. Differences: (i) it is based on genomic or proteomic input.
[13] Focus: diagnosing HIV, HCV, and chlamydia. Differences: (i) it implements different digital technology but does not include any kind of AI technique.
[14] Focus: diagnosing COVID-19, hepatitis, sepsis, malaria, Lyme disease, and tuberculosis. Differences: (i) it covers data coming from EMR.
[15] Focus: automatic diagnosis of several infections such as sepsis, general infections, and Clostridium difficile infection through ML and expert system. Differences: (i) it covers papers based on physiological data.
Furthermore, a patient record system is a clinical information system that provides valuable information, including disease diagnosis. Hence, it assists with patient care. Clinical records [28] and EHR [29] were used to collect demographic and clinical data of patients, aiding in infectious disease diagnosis.
Feature engineering is a critical task for classification, and it is helpful to extract features from computerized tomography (CT) scan images. A novel convolutional neural network (CNN) model was applied [31] to assist with ML classification. Moreover, the authors of the study [31] applied contrast limited adaptive histogram equalization (CLAHE) to enhance the quality of the images. However, various datasets, including clinical data (demographics, radiological scores, and laboratory tests) and lesion/lung radiomic features extracted from enhanced chest CT images, were implemented in [30] to diagnose COVID-19.
In addition, a classifier was established in [32] to classify 25 common infectious diseases. It used symptoms and signs, city of disease origin, epidemiological features of the disease, and abnormal lab test results as input, but the authors did not specify all features that the model used. Table 4 lists each dataset's features.
ML Algorithms and Model Performance.
The choice of ML algorithm in the reviewed articles depended on the nature of the data and the goal to be achieved [33]. Further, different performance metrics are applied to evaluate different ML algorithms. The right choice of metrics [34] determines how reliably a model's effectiveness is assessed on test datasets. These metrics include accuracy, recall (sensitivity), precision, specificity, and F1-score. Figure 3 classifies the reviewed articles according to the ML algorithms that each model adopted.
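As an illustration of how these metrics are typically computed in practice, the following Python sketch uses scikit-learn on hypothetical labels; it is not code from any of the reviewed studies.

# Computing the metrics named above from a binary confusion matrix (illustrative labels only).
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # ground-truth diagnoses (hypothetical)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # model predictions (hypothetical)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy   :", accuracy_score(y_true, y_pred))
print("recall     :", recall_score(y_true, y_pred))     # sensitivity = tp / (tp + fn)
print("specificity:", tn / (tn + fp))                    # not built in; derived from the matrix
print("precision  :", precision_score(y_true, y_pred))   # tp / (tp + fp)
print("F1-score   :", f1_score(y_true, y_pred))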
A support vector machine (SVM) was utilized as a classifier by [19,22,26] to diagnose various infectious diseases. The study [19] experimented with linear and Gaussian kernels for the SVM to classify autonomic nervous system dysfunction (ANSD) levels for HFMD and infectious tetanus diseases. ANSD is the main cause of death for both HFMD and tetanus. The study sought to detect the ANSD level automatically and showed that Gaussian kernels provided the best results. Moreover, it used different measurements such as accuracy, precision, specificity, recall, and F1-score.
However, manual encoding of features is the main limitation of the study.
Four ML models, i.e., SVM, RF, gradient boosting (XGBoost), and LR, utilized complete blood count (CBC) parameter results and clinical chemistry (CC) results to diagnose the COVID-19 infectious disease [26]. However, the RF model performed the best on the present study's dataset, whereas the adopted SVM model performed the best on the external validation dataset.
Moreover, models trained on CBC and CC provided better results than those trained on CBC only. The study reported that ML models could detect COVID-19 when the population has more severe cases, eventually improving sensitivity. In addition, it demonstrated that eosinophil count is the most important feature that the model uses.
Furthermore, a three-stage architecture supported a semisupervised learning approach [22]. The first stage used a transformer and pretrained it on the unlabeled recordings to transform the frame sequences into "transformer embeddings." In the second stage, recurrent neural network (RNN) classifiers were utilized on speaker vocal inputs, and every classifier produced a score for each COVID-19-positive probability. In the third stage, ensemble stacking generated a meta-model, and scores for each classifier were averaged and assembled into a feature vector per speaker. Finally, a linear SVM trained on the feature vectors weighed the scores and predicted the final result. In addition, cross-validation was used to present the performance metrics for the ensemble stacking meta-model.
Another study used routine blood tests to diagnose COVID-19 among 255 different viral and bacterial infections [25]. It compared various ML techniques, such as SVM, RF, neural network (NN), and extreme gradient boosting (XGBoost). The XGBoost model outperformed the others, with 5333 patients with negative tests and 160 patients with a positive test used for training. It used a tenfold stratified cross-validation testing procedure to evaluate the models' performance.
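A minimal sketch of this kind of evaluation protocol (an XGBoost classifier scored with tenfold stratified cross-validation) is given below in Python; the synthetic features, labels, and hyperparameters are our own placeholders, not the data or settings of [25], and the xgboost package is assumed to be installed.

# Sketch of an XGBoost classifier evaluated with tenfold stratified cross-validation.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # e.g. routine blood parameters (hypothetical)
y = rng.integers(0, 2, size=500)        # test result used as the label (hypothetical)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1, eval_metric="logloss")
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print("mean AUC over 10 folds:", scores.mean())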
Ensemble learning employs a mixture of algorithms to solve classification or regression problems that cannot be solved with a single ML model [35]. Moreover, a soft voting ensemble learning model (Gaussian naive Bayes (GNB), SVM, decision tree (DT), LR, and RF) was deployed by [31] on features extracted from a CNN model to diagnose COVID-19. The study used 85% of the images to train the proposed model and 15% of the images to test it. Additionally, a confusion matrix evaluates the robustness of each model by determining the accuracy, F1-score, recall, precision, and area under the curve (AUC) and comparing them with previous work implementing the same dataset.
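The following Python sketch illustrates a soft-voting ensemble of the five named classifiers; the synthetic feature matrix stands in for the CNN-extracted image features of [31], and all hyperparameters are illustrative assumptions rather than the study's actual settings.

# Soft-voting ensemble over five base classifiers; the CNN-extracted features are
# mimicked here by a synthetic matrix.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=64, random_state=1)   # stand-in for CNN features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=1)  # 85%/15% split

ensemble = VotingClassifier(
    estimators=[("gnb", GaussianNB()),
                ("svm", SVC(probability=True)),      # probability=True is required for soft voting
                ("dt", DecisionTreeClassifier()),
                ("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100))],
    voting="soft")
ensemble.fit(X_tr, y_tr)
print("test accuracy:", ensemble.score(X_te, y_te))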
The study in [21] used a combination of symptoms and ML techniques as a screening model that identifies individuals infected with COVID-19. It compared five ML techniques: LR stepwise, RF, naïve Bayes (NB), decision tree using C5.0 (DT), and XGBoost. Moreover, it utilized different data balancing techniques (upsampling, downsampling, the synthetic minority oversampling technique (SMOTE), and random oversampling examples (ROSE)), resulting in 25 combinations of ML techniques and sampling strategies. The applied dataset was divided into 80% training and 20% testing sets. Moreover, participants with a probability of more than 50% were classified as infected with COVID-19. Finally, the study applied the Matthews correlation coefficient (MCC), specificity, F1-score, sensitivity, a positive predictive value (PPV), and a negative predictive value (NPV) to evaluate the results. The XGBoost, LR, and random forest techniques presented the best median MCCs, followed by naïve Bayes (NB) and decision trees.
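The following Python sketch shows one such combination (SMOTE oversampling followed by a classifier, scored with the MCC); it assumes the imbalanced-learn package and uses synthetic data, so it only illustrates the general recipe rather than the exact pipeline of [21].

# SMOTE oversampling of the minority (positive) class followed by a classifier, scored with the MCC.
from imblearn.over_sampling import SMOTE                 # assumes imbalanced-learn is installed
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data: roughly 5% positives, mimicking a screening setting.
X, y = make_classification(n_samples=2000, n_features=15, weights=[0.95, 0.05], random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=2)

X_bal, y_bal = SMOTE(random_state=2).fit_resample(X_tr, y_tr)   # balance the training set only
clf = RandomForestClassifier(n_estimators=200, random_state=2).fit(X_bal, y_bal)
print("MCC on the untouched test set:", matthews_corrcoef(y_te, clf.predict(X_te)))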
The study employed co-occurrence analysis because there were no training data and in order to reduce false positives.
On the other hand, seven ML classifiers were trained using data extracted from ED free-text reports [29]: Bayesian classifiers (NB, Bayesian network with the K2 algorithm (K2-BN), and efficient Bayesian multivariate classification (EBMC)), function classifiers (LR, SVM, and artificial neural network (ANN)), and a decision tree classifier (RF). Their diagnostic capabilities were compared to an expert-built influenza Bayesian classifier. The NLP results showed a 90% accurate determination of the required clinical findings from ED free-text reports. Missing values in the training dataset were assigned the value false, whereas in the testing dataset they were assigned the value missing (M). In the study, only the AUC was used to evaluate the performance of the classifiers. Four ML classifiers, namely NB, SVM, LR, and ANN, obtained the highest AUC, 0.93.

The authors in [20] introduced a combination of ML technology, cloud services, and mobile communications and performed pattern recognition analysis. Three ML algorithms were applied: NB, the filtered classifier (FC), and RF, to allow changes in the weights of the attributes. The 6277 samples from 60 individuals were divided into 95% for training and 5% for validation. It is noticed that heart rate is the most important variable during the classification process. The study implemented NB and FC on a web service and experimented with the weights of the variables through RF as a software application. In addition, the NB classifier was implemented in [24, 32] to classify several infectious diseases. Moreover, a patient's self-examination is required in [24], and a proposed decision support system that supports infectious disease diagnosis and therapy is constructed. In [24], we focused on the diagnosis part only, which helps to diagnose the type of bacterial infection; it was tested on 84 medical records. The Bayesian classifier possesses high predictive performance in [32], and the output of the model is a list of possible diseases arranged according to the calculated probabilities.

Table 6: Comparison of pros and cons of the reviewed articles.
[19] Pros: (i) The proposed method provides efficient hospital resources. (ii) Simple and more generic features are used to encode the waveform dynamics in time and frequency domains. (iii) Low-cost wearable sensors are used to collect data. Cons: (i) The manual encoding of features used to encode the waveform dynamics in time and frequency domains is time-consuming and may have errors. (ii) The dataset is small.
[20] Pros: (i) The study shows an accessible, easy to use, flexible, ubiquitous, and cost-effective eHealth system for diagnosing infectious diseases from vital signs. Cons: (i) The short period of sampling affects the classification results, and more accuracy is needed. (ii) The small dataset affects the accuracy of the model.
[21] Pros: (i) The proposed model shows that a combination of symptoms assisted with the prediction of COVID-19 infection. (ii) It helps improve the test strategy by prioritizing users for testing. Cons: (i) Some studies criticize the use of symptoms for classifying COVID-19 because of the existence of other respiratory coinfections and the nonspecific nature of some symptoms.
[22] Pros: (i) The study shows that voice-based screening for COVID-19 is possible. (ii) Deep learning is useful in addressing the challenges of the long sequences in voice recordings, uncertain and presumably subtle vocal attributes of early COVID-19, and the lack of large labeled datasets. Cons: (i) The study uses a small set of vocal-input types that were self-recorded. (ii) The results show a connection between COVID-19 voice symptoms and detection of it, whereas there are no reliable reports to assess this correlation.
[23] Pros: (i) The study is helpful in diagnosing latent infectious diseases in early stages without prior training data and in a short period. Cons: (i) There is a need to improve the performance of the proposed model and to include accuracy measures when considering social media user information (e.g., age, gender, and posting frequency).
[24] Pros: (i) It is more comprehensive compared to other existing works. (ii) It establishes a reliable knowledge base for infectious diseases. Cons: (i) Symptoms are not weighed to distinguish syndromes and signs. (ii) Important etiological factors such as a history of close exposure to other infected patients are not included.
[25] Pros: (i) The study determines the most useful routine blood parameters for COVID-19 diagnosis from a large number of patients (>5000). (ii) Use of the ML model to diagnose COVID-19 from a routine blood test in the early symptomatic phase is effective when demands on the real-time reverse transcriptase-polymerase chain reaction (RT-PCR) test are enormous. Cons: (i) The proposed model might be inefficient at the stage where there are no systemic effects. (ii) The study includes only features available from a single center. (iii) The positive number of COVID-19 patients is limited in the study.
[26] Pros: (i) The proposed ML model can be used as a decision support system tool. (ii) A combination of routine laboratory results and ML models improves the COVID-19 diagnosis. Cons: (i) The study patients' comorbidities are not available in the dataset. (ii) Larger datasets are required. (iii) Some important features are not included, such as medical imaging, vital signs, physical examination, symptoms, and increasing sample size. (iv) The absence of data pertaining to vaccinations limits the study.
[27] Pros: (i) The use of ML to predict Candidemia improves decision-making for appropriateness in antifungal and antibiotic therapies. (ii) The use of ML reduces the delay in empirical treatment. (iii) The study uses real-world data with a large number of features (as predictors). Cons: (i) There is a need for a large number of patients. (ii) It is a retrospective study, and the pooled data are anonymous.
[28] Pros: (i) The EHR-based model can be used as clinical decision support to predict complicated cases of CDI on the day of diagnosis. (ii) Use of the ML approach is feasible for generating accurate and early risk predictions for complicated CDI. Cons: (i) The performance metrics that are used are not enough to evaluate the model. (ii) The proposed model is not tested in a real-time manner. (iii) The obtained results are based on a small dataset from one institution. (iv) The study does not mention the applied ML algorithm.
[29] Pros: (i) Machine-learning classifiers perform better than expert constructed classifiers using the NLP extraction tool. (ii) The large number of ED reports in training classifiers solves the imbalance problem in the dataset. (iii) The applied method for dealing with missing values in the study shows improved performance. Cons: (i) The study focuses on the data of only one health system. (ii) The number of selected features is considered small.
[30] Pros: (i) The study shows that the combination of clinical data and radiomic features, including all measures in the optimal model, has the highest performance and can effectively predict survival in COVID-19. Cons: (i) The used dataset is small, and there is a lack of an external validation dataset. (ii) Clinical studies are required to verify obtained results. (iii) The study tests only one ML classifier and one feature selection method.
[31] Pros: (i) Diagnosing COVID-19 is faster and more accurate than other traditional methods applied to the same CT image dataset. Cons: (i) None of the COVID-19 variants is included in the study. (ii) The proposed model is trained on a tiny dataset. (iii) The study uses only a COVID-19 patient dataset to train the model.

Figure 4: Percentages of dataset types and ML algorithms from the reviewed articles.
Another combination approach involving the EHR and ML techniques was proposed in [28], whose approach used EHR data (4271 features), curated data (23 features) from another study [37], and a combination of both feature sets to train the model. The proposed model in the study was trained on 80% of the data and evaluated on 20%. In addition, it applied the L2 regularization and k-best feature selection techniques to control overfitting and high dimensionality, respectively. The results showed that the model based only on EHR outperformed the curated and combination-based models in predicting complicated CDI. Moreover, the study implemented the area under the receiver operating characteristic curve (AUROC) to measure the model's discriminative performance.
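A hedged Python sketch of this general recipe (k-best feature selection, an L2-regularized classifier, and AUROC evaluation) is shown below; the feature matrix is synthetic and the choice of logistic regression is our assumption, since [28] does not specify its ML algorithm.

# k-best feature selection plus an L2-regularized classifier, evaluated with the AUROC.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1000, n_features=300, n_informative=20, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=3)   # 80/20 split

model = make_pipeline(
    SelectKBest(f_classif, k=30),                              # keep the k most informative features
    LogisticRegression(penalty="l2", C=1.0, max_iter=2000))    # L2 regularization against overfitting
model.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))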
Furthermore, the RF model was compared to the LR model for early detection of Candidemia in internal medical wards (IMWs) [27]. The RF model was built using both the original features and a shuffled copy of their values. The study used 150 different combinations of the three tuning parameters for the random forest algorithm. The discriminative performance of each model was assessed by sensitivity and specificity values and by the area under the ROC curve (AUC, C-statistic). Moreover, the Hosmer-Lemeshow test (HLT) was used to evaluate the models' calibration. Notably, a smaller value of the HLT statistic and a greater value of the HLT p value for the RF model indicated better performance. The cross-validation procedure showed that the best-tuned RF model outperformed the LR model regarding discrimination and calibration.
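The following Python sketch illustrates random-search hyperparameter tuning of a random forest with AUC as the selection metric; the parameter grid, number of iterations, and data are illustrative and much smaller than the 150 combinations explored in [27].

# Random search over random forest hyperparameters with AUC as the selection criterion.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold

X, y = make_classification(n_samples=800, n_features=25, random_state=4)

param_distributions = {
    "n_estimators": [100, 300, 500],
    "max_features": ["sqrt", "log2", None],
    "min_samples_leaf": [1, 3, 5, 10],
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=4),
    param_distributions=param_distributions,
    n_iter=20,                                  # far fewer combinations than the study's 150, for brevity
    scoring="roc_auc",
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=4),
    random_state=4)
search.fit(X, y)
print("best AUC:", search.best_score_, "with", search.best_params_)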
A supervised ML algorithm XGBoost classifier was implemented in [30] to train models to find patterns to predict the survival of COVID-19 patients. The predictive models can effectively predict alive or dead status in COVID-19 patients through various combinations of clinical data and radiomic (lung/lesion) features. The dataset was divided into 106 patients for training/validation and 46 for testing to evaluate the selected model.
In addition, bootstrap techniques were used for XGBoost hyperparameter tuning with 1000 repetitions through a random search method. Maximum relevance minimum redundancy (MRMR) was used for feature selection. Table 5 shows the ML techniques used in each article and the performance metrics used to evaluate the developed model.
Discussion and Future Research Trends
There are numerous limitations reported in diagnostic models of infectious diseases, as illustrated in Table 6. To overcome them, we give some recommendations for future research in the area. Moreover, Figure 4 shows that signs and symptoms were used by several articles to diagnose different infectious diseases, and the most used ML algorithm across different infectious diseases is NB. However, including more critical features can improve models' performance, and the automatic encoding of many features is more efficient than manual encoding. In addition, the consensus in most reviewed articles is the requirement for large datasets to improve accuracy, ensure the reliability of the developed model, and validate results. In addition, diagnostic lab test(s) are required to help confirm disease prognosis.
The ML models could extract delicate prognostic data from routine blood test results that were unobserved by the most experienced clinicians. However, some articles that use a single ML algorithm should focus on evaluating multiple ML algorithms and comparing different models' performance.
Further, NLP is beneficial to extract features from clinical reports, notes, social media, etc. However, a composite of NLP tools may be implemented to increase the accuracy of feature extraction.
Besides, deep learning models, specifically the convolutional neural network (CNN) model, are powerful in extracting large numbers of features from clinical images. These features are then deployed to classify infectious diseases through various ML models. The resulting performance is promising and can assist clinicians in diagnosing several infectious diseases from images in their early stages.
EHR contains laboratory data and radiology reports, free-text clinical notes, patients' demographics, etc. Therefore, the use of EHR can achieve improved accuracy. Moreover, if the naïve Bayes classifier is implemented, then calculating the degree of correlation between symptom vectors is essential to evaluate the probability of the infectious disease.
Finally, developing an ML model for detecting and classifying infectious diseases as a web application or API call is a good idea. However, future studies are required to investigate how ML models can perform in real time. In addition, prospective validations are needed to obtain more robust results. On the other hand, working with a heterogeneous dataset from multiple sources and implementing medical ontologies is more useful to ensure the model's generalizability. Furthermore, the model achieves better discriminative performance as more data become available and integrated. Moreover, developing a hybrid model for diagnosing infectious diseases from a vast dataset is recommended. It is important to note that our review did not perform any meta-analysis because the reviewed data from the studied articles are too heterogeneous [38].
Conclusion
Infectious diseases can affect humans worldwide and cause death in complicated situations. Many contributions have been developed using various methods to diagnose and classify infectious diseases. Further, ML algorithms can assist in the diagnosis of infectious diseases at early stages. By reviewing the selected articles, we found some limitations in these studies, including small datasets, which is the main limitation. Combining techniques to extract more features is useful and can improve ML predictive models' performance. It is essential to build a smart and generalized health system that can combine medical ontology, real-time heterogeneous data from multiple sources, and the ML predictive model to assist clinicians in diagnosing infectious diseases early. In addition, patients may have access to this system to alert them about possible infectious diseases.
|
2022-06-12T05:07:17.861Z
|
2022-06-09T00:00:00.000
|
{
"year": 2022,
"sha1": "1ad5d9e8893eaceb7016c740789997c7b088ef3c",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1ad5d9e8893eaceb7016c740789997c7b088ef3c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
16714272
|
pes2o/s2orc
|
v3-fos-license
|
Is it possible to construct the Proton Structure Function by Lorentz-boosting the Static Quark-model Wave Function?
The energy-momentum relations for massive and massless particles are E = p^2/2m and E = pc respectively. According to Einstein, these two different expressions come from the same formula E = \sqrt{(cp)^2 + m^2 c^4}. Quarks and partons are believed to be the same particles, but they have quite different properties. Are they two different manifestations of the same covariant entity as in the case of Einstein's energy-momentum relation? The answer to this question is YES. It is possible to construct harmonic oscillator wave functions which can be Lorentz-boosted. They describe quarks bound together inside hadrons. When they are boosted to an infinite-momentum frame, these wave functions exhibit all the peculiar properties of Feynman's parton picture. This formalism leads to a parton distribution corresponding to the valence quarks, with a good agreement with the experimentally observed distribution.
3). The formalism constitutes a representation of Wigner's little group of the
Poincaré group which dictates internal space-time symmetries of relativistic particles [2,3]. The little group is the maximal subgroup of the Lorentz group which leaves the four-momentum of a given particle invariant.
In this paper, we use this covariant oscillator formalism to see that the quark model and the parton model are two different manifestations of one covariant formalism. We shall see how the parton picture emerges from the Lorentz-boosted hadronic wave function. In Sec. 2, we introduce the basic ingredients of the covariant harmonic oscillator formalism. In Sec. 4, we use this formalism to show that the valence quark distribution in the proton structure function can be derived from the Lorentz-boosted quark-model wave function.
Covariant Harmonic Oscillators
The covariant harmonic oscillator formalism has been discussed exhaustively in the literature, and it is not necessary to give another full-fledged treatment in the present paper. We shall discuss here only its features needed for explaining the peculiarities of Feynman's parton picture.
Let us consider a bound state of two particles. Then there is a Bohr-like radius measuring the space-like separation between the quarks. There is also a time-like separation between the quarks, and this variable becomes mixed with the longitudinal spatial separation as the hadron moves with a relativistic speed. There are no quantum excitations along the time-like direction. On the other hand, there is the time-energy uncertainty relation which allows quantum transitions. The covariant oscillator formalism can accommodate these aspects within the framework of the present form of quantum mechanics. The uncertainty relation between the time and energy variables is the c-number relation [4], which does not allow excitations along the time-like coordinate. This aspect is illustrated in Fig. 1.
Let us now consider a hadron consisting of two quarks. If the space-time positions of the two quarks are specified by x_a and x_b respectively, the system can be described by the variables X and x. The four-vector X specifies where the hadron is located in space and time, while the variable x measures the space-time separation between the quarks. In the convention of Feynman et al. [1], the internal motion of the quarks bound by a harmonic oscillator potential of unit strength can be described by the Lorentz-invariant oscillator equation of Eq. (2). We use here the metric x^µ = (x, y, z, t).
If the hadron is at rest, we can consider a solution of the form ψ(x, y, z, t) = ψ(x, y, z) (1/π)^{1/4} exp(−t²/2), where ψ(x, y, z) is the wave function for the three-dimensional oscillator with appropriate angular momentum quantum numbers. Indeed, the above wave function constitutes a representation of Wigner's O(3)-like little group for a massive particle [2]. In the above expression, there are no time-like excitations, and this is consistent with what we see in the real world.
Since the three-dimensional oscillator differential equation is separable in both spherical and Cartesian coordinate systems, ψ(x, y, z) consists of Hermite polynomials of x, y, and z. If the Lorentz boost is made along the z direction, the x and y coordinates are not affected, and can be dropped from the wave function. The wave function of interest can then be written in terms of ψ_n(z), the wave function of the n-th excited oscillator state, together with the Gaussian factor in t, giving the full wave function ψ_n(z, t). The subscript 0 means that the wave function is for the hadron at rest. The above expression is not Lorentz-invariant, and its localization undergoes a deformation as the hadron moves along the z direction [2]. This is still a Lorentz-covariant expression, and this form satisfies the Lorentz-invariant differential equation of Eq.
(2) even if the z and t variables are replaced by their boosted counterparts. This corresponds to a Lorentz boost of the system along the z direction with the boost parameter η. This becomes more transparent if we use the light-cone coordinate system, as is illustrated in Fig. 2. Here one coordinate becomes expanded while the other becomes contracted. This type of deformation is called a "squeeze" these days [5]. The wave function becomes a Gaussian in the light-cone variables, where we have left out the Hermite polynomials for simplicity, because the essential properties of the oscillator wave functions are dominated by their Gaussian factor.
If the system is boosted, the variables u and v are replaced by e^{−η}u and e^{η}v respectively, as is illustrated in Fig. 2, and the Lorentz-squeezed wave function of Eq. (10) follows. The transition from Eq. (9) to Eq. (10) is illustrated in Fig. 3. We can produce this figure by combining the quantum mechanics of Fig. 1 and the special relativity of Fig. 2.
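For concreteness, the light-cone variables and the Lorentz-squeezed ground-state Gaussian referred to above can be written out explicitly; the expressions below follow the standard covariant-oscillator conventions, and their precise normalization is our assumption:

u = \frac{z + t}{\sqrt{2}}, \qquad v = \frac{z - t}{\sqrt{2}},
\qquad
\psi_{\eta}(z,t) \;\propto\; \exp\!\left[-\tfrac{1}{2}\left(e^{-2\eta}u^{2} + e^{2\eta}v^{2}\right)\right],

which reduces to the static Gaussian \exp[-(z^{2}+t^{2})/2] when \eta = 0.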
Feynman's Parton Picture
In 1969 [6], Feynman observed the following peculiarities in his parton picture of hadrons.
1). The picture is valid only for hadrons moving with velocity close to that of light.
2). The interaction time between the quarks becomes dilated, and partons behave as free independent particles.
3). The momentum distribution of partons becomes widespread as the hadron moves fast. 4). The number of partons seems to be infinite or much larger than that of quarks.
Because the hadron is believed to be a bound state of two or three quarks, each of the above phenomena appears as a paradox, particularly 2) and 3) together. We would like to resolve this paradox using the covariant harmonic oscillator formalism. For this purpose, we need a momentum-energy wave function. If the quarks have the four-momenta p_a and p_b, we can construct two independent four-momentum variables [1]. The four-momentum P is the total four-momentum and is thus the hadronic four-momentum, while q measures the four-momentum separation between the quarks. We expect to get the momentum-energy wave function by taking the Fourier transform of Eq. (10). Defining the momentum-energy variables in the light-cone coordinate system, the Fourier transform of Eq. (12) can be carried out, and the resulting momentum-energy wave function has the same mathematical form as the space-time wave function, because we are using the harmonic oscillator here. The Lorentz squeeze properties of these wave functions are also the same, as is indicated in Fig. 4. When the hadron is at rest with η = 0, both wave functions behave like those for the static bound state of quarks. As η increases, the wave functions become continuously squeezed until they become concentrated along their respective positive light-cone axes. Let us look at the z-axis projection of the space-time wave function. Indeed, the width of the quark distribution increases as the hadronic speed approaches the speed of light. The position of each quark appears widespread to the observer in the laboratory frame, and the quarks appear like free particles.
Furthermore, the interaction time of the quarks among themselves becomes dilated. Because the wave function becomes widespread, the distance between one end of the harmonic oscillator well and the other end increases, as is indicated in Fig. 3. This effect, first noted by Feynman [6], is universally observed in high-energy hadronic experiments. Let us look at the time ratio more carefully. The period of oscillation increases like e^η, as was predicted by Feynman [6].
In the picture of the Lorentz squeezed hadron given in Fig. 3, the hadron moves along the u (positive light-cone) axis, while the external signal moves in the direction opposite to the hadronic momentum, which corresponds to the v (negative light-cone) axis. This time interval is proportional to the minor axis of the ellipse given in Fig. 3.
If we use T_ext and T_osc for the quark's interaction time with the external signal and the interaction time among the quarks respectively, the ratio of the interaction time to the oscillator period becomes e^{−2η}. The energy of each proton coming out of the Fermilab accelerator is 900 GeV. This leads to a ratio of the order of 10^{−6}, which is indeed a small number. The external signal is not able to sense the interaction of the quarks among themselves inside the hadron. Thus, the quarks appear to be free particles to the external signal. This is the cause of incoherence in the parton interaction amplitudes. The momentum-energy wave function is just like the space-time wave function in the oscillator formalism. The longitudinal momentum distribution becomes widespread as the hadronic speed approaches the velocity of light. This is in contradiction with our expectation from nonrelativistic quantum mechanics that the width of the momentum distribution is inversely proportional to that of the position wave function. Our expectation is that if the quarks are free, they must have their sharply defined momenta, not a widespread distribution. This apparent contradiction presents to us the following two fundamental questions: 1). If both the spatial and momentum distributions become widespread as the hadron moves, and if we insist on Heisenberg's uncertainty relation, is Planck's constant dependent on the hadronic velocity?
2). Is this apparent contradiction related to another apparent contradiction that the number of partons is infinite while there are only two or three quarks inside the hadron?
The answer to the first question is "No", and that for the second question is "Yes". Let us answer the first question, which is related to the Lorentz invariance of Planck's constant. If we take the product of the width of the longitudinal momentum distribution and that of the spatial distribution, we end up with a relation whose right-hand side increases as the velocity parameter increases. This could lead us to an erroneous conclusion that Planck's constant becomes dependent on velocity. This is not correct, because the longitudinal momentum variable q_z is no longer conjugate to the longitudinal position variable when the hadron moves. In order to maintain the Lorentz invariance of the uncertainty product, we have to work with a conjugate pair of variables whose product does not depend on the boost. It is quite clear that the light-cone variables u and v are conjugate to q_u and q_v respectively. It is also clear that the distribution along the q_u axis shrinks as the u-axis distribution expands. The exact calculation shows that Planck's constant is indeed Lorentz-invariant. Let us next resolve the puzzle of why the number of partons appears to be infinite while there are only a finite number of quarks inside the hadron. As the hadronic speed approaches the speed of light, both the x and q distributions become concentrated along the positive light-cone axis. This means that the quarks also move with velocity very close to that of light. Quarks in this case behave like massless particles.
We then know from statistical mechanics that the number of massless particles is not a conserved quantity. For instance, in black-body radiation, free light-like particles have a widespread momentum distribution. However, this does not contradict the known principles of quantum mechanics, because the massless photons can be divided into infinitely many massless particles with a continuous momentum distribution.
Likewise, in the parton picture, massless free quarks have a wide-spread momentum distribution. They can appear as a distribution of an infinite number of free particles. These free massless particles are the partons. It is possible to measure this distribution in high-energy laboratories, and it is also possible to calculate it using the covariant harmonic oscillator formalism. We are thus forced to compare these two results [7]. Figure 5 shows the result.
Concluding Remarks
In this report, we first introduced the covariant harmonic oscillator formalism, which is consistent with all physical laws of quantum mechanics and special relativity. We then used this formalism to show that the quark model and the parton model are two limiting cases of one covariant picture of quantum bound states.
|
2014-10-01T00:00:00.000Z
|
2004-07-12T00:00:00.000
|
{
"year": 2004,
"sha1": "4409119630296a384c809189ef443ff287a18a4e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0407144",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4409119630296a384c809189ef443ff287a18a4e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
236957091
|
pes2o/s2orc
|
v3-fos-license
|
Identifying Correlation Clusters in Many-Body Localized Systems
We introduce techniques for analysing the structure of quantum states of many-body localized (MBL) spin chains by identifying correlation clusters from pairwise correlations. These techniques proceed by interpreting pairwise correlations in the state as a weighted graph, which we analyse using an established graph theoretic clustering algorithm. We validate our approach by studying the eigenstates of a disordered XXZ spin chain across the MBL to ergodic transition, as well as the non-equilibrium dynamics in the MBL phase following a global quantum quench. We successfully reproduce theoretical predictions about the MBL transition obtained from renormalization group schemes. Furthermore, we identify a clear signature of many-body dynamics analogous to the logarithmic growth of entanglement. The techniques that we introduce are computationally inexpensive and in combination with matrix product state methods allow for the study of large scale localized systems. Moreover, the correlation functions we use are directly accessible in a range of experimental settings including cold atoms.
I. INTRODUCTION
Initiated by the seminal work of Anderson [1], many-body localization (MBL) is now understood as a dynamical quantum phase of matter [2, 3], defined by the properties of its highly excited many-body eigenstates. In particular, the entanglement of eigenstates in the MBL phase has been found to obey an area law even at finite energy densities [4-6] and to violate the eigenstate thermalization hypothesis [7, 8], due to the existence of quasi-local conserved quantities [6, 9-12]. The concept of MBL has since proven central to the understanding of several aspects of non-equilibrium physics. For instance, MBL is essential to stabilise various emergent Floquet phases of matter, such as discrete time crystals [13, 14].
The study of MBL has been driven by large scale numerics and experimental advances in the control of isolated quantum systems. These efforts have identified characteristic properties of MBL, such as the unbounded logarithmic growth of entanglement following a global quench [15-20], which distinguishes it from Anderson localization (AL), where the entanglement saturates, and the presence of an eigenstate transition to an ergodic phase at finite disorder strengths [21-26]. The logarithmic entanglement growth has since been observed experimentally for small system sizes in Rydberg atomic systems and in superconducting circuits [27, 28]. However, extracting the entanglement entropy experimentally generically requires high fidelity measurements of a number of non-local observables that scale exponentially with system size. This makes experimental measurements of the entanglement entropy prohibitively difficult for large systems. In cold atom setups, large systems and long times can be reached, even in 2D, and clear signals of MBL have been detected in local measurements [29-31].
In spite of the recent progress, the MBL transition is still not fully understood. While we have powerful numerical and analytical techniques that allow us to investigate the slightly entangled eigenstates deep in the MBL phase [10, 11, 32-35], the transition to the ergodic phase is much harder to study. Phenomenological renormalization group (RG) approaches have emerged as a promising theoretical description of the transition [36-41]. Although the assumptions behind the various models differ, most of them describe the MBL transition in terms of the proliferation of "thermal blocks" versus "insulating blocks", i.e., regions of the spin chain that look locally thermal or fully localized, respectively. However, the interpretation of these approaches rests on phenomenological assumptions which could bias the results. Indeed, most models assume that each of these blocks is local, although the existence of sparse thermal blocks spanning the whole chain has also been suggested [40]. These RG studies suggest that the MBL transition is of Kosterlitz-Thouless (KT) type with a delocalization mechanism called avalanche instability, also sometimes referred to as a quantum avalanche [37, 42].
Since RG approaches provide a clear mechanism for the transition and allow for a prediction of a scaling behaviour close to the transition, it is desirable to have a clear prescription in order to identify these "blocks" from states obtained numerically or experimentally. The first numerical validation of this picture in a microscopic model has been provided in Ref. [43], where it was proposed to identify these "blocks" by finding what they denote as entanglement clusters. These are clusters of spins that are more strongly entangled with each other than the rest of the system. A numerical investigation using exact diagonalization for small systems revealed that the average block size of these entanglement clusters is indeed consistent with the RG analysis of the transition. Entanglement entropy is paramount for this approach, but it is costly to calculate both numerically and from experimental measurements.
Motivated by that work, we propose an approach in which we identify these structures in MBL systems in a scalable way that is relevant for efficient matrix product state (MPS) based simulations and accessible in experiments. We focus on the XXZ spin chain in the presence of a disordered z-directed field defined by the Hamiltonian of Eq. (1). The disordered field h_i is sampled uniformly from the interval [−W, W], where W ≥ 0 controls the strength of the disorder. We consider the Anderson insulator at ∆ = 0 as well as the Heisenberg model at ∆ = 1, which is believed to exhibit an MBL transition at W_C estimated between 2.7 and 3.8 [21, 22, 26, 44-46]. In this paper, we present practical tools to efficiently identify the ergodic clusters within MBL eigenstates using pairwise correlations, by applying methods originally developed in the context of graph theory [47-50]. We validate our approach in two ways: First, we show that the two-site mutual information (TSMI) is a useful proxy for analysing the structure of MBL eigenstates. Second, we demonstrate in Sec. IV that our clustering algorithm, applied during time evolution using the TSMI as well as the pairwise connected correlation functions in the σ^z basis, indicates the logarithmic spreading of entanglement.
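For reference, a plausible explicit form of the Hamiltonian in Eq. (1), assuming the standard disordered XXZ chain described in the text (the precise normalization of the spin operators and of the coupling J is our assumption), is

H = J \sum_{i} \left( S^{x}_{i} S^{x}_{i+1} + S^{y}_{i} S^{y}_{i+1} + \Delta\, S^{z}_{i} S^{z}_{i+1} \right) + \sum_{i} h_{i} S^{z}_{i},

with h_i drawn uniformly from [−W, W].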
II. FROM CORRELATIONS TO GRAPH THEORY
The quantum mutual information of two subsystems A and B is a correlation measure defined as I(A; B) = S(A) + S(B) − S(A ∪ B), where S(A) = −Tr[ρ_A log(ρ_A)] is the von Neumann entanglement entropy for the subsystem A. The TSMI corresponds to the case where subsystems A and B each consist of a single site, and in this case we denote it I(i; j) for two sites i and j.
The TSMI captures the classical and quantum correlations between two sites, and has already been shown to be a relevant probe of localization [51]. In particular, spatial fluctuations in the TSMI grow logarithmically under non-equilibrium dynamics, mirroring the entanglement entropy [51]. Another useful quantity to study quantum correlations is the (connected) correlation function of two operators Ô_A and Ô_B, C(Ô_A, Ô_B) = ⟨Ô_A Ô_B⟩ − ⟨Ô_A⟩⟨Ô_B⟩, where ⟨Ô⟩ denotes the expectation value of the operator Ô. Although the TSMI takes into account all pairwise correlations [52, 53], it is less accessible in experiments than certain correlation functions. In this paper we introduce tools borrowed from the field of graph theory to extract what we call correlation clusters, in analogy to Ref. [43]. This provides an efficient method for studying correlations in MBL systems. Graph theory has been used in the past to detect quantum phase transitions in equilibrium settings [54-56]. Recently, another work identified the so-called "ergodic bubbles" (i.e. regions of space where the expectation values of local operators look thermal) using neural network techniques [57]. Our starting point is to construct a matrix M_ij containing the correlations between sites i and j, and to interpret it as an adjacency matrix for a weighted graph, as illustrated in Fig. 1.

Figure 1: Illustration of the clustering procedure; the modularity Q is defined in Eq. (4). In the second step, the weakest edges have been successively removed until one site is isolated. This clustering is associated with a very low modularity of Q = −0.00023, indicating no community structure. In the third step, the new clustering yields three clusters and a relatively high modularity of 0.32, which turns out to be the highest obtained following the procedure. Therefore, we identify this clustering as the physical one. The next step of the decomposition yields four clusters with a modularity of Q = 0.23, smaller than in the last step, indicating a community structure of lower quality. The rest of the procedure is not represented here, but the modularity was decreasing at each new step.
The vertices of this graph are the lattice sites of our system, and the bonds connecting them are weighted by the matrix element M_ij between that pair. Our goal of finding the correlation clusters in the state translates into finding "communities" within this graph. We consider M_ij = I(i; j) in the case of eigenstates, to which we add M_ij = C(σ^z_i, σ^z_j) for dynamics. The task of finding communities has received considerable attention in the field of graph theory [47-50]. This is usually achieved by splitting the graph into disjoint sets of vertices which we refer to as clusters. A given decomposition of a graph into clusters is referred to as a clustering. The goal is to find a clustering that is optimal by some well-defined measure. Inspired by the well-established Girvan-Newman approach [47, 49], we propose the following three-step procedure, shown schematically in Fig. 1, for finding the optimal clustering from the correlation matrix M_ij:
1. Successively remove the weakest bonds of the graph.
2. When the removal of a bond results in two parts of the graph becoming disconnected, we store the new clustering. This clustering corresponds to the set of clusters, where a cluster contains sites that are connected to each other.
3. Repeating steps 1 and 2 appropriately, we eventually end up with a completely disconnected graph, and have stored a sequence of different clusterings.
For each of these stored clusterings we then compute the modularity Q, defined in Eq. (4), which takes values Q ∈ [−1/2, 1] and quantifies how good the clustering is, with values close to 1 corresponding to a good clustering, or "community structure". We select the correct clustering as the one with the highest modularity. The first step differs from the original Girvan-Newman approach: while in our case we are guided by the physical intuition that two correlation clusters are only connected by weak bonds, Girvan and Newman use a quantity called edge betweenness to assess which bonds are most likely to link separated communities [48].
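As a concrete illustration, a minimal Python sketch of this procedure is given below, using networkx; the function name, the random test matrix, and the use of networkx's built-in weighted modularity (assumed here to coincide with the paper's Eq. (4)) are our own choices rather than the authors' actual code, and the separate modularity threshold for ergodic states is not included.

# Sketch of the clustering procedure: interpret the correlation matrix M as a weighted graph,
# successively remove the weakest edges, store each new clustering, and keep the one with
# the highest modularity computed on the full graph.
import networkx as nx
import numpy as np
from networkx.algorithms.community import modularity

def correlation_clusters(M):
    L = M.shape[0]
    full = nx.Graph()
    full.add_nodes_from(range(L))
    for i in range(L):
        for j in range(i + 1, L):
            full.add_edge(i, j, weight=float(M[i, j]))

    pruned = full.copy()
    # Edges sorted from weakest to strongest correlation.
    edges = sorted(pruned.edges(data="weight"), key=lambda e: e[2])
    best_Q, best_partition = -np.inf, [set(range(L))]
    n_components = 1
    for i, j, _ in edges:
        pruned.remove_edge(i, j)
        components = list(nx.connected_components(pruned))
        if len(components) > n_components:             # a new clustering has appeared
            n_components = len(components)
            Q = modularity(full, components, weight="weight")
            if Q > best_Q:
                best_Q, best_partition = Q, components
    return best_partition, best_Q

# Example with a random symmetric "correlation" matrix (stand-in for a real TSMI matrix):
rng = np.random.default_rng(0)
A = rng.random((8, 8))
M = (A + A.T) / 2
np.fill_diagonal(M, 0.0)
clusters, Q = correlation_clusters(M)
print(clusters, Q)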
III. CORRELATION CLUSTERS IN EIGENSTATES
We will now focus on the clustering in mid-spectrum eigenstates for the Hamiltonian of Eq. (1) for different values of the disorder strength. We analyse the structure of the optimal clustering for the eigenstates across the MBL-ergodic phase transition, using the TSMI to define the graph M_ij. It has been shown in earlier studies that the number of entanglement clusters can be taken as a relevant scaling parameter for the MBL transition [43]. In order to validate our graph clustering approach, we perform a similar scaling analysis. The average number of correlation clusters n as a function of disorder is shown in Fig. 2 for different system sizes. We select 50 eigenstates from the middle of the spectrum of 700 disorder configurations and then apply the algorithm outlined in the previous section to extract the average number of clusters in the optimal clustering. The data collapses convincingly with the scaling form n/L = f((W − W_c)/L^u), with parameters W_c = 3.8 and u = 1.26, taken from Ref. [43]. It was pointed out that this scaling is consistent with theoretical studies, where a Harris-type bound on the exponents has been derived [58].
We note that if the system is ergodic, then we would expect the mutual information to be uniform on average between all pairs of sites [51]. In this case, the optimal clustering is a single cluster containing all sites, but the algorithm as defined will instead choose a clustering with very low modularity. We therefore need to bypass the graph theory algorithm by setting a threshold Q_th: states yielding a modularity Q < Q_th are considered ergodic.
Since our numerics are performed on finite system sizes up to L = 16, the modularity will be affected by finite size effects that we must take into account in Q_th. To understand these effects we consider states deep in the MBL phase, where we can make statements about the optimal clustering. In particular, MBL eigenstates are simultaneous eigenstates of an extensive number of exponentially localized l-bits with a characteristic localization length [9]. This means that the structure of the clustering should be independent of system size, so long as the system is sufficiently large compared to the localization length. As explained in Appendix E, this actually results in a system size dependence of the modularity for similar clusterings. To account for this we use a system-size-dependent threshold Q_th(L), parametrized by α ∈ [0, 1]. In practice we obtain the coefficient a by fitting Q(W = 6, L), where W = 6 is the maximum disorder strength considered in our scaling analysis and is located deep within the MBL phase. In the main text, we present results for the overall cutoff parameter α = 0.3. We show in Appendix A that as long as α gives the correct clustering behaviour deep in the MBL and ergodic phases, the scaling collapse is not sensitive to the specific choice of this coefficient. After focussing on the average cluster size, we will now investigate the structure of individual eigenstates using the clustering algorithm. The TSMI matrix M_ij is shown in Fig. 3 for a single mid-spectrum eigenstate in an L = 50 system with disorder strength W = 12, obtained using DMRG-X [32], and compared against the bipartite von Neumann entanglement entropy for cuts along different bonds. Here we can see that the localized state is decomposed into a sequence of small clusters (red boxes) and there are only weak off-diagonal (long-range) correlations in the matrix. However, we observe several examples of clusters that contain sites that are not nearest neighbours, a phenomenon which, following Ref. [43], we refer to as "leapfrogging" (green and yellow boxes). Ideally, we would like to be able to average over many eigenstates obtained by MPS methods on the MBL side of the transition, and therefore to extrapolate the scaling. However, given the current state of algorithms, we find this goal impossible to achieve due to the bias in the sampling of the states [59].
A few comments are in order: First, the clustering algorithm is a numerically very inexpensive procedure which is easily scalable, since only two-site correlations need to be computed, allowing us to apply it to states in MPS form.
Second, there is a clear agreement between the strong correlations and the increase in entanglement, as it can be seen by comparing the TSMI matrix with the bipartite von Neumann entanglement entropy (see Fig. 3). Indeed, the separation between two local communities always coincide with a local minimum of bipartite entanglement entropy. However, 3. Example of the mutual information matrix and the associated communities (correlation clusters) of an eigenstate of a MBL hamiltonian obtained using the DMRG-X algorithm [32]. This disorder strength is W = 12, the system size is L = 50. On the top panel, we plot the mutual information matrix. We draw boxes around the matrix elements belonging to the same "correlation cluster". We use a red box when a cluster is connected (i.e. no leapfrogging), while we use orange and green boxes for the two disconnected clusters. On the bottom panel, we present the bipartite entanglement entropy as a function of sites. The boundary between two clusters is signaled by a vertical red line. The boundaries between two disconnected clusters are always associated with a local minimum of entanglement entropy.
our approach gives information that could not be directly extracted from the bipartite entanglement entropy. In particular, not all local minima of the entanglement entropy signal a separation between two clusters. For this reason, the bipartite entanglement entropy alone is not a good criterion to detect clusters, since one would need to arbitrarily fix a cutoff entanglement entropy to determine where the separation between two communities is. Moreover, the entanglement entropy is unable to detect non-local clusters, i.e. "leapfrogging", which we detect with our graph theory approach.
This brings us to our third point, namely that our approach does not rest on a priori physical assumptions, such as locality of the clusters. Indeed, the graph theory algorithm does not know about the spatial arrangement of the sites, since its only input is the TSMI matrix. However, we note that in all cases we considered, the clusters were still relatively local and did not extend throughout the system, in accordance with the results of Ref. [43].

FIG. 4. Average cluster length (panels a) and b)) and average modularity (panels c) and d)) as a function of time for different system sizes, with disorder strength W = 8, for both an Anderson localized (AL) (J_z = 0) and MBL Hamiltonian (J_z = 1). We start from a Néel state, simulate a quench using exact time evolution (ETE) and apply our graph theory approach to the TSMI matrix (panels a) and c)) and to the pairwise correlation functions in the σ^z basis (panels b) and d)). The fitting parameters are the following: panel c) MBL: a = 3.69, AL: a = 3.63; panel d) MBL: a = 3.76, AL: a = 3.77.
IV. NON-EQUILIBRIUM DYNAMICS
We now turn to the behavior under non-equilibrium dynamics in the localized phase and compare AL and MBL systems. We consider a global quantum quench protocol, starting from an initial Néel state |· · · ↑↓↑↓ · · ·⟩, and time evolve using the Hamiltonian Eq. (1) with ∆ = 1 (MBL) or ∆ = 0 (AL). We can then analyse the correlations as a function of time and identify the time dependence of the correlation clusters. We compare results obtained using the TSMI, M_ij = I(i : j), and the correlation functions, M_ij = C(σ^z_i, σ^z_j). Fig. 4 a and b show the numerical results for the average cluster length l as a function of time. When using M_ij = I(i : j), l stays approximately constant in time, both in the interacting and non-interacting cases. In contrast, when using M_ij = C(σ^z_i, σ^z_j), l decreases in the MBL case while it stays constant in the AL case.
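As an illustration of the quench protocol, the following minimal sketch (not the production code of this work) performs exact time evolution of a Néel state under a random-field XXZ chain, which we assume to be the form of the Hamiltonian referred to as Eq. (1); the couplings, system size, disorder strength and times are placeholders.

```python
# Illustrative sketch: exact time evolution of a Néel state under a random-field
# XXZ chain (assumed form of Eq. (1)); Delta = 1 corresponds to MBL, Delta = 0 to AL.
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
eye = np.eye(2)


def op_on_site(op, site, L):
    """Embed a single-site operator at `site` in an L-site chain."""
    mats = [eye] * L
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out


def xxz_hamiltonian(L, delta, W, rng):
    """H = sum_i [SxSx + SySy + delta SzSz] + sum_i h_i Sz_i with h_i in [-W, W]."""
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    h = rng.uniform(-W, W, size=L)
    for i in range(L - 1):
        for op, g in ((sx, 1.0), (sy, 1.0), (sz, delta)):
            H += g * op_on_site(op, i, L) @ op_on_site(op, i + 1, L)
    for i in range(L):
        H += h[i] * op_on_site(sz, i, L)
    return H


def quench(L=10, delta=1.0, W=8.0, times=(1e0, 1e3, 1e7), seed=0):
    """Return the time-evolved state vectors of a Néel state at the given times."""
    rng = np.random.default_rng(seed)
    evals, evecs = np.linalg.eigh(xxz_hamiltonian(L, delta, W, rng))
    index = int("".join("01"[i % 2] for i in range(L)), 2)  # |up down up down ...>
    psi0 = np.zeros(2 ** L, dtype=complex)
    psi0[index] = 1.0
    c = evecs.conj().T @ psi0
    return [evecs @ (np.exp(-1j * evals * t) * c) for t in times]
```

Each returned state vector can then be analysed with the clustering procedure described above, either from its TSMI matrix or from the σ^z correlation functions.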
In order to understand these results better, and to be able to further distinguish MBL from AL using graph theory, we show the numerical results for the average modularity as a function of time in Fig. 4, panels c) and d). The offset of the modularity has been shifted so that the values for different system sizes coincide at short times. Indeed, it is shown in Appendix E that the modularity scales with the system size as Q ∼ q − a/L for comparable clusters. The value of a is found by fitting the data at short times, and we find it to be roughly the same for both AL and MBL. In the non-interacting case, the modularity Q stays constant throughout the time evolution. On the contrary, Q decreases in the interacting case.
These observations can be explained as follows: at very short times (of the order of 1/J), correlation clusters appear similarly for both AL and MBL. Due to dephasing in the MBL case, these clusters interact on timescales that grow exponentially with the separation between them, leading to a logarithmically slow decrease of the modularity until it reaches a minimum set by the system size. These additional correlations over longer ranges can lead to the clusters breaking down further. Nonetheless, due to the presence of l-bits in the MBL system, these clusters are robust.
When using the TSMI as the adjacency matrix, the clustering overall stays the same and the dephasing can be seen only through the modularity, which signals the interaction between the clusters. When using the correlation functions, the picture stays roughly the same, although the average cluster size decreases slightly.
The difference in behaviour between the TSMI and the correlation functions stems from the lack of transport in MBL [60], which implies that off-diagonal correlation functions cannot build up beyond the localization length. Thus, for our charge-conserving model, only the σ^z component contributes to the growth of the TSMI at long times. Numerical evidence for the spreading of correlation functions in the σ^z basis is presented in appendix F. As a consequence, when using the TSMI, the information contained in the σ^z correlation functions is washed out by all the other correlations, which necessarily decay for sufficiently large distances. This leads to more robust clusters which interact less strongly with each other. This is in line with the findings of previous works [61][62][63], which have shown that quantities based on these correlators, in particular certain types of quantum Fisher information, can probe the logarithmic growth of entanglement in MBL systems.
These findings show that it is advantageous to consider the σ^z component in this context. In particular, the diagonal σ^z correlations are accessible in existing quantum gas microscope experiments [29,64,65], and thus our technique can be directly applied in such settings.
V. CONCLUSION
In this work, we have shown how to efficiently investigate the structure of MBL states using pairwise correlation functions and the TSMI. We focused on two applications. First, we provide new numerical techniques for probing the structure of MBL eigenstates, scalable to large systems and particularly relevant for states obtained by MPS methods. Second, we show that our approach can provide a characterization of dynamics in the MBL phase. We have demonstrated that our clustering procedure yields results physically consistent with previously known results. When looking at the eigenstates, the scaling of the length of the clusters found in previous works [43] has been recovered. When looking at the dynamics, our results were consistent with the dephasing process between distant l-bits which is observed in other quantities such as the entanglement entropy or the quantum Fisher information. Moreover, we found that the quality of the clustering at late times was still high, a fact which underlines the relevance of the clustering in the time evolution of MBL systems, for it is the persistence of these relatively well-separated clusters which prevents full thermalization of the state and keeps the saturation of entanglement entropy well below the Page value.
More broadly, we have demonstrated the possibility of probing the structure of quantum states based solely on pairwise correlations using graph theory approaches.

Appendix A: Choice of the cutoff parameter α

In the main text, we present the scaling collapse of the number of clusters divided by system size. When the modularity obtained for one clustering is smaller than Q_th(L) = α(1 − a/L), we bypass our algorithm and consider that the state is fully ergodic and therefore made of a single cluster. In Figs. 5 and 6, we show that the scaling collapse is not sensitive to the value of the coefficient α, as long as α is such that the modularity of almost all eigenstates deep in the ergodic (resp. MBL) phase is below (resp. above) Q_th(L).
The modularity of a given partition into communities is defined as [49]

Q = (1/2m) Σ_{ij} [M_{ij} − P_{ij}] δ(c_i, c_j),

where P_{ij} is the expected adjacency matrix of a random graph which shares the same structural properties as the original graph of interest without presenting the same community structure. This random graph is also sometimes called the "null model". In order to determine the matrix P, we must first specify a choice for the null model. Since it has to be similar to the original graph, we impose that every vertex of the random graph has the same degree k_i as in the original one, that is to say

Σ_j P_{ij} = k_i.

In other words, every vertex of the null model shares as much weight with the rest of the system as in the graph of interest, although the connections between vertices are assigned randomly. On average, the vertices i and j will then be connected by an edge of weight P_{ij} = k_i k_j / (2m) [50], yielding [49]

Q = (1/2m) Σ_{ij} [M_{ij} − k_i k_j/(2m)] δ(c_i, c_j).

We can see that this measure solves the issue initially encountered, since the partition containing all vertices in a single community has zero modularity. A value of the modularity close to zero means that the partition is not better than a random one, while a value close to one indicates a strong community structure.
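A minimal sketch of this definition in code (illustrative, not part of the original work) makes the role of the null model explicit: the modularity is computed directly from a weighted adjacency matrix M (e.g. the TSMI matrix) and a list of cluster labels.

```python
# Illustrative sketch: modularity Q of a given clustering, using the null model
# P_ij = k_i k_j / (2m) described above.
import numpy as np


def modularity(M, labels):
    """Q = 1/(2m) * sum_ij [M_ij - k_i k_j / (2m)] * delta(c_i, c_j)."""
    M = np.asarray(M, dtype=float)
    k = M.sum(axis=1)            # (weighted) degree of each vertex
    two_m = k.sum()              # 2m = total weight of all edges, counted twice
    labels = np.asarray(labels)
    delta = (labels[:, None] == labels[None, :]).astype(float)
    return float(np.sum((M - np.outer(k, k) / two_m) * delta) / two_m)


# Example: one community containing all vertices gives Q = 0, while a
# block-structured matrix gives a clearly positive modularity.
M = np.block([[np.ones((3, 3)), 0.01 * np.ones((3, 3))],
              [0.01 * np.ones((3, 3)), np.ones((3, 3))]])
np.fill_diagonal(M, 0.0)
print(modularity(M, [0] * 6))             # ~0 for the trivial partition
print(modularity(M, [0, 0, 0, 1, 1, 1]))  # close to 0.5 for the two blocks
```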
Appendix D: Example of the evolution of the clustering in the dynamical case

Fig. 8 shows the evolution of the clustering for t = 10, t = 10^3 and t = 10^7. At short times, correlations start to build up locally, resulting in the formation of three large clusters. At intermediate times these blocks start to break up as the correlations become more non-local. Inter-cluster correlations (corresponding to "off-diagonal" elements of the correlation matrix) become more important, resulting in a decrease of the modularity. At long times, this process continues to unfold, with a further fragmentation of the cluster structure. However, we note that, despite longer-range correlations, a clear cluster structure is present, and the correlations are not completely scrambled. Moreover, the inter-cluster interactions are more pronounced in the case of the correlation functions than in the case of the TSMI.
Appendix E: Scaling of the modularity with system size

For a system where the optimal decomposition yields N clusters, the modularity can be written in the following way [66]:

Q = Σ_{i=1}^{N} [ e_i/m − (d_i/(2m))² ],     (E1)

where the sum runs over the clusters. In the formula above, d_i denotes the total degree of the nodes in cluster i, d_i = Σ_j k_j δ(c_j, c_i) in the notation of the main text, e_i is the number of edges in cluster i and m is, as in the main text, the total number of edges. Defining ē = (1/N) Σ_i e_i and d̄ = (1/N) Σ_i d_i we obtain, for comparable clusters,

Q ≈ N [ ē/m − (d̄/(2m))² ].     (E2)

We now introduce the quantity e_out, which is the average weight going out of each cluster:

e_out = (1/N) Σ_i (d_i − 2 e_i).     (E3)

Using the fact that d̄ = 2ē + e_out and m = (1/2) N d̄, we obtain:

Q ≈ 2ē/(2ē + e_out) − N (2ē + e_out)² / [N² (2ē + e_out)²] = 2ē/(2ē + e_out) − 1/N.     (E4)

Finally, noting that the number of clusters N is proportional to the system size, we recover the scaling of the main text:

Q ∼ q − a/L.     (E5)
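The coefficient a entering the threshold Q_th(L) = α(1 − a/L) can be extracted from a simple linear fit of the measured modularity against 1/L, as in the following sketch (the data points are made up for illustration).

```python
# Illustrative sketch: fit the finite-size scaling Q ~ q - a/L to modularities
# measured at several system sizes (hypothetical numbers).
import numpy as np

L_values = np.array([8, 10, 12, 14, 16], dtype=float)
Q_values = np.array([0.42, 0.46, 0.49, 0.51, 0.52])   # placeholder data

# linear fit of Q against 1/L: Q = q - a * (1/L)
slope, q = np.polyfit(1.0 / L_values, Q_values, 1)
a = -slope
print(f"q = {q:.3f}, a = {a:.3f}")
```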
|
2021-08-10T01:16:03.295Z
|
2021-08-06T00:00:00.000
|
{
"year": 2021,
"sha1": "3704f0cf6f6b4deb30614cdd0708fb66f4d787c0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3704f0cf6f6b4deb30614cdd0708fb66f4d787c0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
253784202
|
pes2o/s2orc
|
v3-fos-license
|
A Study on a HMM-Based State Machine Approach for Lane Changing Behavior Recognition
In recent years, the development of advanced driving assistance systems (ADAS) has grown significantly within the transportation industry to assist drivers in making safe maneuvers. Major components in developing these assistance systems are driving behavior prediction and recognition models. These models aim to infer driving behaviors based on different sources and parameters using complex mathematical models. Machine learning algorithms are being used increasingly to develop these models. In this contribution, two formerly developed trainable models, an improved Hidden Markov Model (HMM) and a state machine model, are combined for the recognition of three lane changing behaviors (lane change to the right (LCR), lane keeping (LK), and lane change to the left (LCL)). In the improved HMM, a prefilter is implemented on two sets of observation variables (input variables of the HMM): one consisting of distances and velocity deviation, while the other consists of time to collision (TTC) variables. To develop an optimal model, the thresholds of the prefilter are optimized using a Non-Dominated Sorting Genetic-Algorithm-II. The aim is to investigate whether the proposed model is able to produce estimations with high accuracy (ACC), high detection rates (DR), and low false alarm rates (FAR). In addition, the performance obtained by applying the prefilter to the two sets of variables is compared. Comparisons to an individual improved HMM and an ANN-based state machine approach are also addressed. The obtained results show that the application of the prefilter to the TTC variables improves the estimation performance. Furthermore, the proposed approach outperforms the other approaches.
I. INTRODUCTION
Based on a recent preliminary report by the European Commission on road accidents in 2021, an estimated 19,800 lives were lost due to traffic accidents in the EU [1]. Focusing on Germany, road accidents cost around 2,562 lives in 2021, according to the German Federal
Statistical Office (Destatis) [2]. While this is a decrease of 5.8% from the previous year, it remains a critical problem. Further research shows that the majority of deaths on country roads and motorways involve passenger vehicles, while in built-up areas, most victims are pedestrians [1]. A recent report by Destatis also shows that the majority of these accidents are caused by human driving behaviors, accounting for 71.8% of the accidents in Germany in 2020 [2]. This is due to the driver's inability to predict the right action when driving in a complex or different environment [2]. The development of Advanced Driving Assistance Systems (ADAS) plays an important role in tackling this issue and improving road and vehicle safety. ADAS provide support and assistance for drivers to maneuver safely in different environments. Some of the well-known ADAS are adaptive cruise control (which controls the speed of the vehicle while maintaining safety distances) and collision avoidance systems (which warn and alert drivers to avoid collisions). Driving behavior prediction and recognition models are important elements of ADAS. The incorporation of driving behaviors into ADAS allows early prediction of driving behaviors and dangerous situations. Driving behaviors are individualized; thus, classifying and predicting behaviors with an assistance system enables the related drivers to get better supervision while driving [3]. A system that is able to adapt to the individual behaviors of different drivers is considered ideal not only for increasing safety, but also for improving functionality (the development of smart vehicles) [3]. Machine Learning (ML)-based approaches are increasingly used for the development of prediction and recognition models, as these approaches are able to learn from given behaviors to predict similar behaviors or the driver's intentions. Some of the well-known ML approaches used in research contributions are Artificial Neural Networks (ANN) [4], Support Vector Machines (SVM) [5], and Hidden Markov Models (HMM) [6]. In [4], an ANN-based model is used to predict lane changing maneuvers based on different situations, while in [5], an SVM model is used to detect lane changing intentions. A lane changing prediction model is developed in [6] based on an improved HMM, which includes the use of a prefilter. Most of these contributions analyze lane changing behaviors because a significant proportion of accidents occur during a lane change [2].
However, several challenges exist related to the development of such models, like defining optimal parameters for optimal estimations. A reason for this is the lack of understanding of how to select relevant input variables and parameters to be optimized, due to the black box nature of the ML-based approaches. Two methods are usually applied to solve this problem. The first method combines two or more ML-based approaches to develop the model, as in [7], whereby an ANN is combined with an SVM to predict lane changing behaviors on a highway. The results of this work show that the combined model produced better estimation results than the individual models. A combination of ANN and HMM to develop a prediction model is realized in [8], whereby the HMM is used to predict different driving behaviors such as emergency steering, normal cornering (the ability to handle bends), and straight-line driving. Based on the predictions, an ANN model is used to obtain specific steering wheel angles for the different behaviors. The second method is based on feature selection techniques. Features used to describe a particular driving situation play an important role in driving behavior predictions. Thus, methods such as filter and wrapper methods are used in [9] to select the most appropriate features as input variables in the prediction and recognition models. The wrapper method considered in [9] employs the combination of an SVM with a recursive feature elimination method to extract features [10], [11]. Different vehicle variables are combined to develop features using a prefilter in [6], as part of an HMM for lane changing predictions. On the other hand, deep learning methods have been applied in recent years, as in [12], for the automatic extraction of temporal and spatial features, eliminating the need for manual extraction.
In this contribution, a trainable model is proposed by combining a state machine-based approach [13] and an improved HMM [6] for the recognition of lane changing behaviors. The lane changing behaviors considered are lane change to the right (LCR), lane keeping (LK), and lane change to the left (LCL). The state machine models the different lane changing behaviors as states, such that it describes the transition from one state to another based on specific conditions. These conditions are defined by the estimations of the HMM. A state machine model is considered as a new ML-based model as it is more interpretable than an HMM, while the HMM is known for its stochastic properties. To improve the performance of the HMM, previous works [6] and [14] considered a prefilter application. The prefilter processes variables to generate input features. Thus, in this work a prefilter is applied to two sets of variables as part of the HMM. One set comprises distance and velocity deviation variables, while the other set comprises time to collision (TTC) variables. Similar to [6], this work also considers the optimization of the prefilter parameters using the Non-Dominated Sorting Genetic-Algorithm-II (NSGA-II). Different from the previously mentioned literature, a state machine-based model [13] is combined with an HMM to formulate a new ML model for lane changing behavior recognition. The objective is to develop an effective model with improved performance in accuracy (ACC), detection rates (DR), and false alarm rates (FAR). Performance comparisons between the application of the prefilter to the two sets of variables are realized as well for evaluation. The aim is also to test the generalizability of the model, such that the same parameter values can also be used to obtain optimal estimations for different drivers. In addition, comparisons between the proposed model, an individual improved HMM model [6], and a previously developed ANN-based state machine model [15] are performed to evaluate the effectiveness of the new model. This paper is organized as follows: in Section II the methodology of the state machine model, the HMM, and the improved HMM is described. Thereafter, the HMM-based state machine model is introduced in Section III. The feature variables used and the optimization process of the design parameters are described in this section as well. In Section IV, the application of the method is described, which includes the experimental setup, data processing, as well as the training and test processes. The experimental results are discussed in Section V. Finally, a conclusion is summarized in Section VI.
A. STATE MACHINE MODEL
State machines are used to model behaviors using a discrete number of states. A typical state machine model can either remain in the same state or describe the transition to another state based on a set of inputs and conditions. In this contribution, the transition conditions and parameters of the model are defined by the designers. One of the benefits of using a state machine model is its simple design process and flexibility [16]. Another advantage is its easy state reachability, as the states can be defined finitely [17]. Nevertheless, a state machine-based model as an ML model for behavior estimation has not been widely applied in research. So far, only a few research works have applied this model in various areas, such as in tribology experiments to develop a lifetime model based on acoustic emission data [18] and in driving behavior experiments to develop driving behavior recognition models [13], [15]. Typical state machine models are developed in [18] and [13], whereby the conditions for state transitions are based on threshold limits of certain variables. For example, in [13] different threshold conditions of driving variables (values of input variables are higher or lower than the threshold values) define the state transitions.
In this contribution, the state machine-based ML model structure developed in [13] for the recognition of lane changing behaviors is adapted. The three lane changing behaviors estimated are represented by three states. In the developed model, the states are the output of the model at a particular time point. Based on Fig. 1, if the current estimated state is LK, the next possible estimations are either switching to states LCR, LCL (estimating the driver performs a lane change maneuver) or remaining in the same state (estimating the driver remains in the same lane) depending on the transition conditions. On the other hand, if the current estimated state is LCR or LCL, the next possible estimations are remaining in the same state (driver performs further lane changes in the respective direction) or to switch to state LK (the lane change is over).
B. HIDDEN MARKOV MODEL
The HMM is a probabilistic graphical model used to represent behavioral changes (see Fig. 2). To apply the HMM-based model for the recognition of the lane changing behaviors, the model is first trained with the Baum-Welch algorithm (a special case of EM) to estimate the parameter set λ. Given an observation sequence and the possible hidden state sequence, λ is estimated through learning to best fit both sequences. Then, using the Viterbi algorithm, the most probable lane changing behavior sequence is estimated based on the saved parameters.
The transition probability of switching from S_i to S_j (a switch from one behavior to another) can be formulated as

a_ij = P(q_{t+1} = S_j | q_t = S_i).

The probability of being in S_i at time t and changing to S_j at time t + 1, given the observation sequence O, is defined as

η_t(i, j) = P(q_t = S_i, q_{t+1} = S_j | O, λ),

whereby q_t is the state at time t. Hence, the expected number of transitions from S_i to S_j is the sum of η_t(i, j) over all time steps, while the expected number of transitions from S_i is the sum of all transitions out of S_i. Thus, a_ij is estimated as

a_ij = Σ_{t=1}^{T−1} η_t(i, j) / Σ_{t=1}^{T−1} Σ_j η_t(i, j),

whereby T is the time length of the full drive.
Here, χ_t(i) is the probability of being in S_i at time t, formulated as

χ_t(i) = P(q_t = S_i | O, λ) = Σ_j η_t(i, j).

Thus, b_ki can finally be defined as

b_ki = Σ_{t: O_t = v_k} χ_t(i) / Σ_{t=1}^{T} χ_t(i),

such that the sum of χ_t(i) over the time steps at which the observation v_k occurs is divided by the sum of χ_t(i) over the entire time length of the drive. Based on the defined a and b, the HMM parameter set is generated using the Baum-Welch algorithm. The next driving state is determined based on the saved HMM parameters and the given observation sequence by employing the Viterbi algorithm. Here, the maximum probability over all previous state sequences leading to the next state, q_t(j), is considered to denote the most probable path at time t. So, q_t(j) is formulated as

q_t(j) = max_{q_1,…,q_{t−1}} P(q_1, …, q_{t−1}, q_t = S_j, O_1, …, O_t | λ).

The HMM possesses certain benefits such as its ability to analyze time series data and its stochastic characteristics [19], [20]. Since upcoming behaviors are stochastic and only depend on the present state, the HMM is a suitable choice for driving behavior estimation. The HMM can also handle temporal pattern recognition [20].
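For illustration, the following minimal sketch (not the authors' implementation) shows how the Viterbi decoding step can be realized once a parameter set λ = (A, B, π) has been trained; the toy matrices at the end are placeholders.

```python
# Illustrative sketch: Viterbi decoding of the most probable hidden state sequence
# from an HMM parameter set lambda = (A, B, pi), as used to infer the lane
# changing behavior sequence from a discrete observation sequence.
import numpy as np


def viterbi(obs, A, B, pi):
    """obs: observation indices; A[i, j] = a_ij; B[i, k] = P(obs k | state i)."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))           # best log-probability ending in each state
    psi = np.zeros((T, N), dtype=int)  # back-pointers
    with np.errstate(divide="ignore"):
        logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA   # scores[i, j] for i -> j
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = np.max(scores, axis=0) + logB[:, obs[t]]
    states = np.zeros(T, dtype=int)
    states[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        states[t] = psi[t + 1][states[t + 1]]
    return states  # e.g. 0 = LCR, 1 = LK, 2 = LCL


# toy usage with three hidden states and an already-trained (A, B, pi)
A = np.array([[0.90, 0.10, 0.00], [0.05, 0.90, 0.05], [0.00, 0.10, 0.90]])
B = np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.2, 0.7]])
pi = np.array([0.1, 0.8, 0.1])
print(viterbi([1, 1, 0, 0, 1, 2, 2], A, B, pi))
```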
C. IMPROVED HMM WITH PREFILTER
A conventional HMM may yield poor performance if the features used are not accurate enough. Therefore, for performance improvement, various approaches have been established, such as combinations of the HMM with other methods and HMM-derived methods [21]. The combinations of the HMM with other methods include ANN-HMM [22], Fuzzy Logic (FL)-HMM [22], and Gaussian Mixture Model (GMM)-HMM [23]. These methods use results from one method as input to the other to determine the final behavior estimation. The other method is also used for determining the parameters and classifying different driving styles, behaviors, or situations. Thus, the combined methods consider the advantages of both the HMM and the other methods to determine the final outcome. On the other hand, the HMM-derived methods such as the Hierarchical HMM [24] and the Bayesian Nonparametric HMM consider the time series property of the HMM [25]. The general idea of HMM-derived methods includes partitioning behaviors into several task layers. In these methods, the initial layer is used for determining different driving variables like acceleration, while the higher layer uses the results from the initial layer to estimate the corresponding driving behavior. A new and improved HMM-derived method developed in [6], which includes the application of a prefilter, is utilized in this work. The prefilter is applied to the observation variables of the HMM.
In the HMM, the observation variables are dynamic and change with time. Changes in the observation parameters change the observation vector. To simplify the model, a prefilter is applied to the data of the observation variables to quantize the variables into a feature vector. The feature vector is used to determine different driving situations [6]. The prefilter divides the driving variables into segments with thresholds, such that each segment represents an observation. The thresholds are defined using optimization to develop the observations and, ultimately, the observation sequence.
III. HMM-BASED STATE MACHINE MODEL
A new HMM-based state machine model is introduced in this section. Here, two trainable systems, a state machine and an improved HMM, are combined to develop a model that recognizes lane changing behaviors. The state machine model describes the transition between the states, while the estimations of an improved HMM define the transition conditions. The transition conditions differ from those in [13], which uses threshold conditions instead. The structure of this model is similar to the ANN-based state machine model [15], which instead uses the ANN estimations as the transition or remaining conditions. Driving decisions depend on environmental variables as well as individual driving behaviors. Hence, environmental variables describing the relationship between the ego vehicle and the surrounding vehicles are selected as inputs.
A. HMM-BASED STATE MACHINE APPROACH
Based on Fig. 3, for a transition from LCR or LCL to LK to occur, the estimation of the HMM should also be LK. On the other hand, for a transition from LK to LCR or LCL, the HMM estimation should be LCR or LCL, respectively. If the HMM estimation is the same as the current state or the conditions are not met, the model remains in the same state. The conditions for state changes are based on the aforementioned HMM mathematical process. The transition conditions are summarized in Table 1.
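A minimal sketch of this transition logic is given below (hypothetical code, not from the paper); it is one straightforward reading of the conditions summarized in Table 1, where the state machine keeps the current lane changing state and only switches when the HMM estimation allows the transition.

```python
# Illustrative sketch of the HMM-based state machine transition logic.
LCR, LK, LCL = 0, 1, 2


def next_state(current_state, hmm_estimate):
    """Return the next state of the HMM-based state machine."""
    if current_state == LK and hmm_estimate in (LCR, LCL):
        return hmm_estimate          # LK -> LCR or LK -> LCL
    if current_state in (LCR, LCL) and hmm_estimate == LK:
        return LK                    # lane change finished
    return current_state             # condition not met: remain in the same state


def run_state_machine(hmm_estimates, initial_state=LK):
    """Apply the state machine to a whole sequence of HMM estimations."""
    states, s = [], initial_state
    for est in hmm_estimates:
        s = next_state(s, est)
        states.append(s)
    return states
```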
1) DATA SELECTION
As mentioned previously, only environmental variables are used as inputs for the model, as these variables provide information that mainly affects human driving decisions. Environmental variables are divided into two types: the state of the ego/surrounding vehicles and the driver's operational information. Accordingly, two models are developed using two sets of input variables. Model I uses distances, velocity deviation, and the current lane as inputs, while model II uses TTC and the current lane as inputs. The selected variables of both models best describe the current driving situation, thus affecting the driving behaviors (given in Tables 2 and 3). The variables are selected as they have a strong influence on the driver's decision to make a lane change as well as the ability to describe the relationship between the ego vehicle and the surrounding vehicles [3], [6].
A prefilter using two thresholds is applied on the distance and velocity variables for model I and on the TTC variables for model II. Each variable is divided by the prefilter into three segments [6]. As for the current lane number, the values are fixed indicating the specific lane of the ego vehicle.
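A minimal sketch of the prefilter idea is given below (hypothetical code and thresholds, not the authors' implementation): each continuous variable is quantized into one of three segments via two thresholds, and the segment indices are combined with the current lane number into a single discrete observation symbol for the HMM; the encoding into one symbol is an assumption made for illustration.

```python
# Illustrative sketch of the prefilter: two thresholds -> three segments per
# variable, combined with the current lane into one HMM observation symbol.
import numpy as np


def prefilter(value, low, high):
    """Quantize a variable into segment 0, 1 or 2 using two thresholds (low < high)."""
    return int(np.digitize(value, [low, high]))


def observation_symbol(variables, thresholds, lane, n_lanes):
    """Combine the segment indices of all variables and the lane into one symbol."""
    symbol = 0
    for value, (low, high) in zip(variables, thresholds):
        symbol = symbol * 3 + prefilter(value, low, high)
    return symbol * n_lanes + (lane - 1)


# toy example for model II: two TTC variables plus the current lane (3-lane road)
ttc_values = [1.8, 6.0]                      # seconds (made-up numbers)
ttc_thresholds = [(2.0, 5.0), (2.0, 5.0)]    # made-up threshold pairs to be optimized
print(observation_symbol(ttc_values, ttc_thresholds, lane=2, n_lanes=3))
```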
B. OPTIMIZATION
The prefilter thresholds are design parameters of the model that need to be optimized to generate an optimal λ = (A, B, π) for performance improvement. Here, the Non-Dominated Sorting Genetic-Algorithm-II (NSGA-II) is chosen to optimize these parameters. This technique is used due to its ability to handle multi-objective optimization problems (MOPs) [26] and its fast convergence [26], [27]. The parameters are defined using this approach, such that the objective functions are minimized.
To evaluate the model's performance, the ACC, DR, and FAR metrics [28], [29] are used by comparing the actual and estimated behaviors. The metric values are defined based on the True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN) values. Using LCR as an example to illustrate these values, TP is the number of events when both the actual and estimated maneuvers are positive (LCR), while FP is the number of events when the estimated maneuver is positive, but the actual maneuver is not. A similar concept applies to TN and FN. The formulations for the metrics are given as

ACC = (TP + TN) / (TP + TN + FP + FN),
DR = TP / (TP + FN),
FAR = FP / (FP + TN).

Therefore, the parameters are determined such that the model achieves in parallel high ACC, high DR, and low FAR. Objective functions similar to those in [13] and [15] are chosen for the NSGA-II; the objective functions given in (13) are used, whereby each function corresponds to a specific lane changing behavior. In Fig. 4, the optimization process of the prefilter thresholds is shown.
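The following minimal sketch (illustrative, not the authors' code) shows how the per-behavior counts and the ACC, DR, and FAR values can be computed; the FAR is taken here as the usual false-positive-rate definition FP/(FP + TN), consistent with the ROC analysis later in the paper.

```python
# Illustrative sketch: per-behavior TP/FP/TN/FN counts and the ACC, DR, FAR metrics.
import numpy as np


def confusion_counts(actual, estimated, behavior):
    """One-vs-rest counts for a single behavior label (e.g. LCR)."""
    actual, estimated = np.asarray(actual), np.asarray(estimated)
    pos_a, pos_e = actual == behavior, estimated == behavior
    tp = int(np.sum(pos_a & pos_e))
    fp = int(np.sum(~pos_a & pos_e))
    tn = int(np.sum(~pos_a & ~pos_e))
    fn = int(np.sum(pos_a & ~pos_e))
    return tp, fp, tn, fn


def acc_dr_far(actual, estimated, behavior):
    tp, fp, tn, fn = confusion_counts(actual, estimated, behavior)
    acc = (tp + tn) / (tp + tn + fp + fn)
    dr = tp / (tp + fn) if (tp + fn) else 0.0
    far = fp / (fp + tn) if (fp + tn) else 0.0
    return acc, dr, far


# example with 0 = LCR, 1 = LK, 2 = LCL
actual = [1, 1, 0, 0, 1, 2, 1, 1]
estimated = [1, 0, 0, 0, 1, 2, 1, 1]
for name, label in (("LCR", 0), ("LK", 1), ("LCL", 2)):
    print(name, acc_dr_far(actual, estimated, label))
```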
IV. APPLICATION OF THE METHOD
The application of the HMM-based state machine approach is realized in this section. The experimental setup for data collection is first described followed by the process of labeling lane changing behaviors. Next, the training and test processes are explained. The training is done using the driving data from each individual driver for defining the optimal design parameter values. Then, the test is done based on the trained model using different data by the same driver and other drivers.
A. DESIGN OF THE EXPERIMENT
The driving experiments are conducted using a driving simulator, SCANeR™, in a laboratory environment (Fig. 5). The completion of a lane change is defined by the change of the ego vehicle's lane number l at time t_lane (Fig. 6). An LCL is defined by an increase in l, while an LCR is defined by a decrease. When l is the same as at the previous time point, an LK is denoted. The beginning of a lane change is defined by the time of the indicator activation, t_indicator. Here, the interval between t_indicator and t_lane is defined as the lane change duration, denoted by t_change [3], [6]. From the experiments, this duration is between 2 and 3 seconds, whereby the driver activates the indicator 2 to 3 seconds before performing a lane change. Different preset t_change values of 2 s, 2.5 s, and 3 s are therefore tested for labeling behaviors to evaluate the impact of t_change on the lane change recognition abilities. To do so, the HMM developed in [6] is used to estimate the lane changing behaviors based on the variables of models I and II. The previously mentioned metrics are used to evaluate the estimations. The training and test split ratio is also 70:30. In Table 4, the average performance values based on all drivers using the different t_change values are given. Using 2.5 s to label the behaviors generates results closest to the actual behavior for both models, as most metrics have the highest values. Hence, a t_change of 2.5 s is used to define a lane changing behavior in this research. Inaccurate data are removed as part of the labeling process. For example, a driver may not intend to change lanes but drive over the white lines or slightly overlap into the next lane due to driving errors; a lane change is then detected even though it does not reflect the driver's actual behavior. Hence, these inaccuracies are removed [6].
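The labeling rule can be sketched as follows (hypothetical helper, not the authors' code): each time step is labeled LCR, LK, or LCL from the recorded lane number, with the start of a lane change back-dated by t_change = 2.5 s before the lane number changes; in the paper the start is anchored at the indicator activation, so the fixed window here is an illustrative simplification.

```python
# Illustrative sketch: label lane changing behaviors from the lane number series.
import numpy as np

LCR, LK, LCL = 0, 1, 2


def label_behaviors(lane_numbers, dt=0.05, t_change=2.5):
    """lane_numbers: lane index per sample; dt: sampling period in seconds."""
    lane = np.asarray(lane_numbers)
    labels = np.full(len(lane), LK, dtype=int)
    window = int(round(t_change / dt))
    for t in range(1, len(lane)):
        if lane[t] != lane[t - 1]:
            behavior = LCL if lane[t] > lane[t - 1] else LCR
            labels[max(0, t - window):t + 1] = behavior
    return labels


# toy example: the vehicle moves from lane 2 to lane 3 (an LCL)
lanes = [2] * 100 + [3] * 100
print(label_behaviors(lanes)[40:110])
```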
C. TRAINING AND TEST
The training and test processes are explained here.
Training phase: The purpose of the training is to develop a model with optimal parameters. The training process is described in the following manner:
1) Data (input variables) and the actual lane changing behaviors are loaded into the model.
2) The design parameters (prefilter thresholds) are generated using NSGA-II and are then used to define the observation sequence in the HMM. Based on the defined observation sequence and the actual driving behaviors, the HMM is trained by optimization to define the optimal HMM parameters.
3) Next, the hidden states are estimated using the HMM parameters.
4) The state machine defines the final estimations using the estimations of the HMM from the previous step.
5) The actual and estimated behaviors are compared to evaluate the ACC, DR, and FAR values. Using these values, the objective functions are calculated.
6) Steps (1) to (5) are repeated until convergence, such that optimal parameters are defined that minimize the objective functions. For the number of iterations, a generation size of 200 and a population size of 90 are used in NSGA-II (a minimal sketch of this optimization loop is given after this section).

Test phase: The test is performed using data not used in training, as mentioned previously. Thus, based on the trained models for each driver, the lane changing behaviors are estimated using the test data of the corresponding driver. The estimation performances are then evaluated using the mentioned metrics. To analyze the generalizability of the model, the trained parameters based on a specific driver are not only tested using the corresponding driver's test data, but also using the test data of other drivers.
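The sketch below illustrates how such a threshold optimization loop could be set up with the pymoo implementation of NSGA-II (population size 90, 200 generations as stated above). It is a hedged, hypothetical example: the function evaluate_pipeline is only a placeholder for the actual chain of prefilter construction, HMM training, state machine evaluation, and metric computation, and the variable counts and bounds are illustrative.

```python
# Hypothetical sketch of the NSGA-II threshold optimization using pymoo.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize


def evaluate_pipeline(threshold_pairs):
    """Placeholder: prefilter -> HMM training -> state machine -> metrics.

    Should return one objective value per lane changing behavior,
    e.g. combining (1 - ACC), (1 - DR) and FAR for LCR, LK and LCL."""
    return np.random.rand(3)  # replace with the actual evaluation chain


class PrefilterThresholds(ElementwiseProblem):
    def __init__(self, n_variables=2, xl=0.0, xu=10.0):
        # two thresholds per prefiltered driving variable (illustrative bounds)
        super().__init__(n_var=2 * n_variables, n_obj=3, xl=xl, xu=xu)

    def _evaluate(self, x, out, *args, **kwargs):
        pairs = np.sort(np.asarray(x).reshape(-1, 2), axis=1)  # enforce low < high
        out["F"] = evaluate_pipeline(pairs)


result = minimize(PrefilterThresholds(),
                  NSGA2(pop_size=90),
                  ("n_gen", 200),
                  seed=1,
                  verbose=False)
print(result.X)  # Pareto-optimal threshold sets
print(result.F)  # corresponding objective values
```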
V. RESULTS
The performance of the proposed method is presented in this section. Here, the performance based on models I and II is given and compared to analyze the effects of the prefilter on the different variable sets. The results presented are based on using 2.5 s for the lane changing duration, as it generated the best performance values when tested with the proposed approach for both models. This shows that the estimations are closest to the actual behaviors. Performance comparisons between the proposed approach and other approaches (HMM and ANN-based state machine model) are presented as well to assess the capabilities of the approach.
A. EVALUATION OF RESULTS
The average values of ACC, DR, and FAR based on the test data for both models are given in Table 5 and Fig. 7. In the table, the average values (test data) are the average metric values when each trained model is tested with the corresponding driver's test data.
As an example, the actual and estimated lane changing states corresponding to the test data set of driver 2 (based on the trained model of driver 2) from both models are plotted in Fig. 8 and Fig. 9. The red dotted lines represent the actual driving states, while the blue lines are the estimated states. The different driving states are represented on the vertical axis, whereby 1 is LCR, 2 is LK, and 3 is LCL, while the horizontal axis represents the time length of the drive in seconds, s. The data are recorded every 0.05 s. The figures show the proximity between the actual and estimated states.
As mentioned, a generalizability test is performed as well, for which the results are presented in Table 6 and Fig. 10. In this test, the trained model of a specific driver is tested with the test data of other drivers, with the aim to analyze whether the performance values are close to the values obtained in Table 5. The average values (other test data) are the average values when the test data of other drivers are used for testing the trained model of each specific driver.
Using drivers 1 and 2 as examples for the generalizability test, the estimated and actual lane changing states based on the test data of driver 2 (tested with the trained model of driver 1) are plotted in Fig. 11 and Fig. 12.
Based on the results in Table 5, both models generate high ACC and DR and low FAR, with the exception of FAR_keep. Based on the obtained results, it can be stated that model II has a higher performance than model I in most metrics. From this observation, it can be concluded that the prefilter application to the TTC variables tends to have a positive effect on the performance. The generalizability test results (Table 6) show that model II also outperforms model I here, except for ACC_right, FAR_right, and FAR_keep. However, the metric values based on Table 5 are higher.
B. COMPARISONS WITH OTHER APPROACHES
Comparisons of the proposed approach with an improved HMM approach and an ANN-based state machine approach are also part of the evaluation process. The ANN-based state machine approach is developed in [15], while the HMM is based on [6]. The prefilter thresholds are optimized as well in the improved HMM, while the ANN-based state machine uses biases and weights (as ANN parameters) defined by optimization to develop estimations. Model II is used for the comparisons as it has a better performance than model I. All other approaches also use the same input variables as model II. In Table 7, the average metric values based on each driver's test data are shown. In addition, the receiver operating characteristic (ROC) curves for the three approaches based on the different lane changing behaviors are given in Fig. 13 to Fig. 15. The area under the curve (AUC) values of each method based on the different behaviors are also presented in Table 8.
From the results, the HMM-based state machine approach has a better performance than the other two approaches in most metrics. On the other hand, the individual HMM approach outperforms the ANN-based approach, except for ACC_right, FAR_right, ACC_keep, and DR_keep. An observation based on the results is that the FAR_keep values tend to be high in all approaches. The ANN-based state machine approach also produces a lower DR_left value compared to the other HMM-based approaches. Based on the ROC curves, the HMM-based state machine model has the best performance in LCR and LCL, as the model generates the highest true positive rate corresponding to a related low false alarm rate. The AUC values of the proposed approach are also the highest in LCR and LCL (Table 8), while the AUC of the individual improved HMM approach is the highest in LK. In general, it can be concluded that the proposed approach improves the performance of the individual improved HMM approach and the ANN-based state machine approach, showing its effectiveness.
VI. SUMMARY AND CONCLUSION
Driving behavior prediction accuracy often relies on the structure of the ML approaches or the input features used. In this contribution, a new approach combining a previously developed state machine model and an improved HMM for the recognition of lane changing behaviors is introduced. The state machine models the lane changing behaviors using three states, such that the model can express the transition between states or remain in the same state for the estimation of the final lane changing behaviors. The modeling of the transition is realized based on specific conditions defined by the HMM estimations. Here, the HMM estimates lane changing behaviors as well. The HMM differs from a conventional HMM in that a prefilter with unknown threshold values is applied to the features used with it. Two feature sets consisting of distance as well as velocity deviation variables (model I) and TTC variables (model II) are considered for the application of this method.
Based on the results obtained, model II produces better ACC, DR, and FAR than model I. Nevertheless, both models generate acceptable results. In general, it can be concluded that using TTC variables improves the recognition performance in both experiments. A generalizability test is performed as part of this work, which shows that model II has a better generalization ability than model I. For further evaluation, the developed approach is compared to an individual HMM approach and an ANN-based state machine approach using model II. The results show that the HMM-based state machine model outperforms the other two methods.
RUTH DAVID is currently pursuing the Ph.D. degree with the Chair of Dynamics and Control, University of Duisburg-Essen, Germany. Her current research interests include recognition and prediction of driving behaviors based on applications of machine learning methods. She currently focuses on developing a state machine-based model as a new machine learning approach for the prediction and recognition of behaviors.
DIRK SÖFFKER (Member, IEEE) received the Ph.D. degree (Dr.-Ing.) in mechanical engineering and the Habilitation degree in automatic control/safety engineering from the University of Wuppertal, Germany, in 1995 and 2001, respectively. He has been the Head Professor of the Chair of Dynamics and Control, University of Duisburg-Essen, Germany, since 2001. His current research interests include diagnostics and prognostics, modern methods of control theory, safe human interaction with technical systems, safety and reliability control engineering of technical systems, and cognitive technical systems.
|
2022-11-23T16:05:48.155Z
|
2022-01-01T00:00:00.000
|
{
"year": 2022,
"sha1": "b1e8f041a4c44192b12133992bb3f3e158ccea1e",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09957049.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "52c1d20099b46193ef53cf10b4bd396cb8ac0fee",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
}
|
159143258
|
pes2o/s2orc
|
v3-fos-license
|
DYNAMICS OF AUSTRIAN FOREIGN DIRECT INVESTMENT AND THEIR INFLUENCE ON THE NATIONAL ECONOMY
The purpose of the paper is to analyse the dynamics of Austrian foreign direct investments (FDI) and its role in the development of the national economy. The subject of research is the main components of Austrian foreign direct investments 2005–2017 and their impact on the national economic development. Methodology. Methods of comparative and statistical analysis were used to study the dynamics, structure, and economic impact of Austria’s FDI. Special attention was given to the dynamics of FDI inflows and outflows, accumulated investments, cross-border mergers and acquisitions, “Greenfield Investments”, the impact of FDI on the balance of payments and international investment position of Austria. The method of mathematical modelling in economics, in particular, regression analysis, based on annual data for the period from 2005 to 2015, was applied to assess the relationship between the main components of foreign direct investments and the indicator of the country’s economic growth – the gross domestic product (GDP) per capita. The following indicators were selected as independent variables: FDI liabilities, assets of FDI funds, as well as the balance of primary incomes. The dependent variable was the GDP per capita. It should be noted that such indicators as FDI assets and liabilities of FDI funds were not represented in the final model because of the high correlation between independent variables, and the relationship between GDP per capita and net foreign assets was insignificant. The assets of foreign direct investment funds have the greatest impact on the economy of the country, and the relationship between these indicators is direct. A slightly weaker relationship is observed between the balance of primary incomes and GDP per capita. The relationship between them is also direct. Liabilities of FDI have the least impact on the dependent variable in comparison with the other two. Findings. The growth of foreign direct investments of Austria, as a result of liberalization of the world and European economy, as a whole has a positive impact on its GDP. Thus, activities that are aimed at stimulating investments are fully justified and understandable. The paper determines important factors of Austria’s investment activity and attractiveness, as well as the main factors that influence the dynamics of FDI. The most important among them are: the level of education, the internal coefficient of investment, political stability, the terms of trade, the state of the financial sector. The results of the analysis show that Austria has a high level of business activity; the government conducts activities to stimulate investment in R&D and in high-tech enterprises, to create new jobs, to protect the environment etc. The results of the study allow forecasting a gradual improvement in the balance of the country’s primary incomes, which will contribute to the further growth of the current account surplus and will strengthen the positive influence of Austria on the development of the European and global financial systems. Practical implications. The results of the study will help to increase: the effectiveness of the investment policy of Austria to stimulate the country’s economic growth; the international competitiveness of national companies on European and world markets; the level of stability of Austria’s financial system to external shocks.
Introduction
Austria is a highly developed industrial country. Its geopolitical position between Western Europe (industrialized countries) and the burgeoning markets of Central, Eastern, and South East Europe (CESEE) has provided a high degree of economic, social, and political integration with EU countries and with those CESEE countries that are not EU members. Due to the access to external markets in Central and Eastern Europe, the enlargement of the EU in 2004 and 2007 has consolidated Austria's investment attractiveness. However, the enlargement of the EU has also strengthened the investment positions of Austria's competitors. As a result, Budapest, Prague, and Bratislava nowadays compete with Vienna for foreign investors.
Nowadays, Austria has created especially favourable conditions to attract foreign investors. The priority sectors for investment in Austria are high-tech and innovative industries. There are four major types of incentives for investment projects of companies registered in Austria: regional incentives, incentives for small and medium-sized enterprises, technology incentives, and environmental incentives. The government provides assistance in the form of subsidies to entrepreneurs who operate investment projects aimed at supporting the creation of new jobs. The most common way to support foreign investment is to provide a bank guarantee for a preferential loan granted for the implementation of investment projects. In the case of highly risky or very large investment projects, conditions for participation in the distribution of a company's profit or additional non-risky rewards can be discussed.
Along with the support programs, political stability, administrative transparency, the rule of law, the business environment, tax legislation, the protection of investors' rights, the stability of legislation, etc. play a very important role in Austria's investment attractiveness. Bilateral agreements, mainly with less-developed countries, also have a great influence on investment attraction in Austria (U.S. State Department, 2015).
The Austrian Business Agency is a special structure for attracting enterprises and investments, which organizes contacts with potential investors and informs the business environment about Austria as a business platform. In general, since the foundation of the Agency in 1982, nearly 49,803 new jobs have been created and investments in the amount of almost €7.25 bn have been attracted (Austrian Business Agency, 2018).
In addition to this Agency, the Austrian Economic Chambers (WKO), whose membership is compulsory for all Austrian companies, also help Austrian companies in investing abroad. There is the government structure "Austrian Business Agency", as well as similar departments in each of the nine federal constituencies (counties), which provide informative business support and offer different stimulating measures.
Meanwhile, foreign investors should take into account that Austria retains the essential features of the national economic model established after the Second World War. For many years, the state played a major role in the country's economy. However, the state's role has been significantly reduced as a result of structural adjustment policies and the large-scale privatization of the state holding company Österreichische Industrieholding (ÖIAG).
Literature review
Foreign direct investments play an important role in the development of national economies. Therefore, different aspects of FDI movement are investigated by both academics and international and national specialized institutions (IMF, UNCTAD, ECB, EBA, OECD, etc.).
Most of the modern applied and theoretical research on FDI movement is dedicated to developing countries. However, proper attention is not paid to the analysis of the dynamics and structure of FDI in developed countries, including Austria. Authors focus their attention on the impact of FDI on employment and remuneration in the context of globalization (Onaran Ozlem, 2008), European economic integration (Beer et al., 2017) and disintegration processes (Sydorova, Yakubovskiy, 2017), and the role of FDI in Austria's innovation potential (Lomachynska, Podgorna, 2018); they assess the impact of FDI from Austria on the development of the economies of the recipient countries (Petrakos et al., 2000; Kurtovic et al., 2016; Yakubovskiy et al., 2018) and analyse the impact of the global financial crisis on the dynamics of GDP and FDI (Simionescu, 2016). Based on these studies, it can be summarized that the growth of Austria's FDI as a result of the liberalization of the world and the European economy had a positive impact on its GDP; had a positive influence on the development of the service sector, but did not have a significant impact on the export-oriented branches of the real sector; had a negative impact on wages and employment in Austria; and contributed to the growth of transition economies and their effectiveness. The global financial crisis and the decline in GDP growth and FDI in the CESEE countries negatively affected the investment activity and profitability of Austria.
At the same time, the changes in the dynamics of the inflow and outflow of Austrian FDI in modern conditions, as well as their factors, are insufficiently covered.
The paper's aim is to investigate the dynamics of the main components of Austria's FDI in 2005-2015.
Dynamics of Austria's FDI and their main components
The data on Austria's volume of foreign direct investment show a declining tendency over recent years. Figure 1 demonstrates the dynamics of inward and outward FDI in 2005-2015. During the pre-crisis period 2005-2007, inward FDI increased and reached $13.7 bn in 2007, while outward FDI reached $19.7 bn. In the post-crisis period, FDI flows declined rapidly, and the minimal values were seen in 2010: inward FDI was $2.6 bn and outward FDI was $9.6 bn.
In spite of its investment attractiveness, the dynamics of FDI in general show that Austria is more a donor than a recipient of foreign direct capital. Sharp changes in the dynamics of FDI can be observed only in 2014, when the outward flow decreased by about three times, while the inward flow went up by more than 1.5 times. These changes can be explained by Austria's victory in the "Eurovision" Song Contest in 2014. As the winner, Austria hosted this event in Vienna in 2015. The organization of such an event requires large-scale preparation of the accompanying infrastructure, as well as large capital investment in the preceding year (2014). In 2015, Austria's FDI flows returned to their usual ratio, with the inflow being less than the outflow. Nevertheless, the inflow of FDI was reduced to $3.8 bn, while the outflow increased to $12.4 bn. Compared with 2013, both indicators declined by nearly $2 bn. This situation can be explained by the European migration crisis at the beginning of 2015, which arose as a result of the rapid increase in the flow of refugees and illegal migrants to the European Union from North Africa and the Middle East. Political and economic instability in Ukraine and low growth prospects in the countries of South-East Asia also had a negative impact on the dynamics of Austrian FDI in 2013-2014.
Considering the changes in Austria's FDI stocks (Fig. 2), it should first of all be noted that despite their small volume compared to developed countries of Western Europe, such as Germany and Great Britain, their role in Austria's economy and their ratio to the country's GDP are no less important.
For the period from 1995 to 2015, the inward FDI stocks of Austria rose by nearly 9 times (from $19 bn to $164 bn), while Austria's stocks of foreign investments in other countries rose by almost 20 times (from $11.4 bn to $208.3 bn). This also confirms that Austria is gradually strengthening its position as an investor country.
Figure 2. FDI stocks of Austria
Source: compiled according to data (IMF, 2017)

Due to the territorial proximity and the absence of a language barrier, German companies are the leading investors in Austria's economy. As a result of the deepening of European integration in the last fifteen years, a significant share of investment is provided by the EU countries, primarily by Germany. However, while Germany earlier accounted for more than 40% of FDI in Austria, nowadays the share of German investors does not even reach 25%. The Netherlands, Italy, Luxembourg, and Russia are also among the leading investors in Austria.
The dynamics of cross-border mergers and acquisitions (M&A) of companies in Austria from 2005 to 2007 show that M&A transactions were conducted in the amount of $3.9 bn, which is 11 times less than in Germany and 39 times less than in the UK. There is a sharp increase in this indicator in 2014: the volume of sales of Austrian companies in M&A amounted to $3.07 bn, which corresponds to the trends of 2005-2007. In 2015, the volume of sales of Austrian companies in M&A fell by 3.5 times and reached only $849 m. In the years 2005-2015, the volume of purchases by Austrian companies in M&A was slightly higher than the volume of sales. This indicates the high activity of Austrian multinational companies. For the three years from 2005 to 2007, the total value of M&A transactions amounted to $5.6 bn. In 2013, this indicator increased significantly, up to $10.7 bn. This also confirms that, unlike many other participants in the euro area, Austria quickly and successfully coped with the consequences of the global financial and economic crisis of 2008. However, in 2014 the amount of purchase transactions for mergers and acquisitions of companies decreased to $345 m. Despite this sharp decline, the indicator began to recover in the following year, and by the end of 2015 merger and acquisition transactions amounting to $4.8 bn had been concluded.
The dynamics of greenfield investments ("empty lot" investments aimed at the creation of a new enterprise) demonstrate high activity in Austria during 2005-2015. From 2005 to 2007, $12.5 bn was invested in "empty lot" projects by Austrian investors, and high figures have been maintained over the past three years: $6.2 bn in 2013, $5.1 bn in 2014, and $5.7 bn in 2015 were invested in similar projects.
However, foreign investments in "empty lot" projects in Austria over the whole analysed period are significantly smaller than Austrian investments in similar projects abroad. For the whole period of 2005-2007, only $2.8 bn was invested in "empty lot" projects. At the same time, over the last three years, this indicator has not changed significantly and remains at the level of $1.1-1.8 bn.
For the growth of the "Greenfield Investment" in Austria, the government of the country offers foreign companies additional investment incentives, primarily within the framework of regional and innovation policies.Growing competition of the Central and Eastern Europe compels the Austrian government to reduce the tax burden for enterprises (IMF, 2017).The corporate tax rate is 25%, and by the estimation, it does not hinder to the development of foreign companies in Austria.In accordance with the Law on the Promotion of the Establishment of New Firms (Neugruendungs-Foerderungsgesetz), new companies are exempted from paying taxes when buying a land plot, a court fee for the registration of a firm in the commercial register and the land registry, and other fees.In addition, there is a federal agency for Investment Promotion "Austrian Business Agency" (Bellak Ch., 2010;Austrian Business Agency, 2018), as well as similar agencies in each of the nine subjects of the federation (lands) that provide business information support and offer other incentive measures.
In analysing the balance of payments of Austria, the main attention should first of all be paid to the current account (see Figure 3).
During the period of 2005-2015, the balance of the current account is positive and shows fluctuating dynamics. If in 2005 it was $6.2 bn, then in 2008 the maximum value of $19.3 bn was reached. However, as a consequence of the global financial and economic crisis, at the beginning of 2009 the current account balance decreased rapidly, almost by half, to $10.2 bn. The next recession is observed in 2011-2012, which can be explained by the European debt crisis and, as a consequence, the decline in business activity in Austria. However, in general, the balance of current operations of Austria over the whole analysed period has a positive value.

Figure 3. The net current account of Austria

Source: compiled according to data (IMF, 2017)

One of the most significant components of Austria's current account is the trade balance (Figure 4). The Austrian economy depends highly on foreign trade and is closely connected to the economies of other EU countries, especially Germany (33% of the trade turnover in 2015), Italy (6.2%), the USA (5.4%), and Switzerland (5.5%).
The volume of foreign trade in Austria in 2015 reached $232 bn, which is 2.6% more than in the previous year. The main export goods of Austria are chemical products (primarily pharmaceuticals), cars and their components, equipment, and paper products. Austria mainly imports machinery and equipment, automobiles, chemicals, metal products, oil and oil products, and food products. The slight increase in exports in 2015 was driven by the countries of Northern and Southern Europe and by Asia.
During the whole analysed period, the trade balance has taken both negative and positive values, and a close correlation of the trade balance with the current account can be noted. The trade deficit in Austria can be explained by the consequences of the global financial and economic crisis, the debt crisis, and uncertainty in the EU.
The balance of primary and secondary income also has a considerable impact on the Austrian balance of payments (see Figure 5). The primary income account reflects the amount of income that must be paid and received for the temporary use of labour, financial resources, or non-tangible non-financial assets (income from the use of natural resources by non-residents, etc.). These include wages, taxes on production, income from property (interest, dividends, rent), etc.
The balance of primary income has unstable dynamics, changing several times from a negative value to a positive one. In 2007, the value of primary income was negative (-$315 m), due to the growth in the number of labour migrants as a result of the EU enlargement and, as a result, an increase in the wages paid by the Austrian economy to non-resident employees. It is notable that a high value of the indicator can be seen in 2008, when the balance of primary income increased from -$315 m to $3.4 bn. These changes are explained by the dynamics of income from investments, including receipts from financial assets owned by residents of Austria located abroad. From 2011 to 2013, the balance of primary income remained positive, and then in early 2015 it rapidly turned negative. Such a situation is a result of Austria's long-term policy aimed at attracting investments. However, due to the negative impact of the European debt crisis, Austria's long-term stability rating was downgraded from AAA to AA+, which directly influenced Austria's investment attractiveness.
The financial account is also a very important component of Austria's balance of payments (Figure 6).
During the period of 2006-2009, the financial account of Austria had a stable position and fluctuated within the limits of $13.7-$15.5 bn. In 2010, this indicator went down by more than 4 times and reached $3.3 bn, after which it gradually grew to $5.44 bn in 2012. In 2013, there was a sharp increase of 2.5 times to $13.9 bn; in 2014 there was a rapid reduction by 28 times. However, in early 2015, this indicator rose again to $6.2 bn.
Austria's international investment position demonstrates the volume of external financial assets and liabilities of the economy at a certain period in time, which is formed as a result of external transactions estimated at current market value (at current market prices and exchange rates), as well as under the influence of other factors.The international investment position of Austria is shown in Figure 7.
Figure 7 presents the financial assets belonging to Austrians abroad (assets) and to non-residents in Austria (liabilities). From 2005 to 2012, the net international investment position of Austria was negative (liabilities exceeded assets by $10 m to $60 m), which indicates that Austria was a net debtor. From 2005 to 2007, both assets and liabilities almost doubled: assets from $651 m to $1,136 m, and liabilities from $714 m to $1,176 m. From 2008 to 2013, the volume of assets, both of Austrians abroad and of non-residents in Austria, remained within $1-$1.2 bn. In spite of the global financial and economic crisis and the European debt crisis, Austria's investment activity (both as donor and as recipient) was stable. From 2013 to 2015, Austria's assets began to exceed liabilities, and the balance increased from $5.8 m in 2013 to $8.9 m in 2014 and $10.8 m in 2015. At the same time, the total volumes of both assets and liabilities decreased: assets from $1.2 bn to $970.7 m, and liabilities from $1.2 bn to $960 m. Thus, the net investment position indicates that Austria is a net creditor and is confidently reinforcing this position.
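To make the net-debtor/net-creditor reading concrete, the following minimal Python sketch computes the net international investment position as assets minus liabilities; the yearly figures are illustrative placeholders, not the exact series behind Figure 7.

# Illustrative only: net international investment position (NIIP) =
# external assets - external liabilities. A positive NIIP marks a net
# creditor, a negative NIIP a net debtor. Figures below are placeholders.
iip = {
    # year: (assets, liabilities) in USD m (hypothetical values)
    2005: (651.0, 714.0),
    2013: (1100.0, 1094.2),
    2015: (970.7, 960.0),
}

for year, (assets, liabilities) in sorted(iip.items()):
    niip = assets - liabilities
    status = "net creditor" if niip > 0 else "net debtor"
    print(f"{year}: NIIP = {niip:+.1f} m USD -> {status}")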
Influence of the main components of foreign direct investments on Austria's gross domestic product per capita
In order to determine the impact of the growth of Austria's FDI on the economy as a whole, it is possible to construct a model describing the relationship between the main components of foreign direct investments and the indicator of the country's economic growth, the gross domestic product. The following indicators were selected as independent variables: FDI liabilities, assets of FDI funds, and the balance of primary incomes. The dependent variable is GDP per capita. It should be noted that indicators such as FDI assets and liabilities of FDI funds were not included in the model because of the high correlation between the independent variables, and the relationship between GDP per capita and net foreign assets was insignificant. Thus, the specified model has the following form:

GDP = β1*FDIi + β2*FDIso + β3*PIB,

where GDP is the gross domestic product per capita (USD), FDIi is FDI liabilities (USD bn), FDIso is assets of FDI funds (USD bn), and PIB is the primary income balance (USD bn).
The model was estimated on the basis of statistical data for 2000-2017; the indicators were obtained from the UNCTAD statistical database (UNCTAD, 2018). This regression model indicates the existence of a significant relationship between the dependent and the independent variables. The greatest influence on the economy of the country comes from the assets of foreign direct investment funds, and the connection is direct, i.e. with an increase in this indicator by one standard deviation, GDP per capita will increase by 0.755 standard deviations. This dependence is understandable because the economic development of Austria depends very much on the movement of foreign direct investment, namely, the country is more a donor than a recipient of FDI. The existence of a direct dependence is explained by the benefits received by the investing country as a result of reducing costs, obtaining tax benefits and subsidies, and a number of other advantages.
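For illustration, the sketch below shows how such a model with standardized ("beta") coefficients could be estimated in Python with statsmodels; the data frame, values, and variable names (FDIi, FDIso, PIB) are placeholders and assumptions, not the UNCTAD series actually used in the study.

# Illustrative sketch only: estimates a standardized OLS model of the form
#   GDP = b1*FDIi + b2*FDIso + b3*PIB
# using hypothetical placeholder data (NOT the study's UNCTAD dataset).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 18  # e.g. annual observations for 2000-2017

# Placeholder series; in the actual study these would come from UNCTAD statistics.
df = pd.DataFrame({
    "FDIi": rng.normal(10, 3, n),    # FDI liabilities, USD bn (assumed)
    "FDIso": rng.normal(20, 5, n),   # assets of FDI funds, USD bn (assumed)
    "PIB": rng.normal(1, 2, n),      # primary income balance, USD bn (assumed)
})
df["GDP"] = 40 + 0.8 * df["FDIso"] + 0.5 * df["PIB"] + 0.4 * df["FDIi"] + rng.normal(0, 1, n)

# Standardize all variables so the coefficients are comparable "beta" weights,
# i.e. the change in GDP per capita (in standard deviations) for a
# one-standard-deviation change in each regressor.
z = (df - df.mean()) / df.std(ddof=0)

X = sm.add_constant(z[["FDIi", "FDIso", "PIB"]])
model = sm.OLS(z["GDP"], X).fit()
print(model.params)    # standardized coefficients
print(model.rsquared)  # fit of the model

Standardizing all variables before fitting is what makes the coefficients directly comparable, which is how the 0.755, 0.493, and 0.372 values reported in the text should be read.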
A slightly weaker relationship is observed between the balance of primary incomes and GDP per capita. With the growth of the independent variable by one standard deviation, GDP per capita grows by 0.493 standard deviations; here, too, the dependence is direct. The balance of primary incomes itself has unstable dynamics, changing several times from a negative value to a positive one, and volatility is observed both in income from wages and in income from investment. In general, the most frequent state is a surplus of the balance of primary incomes, in which investment income predominates.
The last indicator, liabilities of FDI, had the least impact on the dependent variable in comparison with the other two. Growth in FDI liabilities leads to an increase in GDP per capita by 0.372 standard deviations. The statistics explain why in this case the relationship is much weaker than for assets: over the analysed period, the inward FDI stock of Austria rose by almost 9 times, while Austria's foreign investments in other countries rose by almost 20 times. A second confirmation is the dynamics of Greenfield Investments: the value of foreign investments in the "empty lot" projects in Austria over the whole analysed period was four times less than was invested abroad by Austrian investors.
The following reasoning explains the existence of a direct relationship between the variables. As is known, in theory the influence of FDI on the recipient country may be ambiguous or even negative. A positive effect depends on a number of factors that the country must possess, among them the level of education, the domestic investment rate, political stability, the terms of trade, and the state of the financial sector. The country under consideration is characterized by administrative transparency, the rule of law, a favourable business environment, sound tax legislation, the protection of investors' rights, the stability of legislation, and political stability.
It should be noted that at the moment the government has taken a course towards attracting foreign direct investments. For this purpose, supportive programmes, regional incentives, incentives for small and medium-sized enterprises, technology incentives, and environmental incentives have been created. This policy is appropriate today for the further economic growth of Austria. Proof of this can be seen in the analysis of the dynamics of the balance of payments. Throughout the period considered, the current account of the Austrian economy is in surplus, and the financial account is also positive. Hence it follows that, in order to maintain the surplus and the balance of payments itself, the balance of current operations must prevail over the financial balance. However, in practice, Austria's balance of payments indicator is volatile, and in the last two years there has been a deficit caused by the predominance of the financial account. In order to eliminate the deficit, an additional inflow of capital will be required.
Thus, the research shows that the growth of Austria's foreign direct investment, which resulted from the liberalization of the world and European economy as a whole, has had a positive impact on its GDP. Activities aimed at stimulating investment are therefore fully justified and understandable.
Conclusions
Foreign direct investment plays an important role in the development of the national economy of Austria. Due to its geopolitical position, Austria has for a long period of time enjoyed a high investment yield thanks to its activity in Central, Eastern, and South-Eastern Europe (CESEE). Austria has a high level of business activity; the government conducts activities to stimulate investment in R&D and high-tech enterprises, create jobs, protect the environment, etc. The strengths of Austria remain political and macroeconomic stability, a developed financial system, attractive corporate taxation, developed infrastructure, highly skilled labour, high productivity and international competitiveness, a high quality of life, etc. At this stage, it is important to boost the efficiency of direct investment within the country and its positive impact on the dynamics of the real economy and GDP. The paper demonstrates the increased resilience of Austria's financial system to external shocks. The main factor is a significant improvement in the country's international investment position, which since 2013 has become positive. It is therefore possible to forecast a gradual improvement in the balance of the country's primary incomes, which will contribute to the further growth of the current and financial account surplus and will strengthen Austria's influence on the development of the European and global financial systems.
Efficacy and Safety of Endoscopic Resection for Small Gastric Gastrointestinal Stromal Tumors in Elderly Patients
Background Gastrointestinal stromal tumors (GISTs) are prevalent in elderly patients. Endoscopic resection has become popular for treating small (≤5 cm) gastric GISTs. However, little is known about the outcomes of endoscopic resection in elderly patients. Aim To assess the efficacy and safety of endoscopic resection for small (≤5 cm) gastric GISTs in elderly patients (≥65 years old). Methods A total of 260 patients (265 lesions) with gastric GISTs treated via endoscopic resection from January 2011 to May 2020 were retrospectively analyzed. Among them, 65 patients were ≥65 years old (elderly group), and 195 patients were <65 years old (nonelderly group). Clinicopathological characteristics, postoperative complications, and tumor recurrence rates between the two age groups were compared. Results A total of 260 patients with primary small (≤5 cm) gastric GISTs were treated with endoscopic resection. The median ages of the elderly and nonelderly groups were 68 (range 65-83) years and 55 (range 32-64) years, respectively. Elderly patients showed a higher incidence of comorbidities compared with nonelderly patients (61.5% versus 32.3%, respectively; p < 0.001). All elderly patients and 99.0% of nonelderly patients underwent en bloc resection; only two nonelderly patients received piecemeal resection. No significant differences were found regarding postoperative complications or tumor recurrence rates between the two groups. Conclusions Although elderly patients had more comorbidities than nonelderly patients, both groups had similar postoperative complications and recurrence rates. We suggest that endoscopic resection performed by experienced endoscopists is safe and effective for treating small (≤5 cm) gastric GISTs in elderly patients.
Introduction
Gastrointestinal stromal tumors (GISTs) are the most common gastrointestinal tract neoplasm originating from mesenchymal tissue [1], and more than 5000 new cases occur in the United States each year [2]. Most GISTs have gain-of-function mutations in c-KIT or platelet-derived growth factor receptor alpha that promote tumor cell proliferation and inhibit tumor cell apoptosis [3,4]. Clinical manifestations of GISTs vary greatly from the Carney triad [5] and GIST-paraganglioma syndrome to incidental physical examination findings [6]. Almost all GISTs are considered to have malignant potential, regardless of the size or mitotic index of the tumors [1]. Previous studies have indicated that the stomach is the most frequent location for primary GISTs, followed by the small intestine, colon, rectum, and esophagus [2,7,8]. Moreover, gastric GISTs present with a smaller size and fewer symptoms compared with GISTs at other sites [9].
GISTs can appear at any age without significant gender differences [10]. GISTs can be diagnosed at any age, with the peak age of incidence of GISTs ranging from 60 to 74 years [11,12]. Elderly individuals are more likely to have multiple comorbidities and a worse systematic physical condition compared with younger patients [13], which greatly impact the choice of therapeutic approaches for elderly patients with GISTs. There is a lack of understanding of appropriate therapeutic options for elderly patients with GISTs due to the underrepresentation of clinical trials in this field. With the aging of the global population, further studies are urgently needed regarding treatment strategy exploration for elderly patients with GISTs.
Recently, the increasing availability and universality of endoscopy have significantly augmented the detection rate of GISTs. Open surgery or laparoscopic resection is recommended as the standard treatment for GISTs originating from the muscularis propria [14]. However, the endoscopic treatment of small (≤5 cm) GISTs remains controversial. Some studies have reported that endoscopic resection for GISTs predisposes patients to unavoidable perforation and a positive tumor margin, which results in local recurrence and distant metastasis [15]. Various studies have pointed out that endoscopic resection is a safe and reliable treatment for GISTs [16][17][18]. Currently, endoscopic resection, including endoscopic submucosal dissection (ESD), endoscopic full-thickness resection (EFTR), and endoscopic submucosal excavation (ESE), has been gradually adopted for treating gastrointestinal tumors [19]. Compared with surgical resection, endoscopic resection has the tremendous advantages of being a less invasive procedure, requiring less administration of sedative or analgesic medications, and having a lower cost and shorter length of postoperative hospital stay [20]. Thus, endoscopic resection seems to be a better therapeutic choice for GISTs.
To date, the clinical outcomes of endoscopic treatment for elderly patients have not been reported. Therefore, it is important to evaluate the efficacy and safety of endoscopic resection in patients with primary gastric GISTs. Here, we conducted a retrospective study that analyzed clinical data from 260 patients with primary small (≤5 cm) gastric GISTs who underwent endoscopic resection between 2011 and 2020 at the First Affiliated Hospital, Zhejiang University School of Medicine (Hangzhou, China), to gain a better understanding of the efficacy and safety of endoscopic resection for small (≤5 cm) gastric GISTs in elderly patients (≥65 years).
Patients.
In this study, we retrospectively enrolled patients with pathologically confirmed gastric GISTs who underwent endoscopic treatment at the First Affiliated Hospital, Zhejiang University School of Medicine (Hangzhou, China), between January 2011 and May 2020. We received approval from the Research Ethics Committee of the First Affiliated Hospital, Zhejiang University School of Medicine. The inclusion criteria were as follows: (1) the gastric lesions were diagnosed histopathologically as GISTs; (2) the tumors were 5 cm or smaller; and (3) the patients had no evidence of lymph node or distant metastasis. Patients were excluded if they died from a non-tumor-related cause. Finally, 260 patients with gastric GISTs were enrolled. Among them, 65 were aged 65 years or older (elderly group) and 195 were younger than 65 years old (nonelderly group).
Endoscopic Procedures

Before endoscopic resection, patients underwent a preoperative examination to evaluate their physical condition, tumor size, tumor origin, and tumor metastasis. Patients who took anticoagulants or antiplatelets were asked to stop these drugs at least 1 week prior to endoscopic resection if feasible. All the endoscopic procedures were performed by experienced endoscopists. After fasting for 6 h, the patients received endoscopic treatment under general anesthesia with endotracheal intubation. Blood loss was defined as the amount of bleeding during the endoscopic procedure with or without the need for endoscopic hemostasis. After the endoscopic resection procedures, all patients underwent gastrointestinal decompression and four days of routine hospital observation. All resected tissues were collected and immediately prepared for pathological examination to determine the diagnosis. Endoscopic resection procedure-related death was defined as death from adverse effects within 30 days after the operation.
Endoscopic Submucosal Dissection

ESD is a common approach used for the resection of GISTs with intraluminal growth patterns. The major steps of ESD are as follows: (1) marking: the electrosurgical knife was used to place marker dots about 5-10 mm outside the target lesions; (2) injection: the mixed solution (0.9% normal saline solution plus 0.002% indigo carmine plus 0.001% epinephrine) was injected into the submucosal layer to elevate the GISTs; (3) precutting: the needle-knife was used to cut the mucosa along the marker dots; (4) incision: the muscularis propria layer associated with GISTs was stripped using an insulation-tipped knife; (5) submucosal dissection: the needle-knife was used to completely dissect tumors from the muscularis propria layer; and (6) closing of the wound (Figure 1).
Endoscopic Full-Thickness Resection

EFTR is a common approach used for the resection of GISTs with extraluminal and mixed growth patterns. The first three steps of EFTR are the same as those for ESD. After (1) marking, (2) injection, and (3) precutting, (4) the insulation-tipped knife was used to cut the serosal layer to form a circumferential incision surrounding the tumor. (5) Thereafter, the tumor and surrounding serosal layer were completely removed by snaring. (6) Finally, a loop-and-clip closure technique or over-the-scope clip was used to close the defect (Figure 2).
Endoscopic Submucosal Excavation

ESE is a common approach used for the resection of GISTs with intraluminal growth patterns. The first three steps of ESE are the same as those for ESD. After (1) marking, (2) injection, and (3) precutting, (4) the insulation-tipped knife or hook knife was used to expose the tumors. (5) Thereafter, the tumor and surrounding tissue were completely resected by snaring. (6) Finally, metallic clips were used to close the defect (Figure 3).

Submucosal Tunneling Endoscopic Resection

Submucosal tunneling endoscopic resection (STER) is an approach used for the resection of GISTs with intraluminal growth patterns. After (1) marking and (2) injection, (3) a hook knife was used to cut a 1.5-2.0 cm longitudinal mucosal incision as a tunnel entrance. (4) A submucosal tunnel was created to ensure a sufficient view and workspace. (5) The insulation-tipped knife or hook knife was used for tumor resection. (6) Finally, hemostatic clips were used to close the defect (Figure 5).
Clinicopathologic Variables

Demographic data (age and gender), clinical data (clinical symptoms and comorbidities), tumor characteristics (number of lesions, tumor location, tumor size, tumor growth pattern, and pathological outcome), procedure-related details, and postoperative outcomes were obtained. Based on age, patients were classified into the nonelderly (<65 years old) or elderly (≥65 years old) groups.
2.8. Pathology Assessment. The pathology assessment was confirmed by two experienced pathologists. En bloc resection was defined as resection of a tumor in a single piece by endoscopic treatment. Gastric GIST was diagnosed by immunohistochemical staining for CD117 (c-KIT), CD34, DOG-1, S-100, SMA, and desmin on paraffin-embedded specimens. The risk classification of all specimens was based on the modified National Institutes of Health (NIH) risk stratification [21].
Definitions of Postoperative Complications.
Delayed bleeding was defined as upper or lower gastrointestinal bleeding requiring an emergency endoscopy hemostatic procedure after endoscopic resection. Perforation was defined as gastric wall penetration diagnosed by the presence of abdominal free air on plain radiography after endoscopic resection.
Follow-Up.
Follow-up strategies were conducted via an outpatient service or telephone call. Endoscopic examinations were performed at 1, 3, 6, and 12 months after endoscopic treatment and once yearly thereafter to evaluate the healing of the wound and to exclude local recurrence of the tumor. An abdominal computed tomography scan was performed to check for metastasis every year.
2.11. Statistical Analysis. All data analyses were conducted using SPSS statistics version 25.0 software (SPSS Inc., Chicago, IL, USA). Continuous variables were expressed as mean ± standard deviation, and categorical data were expressed as absolute values (n) and percentages (%). Continuous variables were compared using the t-test or the Mann-Whitney rank sum test, and categorical variables were compared using the chi-squared test or Fisher's exact test. Recurrence-free survival (RFS) was defined as the time from endoscopic resection to diagnosis of tumor recurrence, which was performed using the Kaplan-Meier curve method and log-rank test. All tests were two-sided, and p values < 0.05 were considered statistically significant.
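As a rough illustration of the recurrence-free survival comparison described above, the following Python sketch uses the lifelines package with placeholder follow-up data; the column names, values, and group labels are assumptions for demonstration only, not the study dataset (which was analyzed in SPSS).

# Minimal sketch of a Kaplan-Meier / log-rank comparison of recurrence-free
# survival (RFS) between two age groups, using placeholder data.
# Requires: pip install lifelines pandas
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: time to recurrence or censoring (months),
# event = 1 if recurrence was observed, 0 if censored.
df = pd.DataFrame({
    "months":  [12, 35, 48, 60, 24, 37, 55, 72, 18, 40],
    "event":   [0,  0,  1,  0,  0,  0,  0,  1,  0,  0],
    "elderly": [1,  1,  1,  1,  1,  0,  0,  0,  0,  0],
})

groups = {"elderly": df[df["elderly"] == 1], "nonelderly": df[df["elderly"] == 0]}

# Fit one Kaplan-Meier curve per group and report RFS at 36 months.
for label, g in groups.items():
    kmf = KaplanMeierFitter(label=label)
    kmf.fit(g["months"], event_observed=g["event"])
    print(label, "RFS at 36 months:", float(kmf.predict(36)))

# Log-rank test for a difference in RFS between the groups.
result = logrank_test(
    groups["elderly"]["months"], groups["nonelderly"]["months"],
    event_observed_A=groups["elderly"]["event"],
    event_observed_B=groups["nonelderly"]["event"],
)
print("log-rank p-value:", result.p_value)

The printed p-value plays the same role as the p = 0.395 comparison of RFS reported in the Results; with real data the only change would be loading the actual follow-up table in place of the placeholder frame.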
Demographic and Clinicopathological Characteristics.
Of the 275 patients with small (≤5 cm) primary GISTs who underwent endoscopic resection, four patients with GISTs from other locations were excluded and 11 patients were excluded due to incomplete histopathological information (Figure 6). A total of 260 patients with small (≤5 cm) primary gastric GISTs were included in our study, among whom 65 (25%) were aged 65 years or older (elderly group; 65 patients, 66 lesions) and 195 (75%) were younger than 65 years old (nonelderly group; 195 patients, 199 lesions). The demographic and clinical characteristics of the two groups are shown in Tables 1 and 2. Among these individuals, 116 (44.6%) were male and 144 (55.4%) were female. The median ages of the elderly and nonelderly groups were 68 (range 65-83) years and 55 (range 32-65) years, respectively. Elderly patients showed a higher incidence of comorbidities (elderly group versus nonelderly group; 61.5% versus 32.3%; p < 0.001) compared with younger patients. In both groups, the most common comorbidity was hypertension (nonelderly group versus elderly group; 19.0% versus 40.0%; p = 0.01), followed by diabetes mellitus (8.7% versus 12.3%; p > 0.05), pulmonary disease (6.7% versus 13.8%; p > 0.05), and cardiovascular disease (4.6% versus 15.4%; p = 0.004). Elderly patients were more susceptible than nonelderly patients to two or more comorbidities (elderly group versus nonelderly group; 20.0% versus 6.7%; p = 0.002). There were no significant differences in clinical presentation between the elderly and nonelderly groups. Abdominal pain or discomfort was the most common symptom of gastric GISTs at all ages (39.2%) followed by belching (8.5%), bleeding (3.5%), and dyspepsia (1.5%). Other symptoms including diarrhea, vomiting, wasting, and chest tightness only occurred in nonelderly patients. More than 40% of patients showed no symptoms (nonelderly group, 46.7%; elderly group, 43.1%); therefore, the lesions were found on physical examination. The mean tumor diameters of the nonelderly and elderly groups were 1.75 and 2.04 cm, respectively. Elderly patients exhibited a significantly higher prevalence of ulceration on the tumor surface (nonelderly group versus elderly group; 4.0% versus 12.1%; p = 0.017). Only one patient in the elderly group showed an irregular tumor margin. There were no significant differences in the tumor location (the most common location was the gastric fundus), history of alcohol consumption, history of smoking, Helicobacter pylori infection, and family history between the elderly and nonelderly groups.
Histopathological Outcome.
Tumor tissues were collected for pathological evaluation after endoscopic resection.
As presented in Table 2, the cell morphology (spindle, epithelioid, and mixed), tumor growth pattern (intraluminal, extraluminal, and mixed), and mitotic count (≤5/50 high-power field (HPF), 6-10/50 HPF, and >10/50 HPF) were classified into three groups, respectively. For morphology, 97.0% of elderly patients and 97.5% of nonelderly patients presented with a spindle cellular phenotype. For tumor growth pattern, most lesions showed the intraluminal growth pattern (elderly patients, 77.3%; nonelderly patients, 76.9%) in both groups. For mitotic count, most lesions were assessed as ≤5/50 HPF in the elderly (95.5%) and nonelderly (95.0%) groups. According to the modified NIH risk stratification [21], the risk classification of GISTs includes very low risk, low risk, intermediate risk, and high risk. In total, less than 10% of lesions in the elderly (7.6%) and nonelderly (

The postoperative complications are shown in Table 3. No adverse events were observed in the elderly group. Delayed bleeding occurred in one (0.5%) nonelderly patient, and perforation occurred in another one (0.5%) nonelderly patient; both patients have recovered after endoscopic treatment. No patients died from the ER-related procedure. Because perioperative complications did not differ significantly, we performed a subgroup analysis based on the endoscopic procedure (Figure 7). Due to the small numbers of patients, subgroup analyses were not performed for ESR and STER. For ESD and EFTR, no significant differences occurred in postoperative fasting, postoperative antibiotic usage, length of hospital stay, or hospitalization expenses between the elderly and nonelderly groups. For ESE, elderly patients had a longer length of hospital stay than nonelderly patients (elderly patients versus nonelderly patients; 6.89 ± 1.62 days versus 5.64 ± 1.17 days, p = 0.039). Postoperative fasting, postoperative antibiotic usage, and hospitalization expenses were similar between elderly and nonelderly patients who underwent ESE.
3.5. Prognosis of Small (≤5 cm) Gastric GISTs. In our study, the median follow-up time was 35 months (range: 1-105 months) for elderly patients and 37 months (range: 2-95 months) for nonelderly patients. In both nonelderly and elderly patients, small (≤5 cm) gastric GISTs showed a favorable prognosis. Only one nonelderly patient and one elderly patient had local tumor recurrence without distant metastases. Detailed clinicopathological features of these two patients are summarized in Table 4. No deaths were observed in either group during the follow-up period. There was no significant difference in the RFS rate between the two groups (p = 0.395) (Figure 8).
Discussion
GISTs are the most common soft tissue sarcomas of the gastrointestinal tract [1], with an annual incidence of 10-15 cases per million [7]. GISTs are characterized by positive expression of CD117 (c-KIT) (95%), CD34 (60-70%), SMA (30-40%), and desmin (1-2%) [22,23]. Primary GISTs can be found anywhere along the gastrointestinal tract, but the stomach is the most frequent location for these tumors [24]. GISTs may occur anywhere between the ages of 10 and 100 years, while the median age at diagnosis is the sixth decade of life [7]. Previous studies indicated that the risk and incidence of GISTs increased with age [23,25]. According to the European Medicines Agency, the age of 65 years was defined as the threshold for elderly patients, which has been used in previous studies [26,27]. Therefore, we also chose 65 years of age as the point at which to divide the patients into different age groups. An article published in the Lancet in 2017 predicted life expectancy in 35 industrialized countries, indicating that by 2030 both males and females will have a remaining life expectancy exceeding 20 years at age 65 [28]. The rapid growth of the aging population will increase the urgent need for effective treatment of GISTs in elderly patients. Previous studies have indicated that increased attention has been paid to elderly patients during the past decade [29]. Herein, we presented a study that assessed the clinicopathological characteristics of primary gastric GISTs and treatment outcomes of elderly and nonelderly patients to explore the effects of age on the prognosis of primary gastric GISTs and the selection of the therapeutic modality.
Elderly patients with primary gastric GISTs remain a medical challenge, mainly due to the presence of multiple comorbidities and poor physical function. In this study, comorbidities were more frequent in elderly patients than in nonelderly patients. Moreover, elderly patients were more prone to suffer from two or more comorbidities, which is in agreement with previously published literature [12,30,31]. We found that gender distributions, tumor location, tumor size, mitotic count, growth pattern, and risk classification are similar between elderly and nonelderly patients, in line with previous findings [32,33]. The National Comprehensive Cancer Network (NCCN) guidelines state that ulceration is a possible high-risk feature of GISTs [21]. However, no study has investigated the differences in the detection rate of tumor ulceration of primary gastric GISTs between elderly and nonelderly patients. In our study, tumor ulceration occurred in 12.1% (8/66) of lesions in elderly patients versus 4.0% (8/199) of lesions in nonelderly patients. Remarkably, despite tumor ulceration being more common in elderly patients than in nonelderly patients, both elderly and nonelderly groups showed similar clinical outcomes, suggesting that tumor ulceration alone is not sufficient to evaluate the prognosis of GISTs.
The optimal treatment for small (≤5 cm) GISTs is controversial. The NCCN suggests that very small (<2 cm) gastric GISTs without high-risk EUS features should be monitored periodically and that complete surgical resection should be conducted in patients with high-risk EUS features [21]. The European Society for Medical Oncology suggests that localized GISTs should be completely excised by laparoscopic excision or open surgery [34]. Further, the Japanese Clinical Practice Guidelines for GISTs suggested that small (≤5 cm) GISTs can be excised by endoscopic surgery [35]. Although recent years have seen a development in the endoscopic treatment for small (≤5 cm) GISTs, the increased risk of recurrence caused by perforation and the residual tumor margin in endoscopic procedures has limited the application of endoscopic resection. In fact, many studies have demonstrated that endoscopic resection has several clear advantages for the treatment of small (≤5 cm) GISTs compared to open surgery, including a shorter length of operation time, less intraoperative blood loss, and a lower adverse event rate [20,[36][37][38][39]. Moreover, several publications have indicated that minimally invasive approaches, such as ER, laparoscopic surgery, and laparoscopic and endoscopic cooperative surgery, are safe and feasible for elderly patients [40,41]. In this study, both elderly and nonelderly groups showed similar clinical outcomes. The en bloc resection rates of the two groups were similar, both of which were over 99%. Notably, intraoperative outcomes (intraoperative blood loss), postoperative outcomes (postoperative fasting, postoperative antibiotic usage, and length of hospital stay), and hospitalization expenses were not significantly different between the groups. In our study, the major postoperative complications of endoscopic resection were delayed bleeding and perforation, and no significant difference in the occurrence of postoperative complications was detected between the two age groups, which is similar to the findings of previous studies [31,42]. Collectively, our results show that endoscopic resection is a safe and economical intervention for both elderly and nonelderly patients.
However, it has been reported that age may impact treatment outcomes and there is a trend toward more postoperative complications in elderly patients [43]. Yang et al. indicated that elderly patients had more postoperative complications when compared with nonelderly patients [44]. This phenomenon can be explained by the fact that a high number of elderly patients underwent open surgery in that study. A higher incidence of comorbidities significantly reduces the tolerance of open surgery in elderly patients, which is why postoperative complications are more common in elderly patients who undergo open surgery compared to nonelderly patients. Therefore, endoscopic resection is considered feasible in elderly patients to treat small gastric GISTs.
Studies have shown that age is an important risk factor for predicting gastric GISTs and that elderly patients tend to have a poor prognosis [32,44,45]. In contrast, some studies have shown that age is not a sensitive clinical predictor for determining the prognosis for GISTs [26,[46][47][48], which indicated that the Charlson comorbidity index, prognostic nutritional index, tumor size, and proliferative index could be significant prognostic factors for predicting the prognosis of patients with GISTs. In our study, elderly and nonelderly patients had similar long-term outcomes, and no diseaserelated deaths were observed during the follow-up period in either group. Moreover, there were no statistically significant differences in RFS between the two age groups. Our findings indicate that treatment decisions for patients with GISTs cannot be determined solely by age.
To the best of our knowledge, this is the first comparative analysis to explore the clinical outcomes of endoscopic treatment for elderly and nonelderly patients with small (≤5 cm) gastric GISTs. However, there are several limitations to our study. The first limitation is the patient selection bias due to the retrospective nature of the study and the small number of elderly patients. Second, the follow-up period of this study was not long enough to explore the long-term outcomes of endoscopic resection. Third, all of the endoscopic procedures were performed by experienced endoscopists, leading to improvement in the success rates of endoscopic treatment in both groups. Therefore, a large multicenter randomized controlled prospective study with a long follow-up duration is necessary to further verify the long-term outcome of endoscopic treatment in elderly patients with small (≤5 cm) gastric GISTs.
In conclusion, we explored the efficacy and safety of endoscopic resection for small (≤5 cm) gastric GISTs, especially in patients aged 65 years or older. In our study, elderly patients had more comorbidities compared with nonelderly patients. However, the postoperative complications and recurrence rates were similar between elderly and nonelderly patients. Therefore, we suggest that endoscopic resection performed by experienced endoscopists is a safe and effective treatment strategy for small (≤5 cm) gastric GISTs in elderly patients. More studies are needed to further evaluate the long-term outcomes of ER for small (≤5 cm) gastric GISTs in elderly patients.
Data Availability
All data of this study could be obtained by emailing the corresponding author.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this article.
Authors' Contributions
FJ conceived the study. CC and JY gathered the data, performed the study, and wrote the manuscript. MR and LL analyzed the data. XZ and MY revised the manuscript critically. All authors read and approved the final manuscript.
Role of endometrial immune cells in implantation
Implantation of an embryo occurs during the mid-secretory phase of the menstrual cycle, known as the "implantation window." During this implantation period, there are significant morphologic and functional changes in the endometrium, which is followed by decidualization. Many immune cells, such as dendritic and natural killer (NK) cells, increase in number in this period and early pregnancy. Recent works have revealed that antigen-presenting cells (APCs) and NK cells are involved in vascular remodeling of spiral arteries in the decidua and lack of APCs leads to failure of pregnancy. Paternal and fetal antigens may play a role in the induction of immune tolerance during pregnancy. A balance between effectors (i.e., innate immunity and helper T [Th] 1 and Th17 immunity) and regulators (Th2 cells, regulatory T cells, etc.) is essential for establishment and maintenance of pregnancy. The highly complicated endocrine-immune network works in decidualization of the endometrium and at the fetomaternal interface. We will discuss the role of immune cells in the implantation period and during early pregnancy.
Introduction
The endometrium is the site where the blastocyst is implanted and a key place not only for supporting fetal growth through supplementation of oxygen and nutrients but also for protecting the embryo and later the fetus from microbial invasion during pregnancy. Implantation of the embryo occurs during the mid-secretory phase of the menstrual cycle, known as the "implantation window. " During this implantation period, there are significant morphologic and functional changes in the endometrium, which is followed by decidualization. These events in the endometrium are mainly controlled by ovarian steroid hormones-estrogen and progesterone [1].
The uterine endometrium consists of two main cellular components, the stromal cells and the glandular cells. During the implantation window, the fibroblast-like endometrial stromal cells are transformed into larger and rounded decidual cells (decidualization). In the glandular cells, secretory glandules develop and large apical protrusions (pinopodes) and microvilli emerge as well [2]. Furthermore, ovarian steroid hormones regulate the expression of various cytokines, chemokines, growth factors, and adhesion molecules in the secretory endometrium [1].
Around the implantation period, there is a major change in the proportion and number of endometrial immune cells [3,4]. This peri-implantation period seems to be the earliest period in which the mother can recognize that she is pregnant [5]. The serum hCG concentration increases during this period, and it is likely that the maternal immune cells recognize fetal antigens from the implantation period [6]. It is now widely accepted that immunologic tolerance is inevitable for establishment and maintenance of pregnancy [7].
In this article, the role of immune cells in the endometrium during the peri-implantation period and the following period will be discussed.
Peripheral blood and endometrial lymphocytes during a menstrual cycle
It has been reported that the number and proportion of immune cells in the peripheral blood and endometrium change between the follicular and luteal phases of the ovarian cycle. However, exact figures on the fluctuation of peripheral blood lymphocytes remain unknown due to the contradictory results of different studies [8,9].
Recently, our group published a relatively large-scale study that was designed to sample peripheral blood serially during a menstrual cycle [10]. In the luteal phase, the percentage of CD3 + T and CD3 + CD4 + helper T (Th) cells decreased, but the natural killer (NK) cell percentage and NK cell cytotoxicity increased. However, other lymphocyte subpopulations (B and natural killer T [NKT] cells) and the ratios of Th1/Th2 cytokines producing Th cells did not fluctuate. One important characteristic of peripheral blood immune cells is that they have been suggested as a major source of endometrial immune cells.
Dendritic cells and macrophages in the endometrium and decidua
Dendritic cells (DCs) and macrophages, the major antigen presenting cells in the endometrium, seem to play an important role in the maintenance of pregnancy. After implantation of a blastocyst, DCs are recruited into the endometrium and accumulated, especially around the implanted embryo [11]. In the deciduae, DCs represent around 5-10% of all hematopoietic uterine cells. DCs are not only essential for the induction of primary immune responses but also important for the induction of immunological tolerance. The function and differentiation of DCs are regulated by the local microenvironment determined by cytokines and chemokines [11]. Levels of colony-stimulating factor (CSF)-1 synthesized by the uterine epithelium increase at the time of implantation and continue to elevate dramatically throughout the process of placentation [12]. This CSF-1 is the major regulator of the mononuclear phagocytic lineage and controls the proliferation, migration, viability, and function of DCs and macrophages and affects decidual cells and trophoblasts ( Figure 1) [12]. Endometrial epithelial cells produce leukemia inhibitor factor (LIF) as well as CSF-1. LIF plays a role in embryo implantation and decidualization. DCs secrete soluble FMS-like tyrosine kinase1 (sFLT1) and transforming growth factor (TGF)-β1, which act locally and regulate angiogenesis in the endometrium and are involved in the development of regulatory T (Treg) cells ( Figure 2) [12].
In a study done in pregnant mice, depletion of uterine DCs resulted in the severe impairment of the implantation process, leading to embryo resorption [11]. The authors suggested that uterine DCs directly fine-tune decidual angiogenesis by providing two critical factors, sFlt1 and TGF-β1, that promote coordinated blood vessel maturation. Collectively, uterine DCs appear to govern uterine receptivity, independent of their predicted role in immunological tolerance, by regulating tissue remodeling and angiogenesis [11].
Macrophages are the innate immune cells that eliminate microbes in the endometrium. The number of endometrial macrophages increases in the late secretory endometrium as compared to the proliferative phase [13]. Decidual macrophages can participate in diverse activities during pregnancy [14]. Decidual macrophages comprise about 20-25% of the total decidual leukocytes and are the main subset of human leukocyte antigen (HLA)-DR + antigen-presenting cells (APCs) in human deciduae [14]. Decidual macrophages activated by pro-inflammatory cytokines and microbial lipopolysaccharide (LPS) are classified as M1-type, which secretes tumor necrosis factor (TNF)-α and interleukin (IL)-12 and participates in the progression of inflammation [14]. M2 polarization induced by glucocorticoids and Th2 cytokines such as IL-4, IL-10, and IL-13 is characterized by enhanced innate immunity receptors (scavenger receptors and macrophage mannose receptors) and upregulation of arginase activity, which counteracts nitric oxide synthesis [14]. In addition, M2 macrophages demonstrate increased secretion of IL-1R antagonist and are essential for tissue remodeling and immune tolerance during pregnancy ( Figure 3) [14]. In early pregnancy, pro-angiogenic factors secreted from macrophages prompt vascular remodeling to ensure appropriate utero-placental circulation. In contrast, macrophages accumulated in the low uterine segment participate in cervical ripening during late pregnancy [14]. A balance of M1 and M2 macrophages may contribute to the outcome of pregnancy.
T cells in the endometrium and deciduae
Some authors have reported that the proportion of endometrial lymphocytes was not found to fluctuate during a menstrual cycle [15]. However, this finding is not supported by others, who found a significant decrease of endometrial T cells from 55% to 6.7% between the late proliferative and the late secretory phases [4]. Even though there is a discrepancy regarding the proportion of T cells in the secretory endometrium, functional changes of T cells in the endometrium are evident. The balance between effector T and regulatory T cells may change from the peri-implantation period. This balance is very important for fetomaternal immune tolerance, the essential mechanism for a successful pregnancy.
Type1 and Type2 cytokine producing T cells
During pregnancy, type 1 (pro-inflammatory immune reaction mediated by TNF-α, IFN-γ, etc.) and type 2 (anti-inflammatory response characterized by IL-4, IL-10, etc.) immune reactions develop sequentially [16]. The process of embryo implantation and the following invasion into the endometrium is characterized by a pro-inflammatory reaction [16]. The second phase of rapid fetal growth and development represents an anti-inflammatory, Th2 environment. The end stage of pregnancy, parturition, is the phase of inflammation characterized by an influx of immune cells into the myometrium.
Even though the number of T cells present in the decidua decreases during pregnancy as compared to the non-pregnant endometrium, these T cells may affect acceptance of the fetus by producing cytokines [7]. It is well known that a strong Th1 environment is harmful for pregnancy and the Th2 response ameliorates the Th1 response [7]. Type 1 cytokines inhibit trophoblast invasion, stimulate apoptosis of human trophoblast cells, and enhance decidual macrophage activity, all of which result in the production of factors harmful to the embryo. Also, these cytokines negatively influence fetal growth by activation of prothrombinase, generation of thrombin, formation of clots, and production of IL-8, which stimulates granulocyte and endothelial cells to terminate the blood supply to the developing placenta [7].
γδ T cells
T cells can be classified by the expression of T cell receptor (TCR): TCR is composed of a combination of α, β, and ζ chains. Most T cells have TCRs with α and β chains. However, some T cells-γδ T cells-do have TCRs made up of γ and δ chains, not of α and β ones. These two T cell subsets seem to have distinct lineages and play different roles.
γδ T cells are commonly distributed in the mucosal organs, such as the uterus, the vagina, the tonsils, and the intestine, and may play a role in bridging the innate and adaptive immunity. Nearly 70% of decidual T cells express γδ TCRs and most γδ T cells are activated [17]. The majority of progesterone receptor (PR) + lymphocytes are γδ TCRs + and/or CD8 + T cells, and these PR + lymphocytes in the peripheral blood increase in number during normal pregnancy. PR expression is regulated in a hormone-independent manner and is upregulated by activation of these lymphocytes. Furthermore, lymphocyte immunotherapy for recurrent miscarriages has been shown to induce lymphocyte PRs and has shown that these are related to the success or failure of gestation [17]. It is suggested that γδ T cells can recognize trophoblast-presented antigens [17]. It is reported that progesterone-induced blocking factor (PIBF) is produced by activated lymphocytes and trophoblasts. PIBF downregulates Th1 cytokines, stimulates Th2 cytokines and antibody synthesis, and inhibits NK cell activity and phospholipase A2 in arachidonic acid metabolism [17].
In summary, γδ T cells are likely to be activated by recognition of fetal antigens, and activated γδ T cells produce PIBF under the influence of progesterone. PIBF induces a Th2 dominant microenvironment, inhibits NK cell cytotoxicity, and maintains uterine quiescence by blocking prostaglandin synthesis.
Treg cells
There are several types of immune regulatory T lymphocytes: Tr1 cells (IL-10 producing T cells), Th3 cells (TGF-β secreting T cells), TofB cells (regulatory T cells stimulated by B cells), and Foxp3 + Treg cells [18,19]. Among these cells, Foxp3 + Treg cells have been most extensively studied and most of them correspond to CD4 + CD25 bright T cells.
There are two distinct lineages of Foxp3 + Treg cells. One lineage, nTreg cells, originates from the thymus. The other, iTreg cells, is known to be induced in the periphery by activation of naïve T cells in the presence of TGF-β. Foxp3 + Treg cells can be activated by APCs and play a role in immune regulation against other immune cells such as Th1 and Th2 cells, B cells, and NK cells by expression of TGF-β, IL-10, cytotoxic T-lymphocyte antigen 4 (CTLA-4), etc. [20].
Even though there is no report showing a change in the number of Treg cells in the midluteal phase of the menstrual cycle, Treg cells increased in the peripheral blood during normal pregnancy [21]. In a study in women with recurrent pregnancy loss (RPL), Treg cells decreased in the peripheral blood and in the decidua as compared to those of women underwent elective abortion [22]. These findings indicate that Treg cells may be deeply involved in maternal immune tolerance against a fetus.
Some papers have insisted that estrogen is a key factor in the proliferation of Treg cells in the late follicular phase or during pregnancy [23,24]. According to recent studies, however, expansion of Treg cells during pregnancy is relevant to exposure to paternal antigen or products such as sperm and factors in the seminal fluid, but not to estrogen or progesterone [25][26][27].
Interestingly, hCG may have a role in the recruitment of Treg cells into the uterus during pregnancy [28]. It remains unclear how Treg cells protect a fetus in the uterus. One explanation is that Treg cells may expand following recognition of paternal antigens, and are trafficked to the fetomaternal interface by hCG and several chemokines, where they help embryo implantation, placentation, and fetal growth by secretion of TGF-β, IL-10, LIF, Heme oxygenase (HO)-1, etc. [28] ( Figure 4).
Th17 cells
Recently, a novel subset of T cells called Th17 cells has been reported to induce experimental autoimmune encephalomyelitis and adjuvant arthritis in a mouse model [29,30]. Th17 cells are directly involved in chronic inflammatory processes by secreting IL-17, which recruits neutrophils to tissue through induction of granulocyte colony-stimulating factor and IL-8 [31]. Th17 cells have a distinct developmental lineage, which is different from Th1 and Th2 cells [29,32]. TGF-β has been suggested as a crucial cytokine for Th17 cell development, in conjunction with IL-6 and IL-21 in human [33][34][35][36]. Th17 cells are mainly regulated by Treg cells.
The level of Th17 cells in the endometrium at various points in time during a menstrual cycle remains unexplored. A recent study reported that the proportions of Th17 cells in the peripheral blood and deciduae were higher in pregnant women with unexplained RPL as compared to normal women in early pregnancy [37]. Non-pregnant women with unexplained RPL had a higher Th17 cell level in the circulating blood than did parous controls [38].
B cells in the endometrium and decidua
B cells are very scarce in the human endometrium and decidua [14]. Their role has not yet been explored.
NK cells in the endometrium and decidua
NK cells are one of the key components of the innate immune system and eliminate virus-infected cells and cancer cells by secretion of cytotoxic products such as granzyme and perforin. Endometrial NK cells show different phenotypic characteristics from NK cells in the peripheral blood. Peripheral blood NK cells represent 10-15% of all lymphocytes, and the majority of them express CD56 + CD16 + , a cytotoxic subset. On the other hand, most endometrial NK cells are CD56 + CD16 − , a less toxic cell population. During a menstrual cycle, endometrial NK cells increase in number in the secretory phase as compared to the proliferative phase [39]. However, the proportion of human endometrial NK cells remains constant during the cycle [15].
In the late secretory phase and early pregnancy, the percentage of endometrial/decidual NK cells rapidly increases up to 70% of uterine leukocytes [4]. The number of endometrial and decidual NK cells starts to increase in the mid-secretory phase and early pregnancy, reaches a peak at the end of the first trimester, and then decreases as the fetus approaches term (Figure 5) [39]. These findings suggest that uterine NK cells play an important role in the establishment and maintenance of pregnancy. Most endometrial NK cells of non-pregnant women do not express CD16 (a cytotoxicity marker), NKp30, NKp44 (cell activation markers), or L-selectin (an adhesion molecule) [40]. However, they are characterized by expression of other activation markers such as HLA-DR, CD69, NKp46, and NKG2D. NK cells at this stage are functionally less cytotoxic and produce only small amounts of cytokines. However, secretory endometrial NK cells have the potential to proliferate.
If implantation takes place, endometrial cells secrete IL-15, which causes endometrial NK cells to differentiate into decidual NK cells. Decidual NK cells begin to increase the production of cytokines, growth factors, and angiogenetic factors [41] (Figure 6).
Decidual NK cells increase blood flow at the fetomaternal interface by remodeling of spiral arteries and help migration of trophoblasts. Some angiogenetic factors (VEGF, PLGF, Angiopoietin-2, and NKG5) are produced by decidual NK cells. They also secrete various cytokines and growth factors such as TNF-α, IL-10, GM-CSF, IL-1β, TGF-β1, CSF-1, LIF, and IFN-γ [42]. Through this mechanism, decidual NK cells seem to contribute to embryo implantation and decidualization of the endometrium (Figure 6) [41]. Several theories have been proposed for the origin of uterine NK cells. Among them, the most popular theory is the trafficking of peripheral blood NK cells. Endometrial chemokines have been suggested to recruit peripheral blood CD56 bright CD16 − NK cells into the uterus. TGF-β may convert CD56 + CD16 + NK cells into CD56 bright CD16 − NK cells. Many chemokines are expressed in the human endometrium (CCL4, CCL5, CCL7, CCL13, CCL19, CCL21, CXCL9, CXCL10, IL-15). The chemokine receptors responding to these include CCR5, CCR7, CXCR3, CXCR4, and IL-15Rα chain [43]. Theories of in situ proliferation of uterine NK cells and of uterine NK cell differentiation from hematopoietic precursors have also been proposed [40].
Conclusion
The human endometrium undergoes significant changes to prepare for implantation of an embryo during the peri-implantation period. The immune cells also alter in number and function systemically and locally in the luteal phase. This modification of the reproductive and immune systems is profoundly influenced by several hormones, especially ovarian steroids.
Many innate and adaptive immune cells seem to play a significant role in establishment and maintenance of pregnancy. This preparation of the endometrium and immune system for pregnancy begins before the implantation occurs. As a result, significant biologic and immune modifications have already taken place before the window of implantation. This process is regulated by the complicated and delicate interaction of the endocrine-immune system.
Further studies are warranted to solve the enigma of the establishment of implantation and maintenance of successful pregnancy.
|
2014-10-01T00:00:00.000Z
|
2011-09-01T00:00:00.000
|
{
"year": 2011,
"sha1": "9f6f8020ce952b6d6b7bf4521b7ec4bc787db37d",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc3283071?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "9f6f8020ce952b6d6b7bf4521b7ec4bc787db37d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
14458019
|
pes2o/s2orc
|
v3-fos-license
|
Dietary Habits of Type 2 Diabetes Patients: Variety and Frequency of Food Intake
The objective of the present study was to observe the dietary patterns and food frequencies of type 2 diabetes patients attending the clinics of the Family Practice Center of the University of Sri Jayewardenepura, located in a highly urbanized area in Sri Lanka. An interviewer administered questionnaire based cross-sectional study was conducted among randomly selected 100 type 2 diabetes patients [age 35–70 years; mean age 55 ± 9 (males = 44; females = 56)]. The data were analyzed by SPSS version 18.0 software. Vegetables, fatty foods, and poultry consumption were in accordance with the national guidelines. A significant percentage (45.5%) consumed rice mixed meals for all three meals and only 67% consumed fruits at least once a day. Majority (71%) consumed full-cream milk and sugar intake (77%) was in accordance with the guidelines. Noncaloric sweetener usage was nonexistent. Daily green leafy vegetable intake and the quantity consumed were inadequate to obtain beneficial effects. From the study population, 44% [females 50%; males 36%] of the patients were either overweight or obese. However, only 60% of those patients accepted that they were either overweight or obese. Only 14% exercised daily while 69% never exercised. Study revealed the importance of educating patients with type 2 diabetes on dietary changes and more importantly the involvement in regular physical exercises.
Introduction
Diabetes mellitus is a group of metabolic diseases characterized by hyperglycemia that result from defects in insulin secretion, action, or both. Improved glycaemic control among type 2 diabetes patients is vital in preventing micro-and macrovascular complications. Type 2 diabetes has become a severe global health threat with increasing incidence in Asian countries [1]. Sri Lanka is at high risk of diabetes with one in five adults having either diabetes or prediabetes [2]. Several studies emphasize the importance of good dietary practices in management of diabetes mainly by reducing the body weight as a strong relationship between obesity and the upsurge of diabetes [3] is reported. Management is advocated through reduction of sugar and fat intake, increased dietary fibre consumption, and involvement in regular exercise [3,4].
Urbanization has led many Sri Lankans towards a stressful, unhealthy life style, mainly altering their dietary patterns from the consumption of fresh, healthy food to more refined carbohydrates and high fat containing junk food and beverages. Thus unhealthy dietary habits and sedentary life style are among the leading causes of obesity and diabetes in Sri Lanka [5]. Almost 70% of a Sri Lankan study population exceeded the upper limit of the recommendations for starch intake [6] and consumed 9 times more saturated fats compared to polyunsaturated fatty acids (PUFAs) [7]. More than 80% of the Sri Lankan adult population between the ages of 15 and 64 does not meet the recommended servings of fruits and vegetables. The recommended consumption of 5 portions of fruits and vegetables/day is practiced only by 3.5% of the Sri Lankan population [6]. The annual consumption of sugar by a Sri Lankan is around 30 kg, with adults consuming over 3-5 portions of added sugar daily [6]. A recent study of 17-year-old adolescents in 65 schools reported that nearly 82% of the adolescents consume sugar-sweetened soft drinks once weekly or more often while 2% were daily consumers.
Seventy-seven percent and 48% consumed sugar-sweetened carbonated drinks and sugar-sweetened fruit drinks once weekly or more often, respectively [8]. A more recent study revealed that although most of the Sri Lankan diabetes patients restrict sugar intake, they consume improper sugar alternatives (i.e., dates) in normal portion or in higher amounts without restriction. Even with the knowledge of adverse effects of sugar consumption, some of the study participants (18 out of 50) had difficulty in controlling the intake of sugar containing foods [9].
Further, a correlation between consumption of fast foods and noncommunicable diseases [10,11] is reported. A study on the attitude of working women in Sri Lanka towards fast food consumption with reference to perceived taste, quality, nutrition value, convenience, and price found nonsignificant positive correlations with perceived taste and nutritional value [12]. Perceived convenience was the only significant factor for the increased fast food consumption among Sri Lankan working women [12].
The mean energy intake of type 2 diabetes patients in Sri Lanka was 1438 (SD 412) kcal/day. From the total energy intake, 68.1, 11.5, and 20.2% were from carbohydrate, protein, and fat, respectively. It was found that these patients consumed an energy restricted, high-carbohydrate, low fat diet compared to diabetes patients in Western countries [13]. Even though data are available on the dietary patterns and food frequencies of diabetes patients in Sri Lanka the information on macronutrient intake is scarce, except for the previously mentioned quantitative pilot study. Understanding the factors leading to improved glycaemic control would help professionals to identify major determinants of diabetes management. Therefore the objective of the present study was to identify the dietary patterns and food frequencies of type 2 diabetes patients attending the clinics of the Family Practice Center of the University of Sri Jayewardenepura, Nugegoda, Sri Lanka.
Methodology
A cross-sectional study was conducted among randomly selected type 2 diabetes patients (n = 100) between the ages of 35 and 70 years with a mean age of 55 ± 9 (males = 44; females = 56). Type 2 diabetes patients were identified from among those attending the clinic of the Family Practice Center of the University of Sri Jayewardenepura and employees of the University. Patients with severe medical and surgical complications were excluded.
Procedure.
Study was an interviewer administered questionnaire based cross-sectional study. Data related to (i) sociodemographic data, weight, and height, (ii) awareness on overweight or obesity, (iii) consumption of main and other meals, (iv) consumption of green leaves, (v) consumption of sweetened foods, (vi) prevalence of diabetes related complications, and (vii) duration and type of physical exercises engaged in were collected using the questionnaire.
Ethical Approval.
Ethical approval was obtained from the Ethics Review Committee of the Faculty of Medical Sciences, University of Sri Jayewardenepura (Approval number 632/12). Informed written consent was obtained from each volunteer prior to commencement of the study.
Consumption of Main and Other Meals.
Of the study population 98% consume all three meals on time and 34% ingest ≥5 small frequent meals/day as recommended by the Sri Lankan clinical practice guidelines for the management of diabetes [14]. High glycaemic index (GI) meals like bread [15], short eats, string hoppers [16], pittu (a steam cooked food made with rice flour and coconut), and so forth [17] are consumed by 17.8% for ≥5 days/week as breakfast while 8.9% consumed boiled legumes (low GI [17]) for breakfast. Consumption of rice meal as breakfast for all 7 days of the week was by 45.5%. Majority (72.2%) of the population consumes rice as breakfast for ≥4 days of the week.
All consumed rice meals, which elicit a lower or medium glycaemic response irrespective of the rice variety [18], for lunch on all days of the week except one. Consumption of high GI foods (white bread [15], string hoppers [16], etc.) for dinner was by 13.3% (for 3 or more days/week) while only 4.4% relied on light diets (soup/soup with biscuits/biscuits with milk/small portion of a main meal/fresh fruits or vegetables, etc.) for ≥4 days/week. Rice consumption for dinner on all days of the week was by 45%, and these patients consumed rice for all three meals. From the population, 83% consumed raw red rice (67%), raw nadu (14%), or parboiled white/red rice (2%). Only 67% consumed fruits at least once a day while 100% consumed vegetables at all three meals for all 7 days. When considering consumption of eggs, 30% of the population consumed an egg/day for 2 or more days per week while 24% refrained from consuming eggs. Seventy percent of the study population consumed fewer than 2 eggs per week. The average chicken, pork, beef, and mutton consumption was 68%, 2%, 2%, and 1% at least once a week, respectively. A majority of the study population does not consume pork (87%), beef (93%), or mutton (92%). The consumption of chicken at least once a week was by 38%. All nonvegetarians (97%) consumed fish once or more than once a week. Fried foods/short eats and so forth, for 2 or more times/week, were consumed by 33% while 51% consumed them once or less than once per week. Full-cream milk is consumed by 71% and 22% consumed nonfat milk (Figure 1).

Consumption of Green Leaves.

Green leaves were consumed as mallum, curry, sambol (chopped leaves mixed with scraped coconut, green chillies, salt, and lime juice), and porridge. Sixty-four percent consumed green leaves daily and 35% consumed them 1-3 days/week. Sixty-seven percent (67%) of patients consumed leaves prepared by any method (mallum, curry, sambol, and porridge), while 33% consumed green leaves prepared by one, two, or three of the above methods. Herbal porridge consumption as part of the meal for at least 2-3 days per week was practiced by 23%, 41% consumed porridge 1-4 times/month, and 33% did not consume porridges, even though all herbal porridges elicit a low glycaemic response [19][20][21] and are rich in antioxidants [22]. Although the use of green leaves as sambol, mallum, curry, or porridge is a long-standing dietary practice, a change in the selection of the leaf varieties used, in order to use leaves as a remedy, was observed after diagnosis of diabetes in 58% of the study population.

Sambol, mallum, curry, extract (crude extract or tea), and porridge are used as herbal remedies, and 58% of the population relies on herbal remedies in some form (Figure 2), not because of prescription by ayurvedic physicians. Among them, 96% consumed herbal remedies because of knowledge gained from other patients or folklore, and only 4% relied on research data published in the mass media. Among patients who relied on herbal remedies, the majority (85%) consumed leaves as sambol, mallum, or curry, 31% consumed them as an extract, and only 8% consumed them as porridge.
From the patients who rely on herbal remedies, 75% consume Cheilocostus speciosus (thebu) as a salad or extract. Scoparia dulcis (walkoththamalli), Nyctanthes arbor-tristis (sepalika) flowers, and Artocarpus heterophyllus (kos) leaves are also commonly ingested as water extracts, while Cephalandra indica (kowakka), Murraya koenigii spreng (karapincha), and Adenanthera pavonina (madatiya) are used as porridge. However, only 37% perceived a reduction of symptoms due to these remedies. None reported allergies to any leaf variety they consumed. Only 9% used other commercially available herbal products, infrequently, mainly green tea.

Consumption of Sweetened Foods.

Consumption of any sweetened food or beverage once or more than once/day was by 3% of the population, while 77% consumed any sweetened food or beverage once or less than once/week. However, none of the patients in the study used noncaloric sweeteners. Two-thirds (66%) of the population did not use sugar for tea and only 8% used more than 2 teaspoons of sugar for tea. The amount of sugar obtained from other foods was approximately less than 2 teaspoons/day for 56% of the population.
Physical Exercises.
Only 14% of the study population exercised daily while 69% never exercised. Patients who exercised daily were also not following the recommended guidelines on time and activity level (jogging, brisk walking, dancing, gardening, etc.; 150 minutes per week). The majority (99%) of the patients were of the view that "day-to-day activities" (cooking, sweeping, shopping, washing, duty at the work place, etc.) were sufficient exercise.
Discussion.
The Canadian Diabetes Association states that 80-90% of DM patients in the world are overweight or obese [23]. However, no recent data are available on the prevalence of overweight/obesity among diabetes patients in Sri Lanka. Among the overweight and obese subjects (44%) in the present study, 40% denied that they fell into either of these categories. Among the rest, many were unaware whether their BMI was normal or not. This misperception of body weight was observed in a previous study as well: nearly 66% and 44.7% of males and females who were overweight, and over one-third of males and females who were obese, considered themselves to be of "about right weight" [24]. The reasons for overweight and obesity of Sri Lankans could be the lack of knowledge on proper diet control, consumption of highly refined carbohydrate-containing foods which are conveniently available [7] prior to diagnosis of diabetes, thus exceeding the recommendations [6], lack of physical exercise [6] even after diagnosis, and consumption of diets low in omega 3 fatty acids [25]. Consumption of 5 portions of fruits and vegetables/day is practiced only by 3.5% of the healthy Sri Lankan population [6]. However, the present study revealed this practice to be more prevalent (45.5%) among diabetes patients. As fatty food consumption of these patients was low, Sri Lankan diets with larger carbohydrate portions, which exceed the upper limit of the recommendations for starch intake [6], and more importantly less exercise might be the contributory factors for overweight, obesity, and other diabetes complications among Sri Lankan diabetes patients. Rice mixed meals were the most consumed food among the study population. However, irrespective of rice mixed meals having a low GI, if the rice portion is large the glycaemic load would be high and can lead to a high glycaemic response [16]. Fish consumption was the highest among animal proteins. Chicken was the meat more commonly consumed. Eggs were consumed by the majority but within the recommendations (3-4 eggs/week). Thus the animal product consumption, which contributes to saturated fat and cholesterol intake, could be a minor risk factor for the development of further complications in this population. Thirty-seven out of 50 participants of a study conducted in 2015 were unaware of their protein intake and some were under the impression that they should discontinue meat consumption [9]. The practice of nonfat milk consumption was low, mainly owing to its lower palatability.
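To make the portion-size argument concrete, the following is a minimal sketch in Python (illustrative values only: the GI figure, carbohydrate portion, and thresholds below are assumptions for illustration, not data from this study). It computes the glycaemic load of a rice portion and a WHO BMI category.

# Minimal sketch (illustrative only, not data from this study): glycaemic load of a
# rice portion and WHO BMI categorisation. GI values and portion sizes are assumptions.

def glycaemic_load(gi, available_carbs_g):
    # Glycaemic load = GI x available carbohydrate (g) per portion / 100
    return gi * available_carbs_g / 100.0

def bmi_category(weight_kg, height_m):
    # WHO cut-offs in kg/m^2
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    elif bmi < 25.0:
        return "normal"
    elif bmi < 30.0:
        return "overweight"
    return "obese"

# A medium-GI rice (assumed GI ~ 60) eaten as a large portion (~90 g carbohydrate)
# still gives a high glycaemic load (a per-meal GL of 20 or more is commonly called high).
print(glycaemic_load(60, 90))      # 54.0
print(bmi_category(72.0, 1.60))    # overweight (BMI ~ 28.1)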
Sweetened foods are responsible for an elevation of blood glucose. Persistent high blood glucose levels lead to the formation of advanced glycated end products responsible for diabetes related macrovascular and microvascular complications. Excess glucose is deposited as fat in adipose tissues causing overweight or obesity [26]. Therefore, limitation of sweetened foods or drinks consumption is a key step in the guidelines for the control of diabetes. Though the reported sugar consumption among Sri Lankans is significantly high [6] most tend to reduce sugar consumption after being diagnosed with diabetes. Thus, consumption of any sweetened food or beverage once or more than once a day was rare and only one-quarter reported to consume any sweetened food or beverage once or more than once a week. Accordingly the sugar intake of the diabetes patients in the present study was in accordance with the guidelines [14]. This "sugar" restriction could be due to the advice by health care professionals [9]. Seventy-two percent of diabetes patients in a study conducted in 2015 believed that diabetes arises due to high sugar intake during earlier stage in life and thus have completely avoided sugar consumption [9]. However, their knowledge on food types which elevate blood glucose concentration could be low, as some of them consume sugar alternatives (i.e., dates) in normal or even in higher amounts without restriction [9]. Noncaloric sweetener usage was not popular among the Sri Lankan diabetes patients. Most of them were unaware of the availability of noncaloric sweeteners, some were not satisfied with regard to safety and some disliked the taste. Educating patients regarding noncaloric sweeteners may further reduce the sugar consumption and increase the quality of life of diabetes patients.
A study revealed that an increase of one serving of green leafy vegetable consumption per day is associated with a modestly lower hazard of diabetes [27]. Consumption of leafy greens contributes to the intake of certain vitamins and antioxidants and thus combats deficiencies that could arise with diabetes [28]. Green leaves prepared as mallum, curry, sambol, and porridge were consumed by the diabetes population. However, this study showed that one-third (35%) of Sri Lankan diabetes patients do not consume the recommended intake of green leafy vegetables, which could otherwise contribute to reducing carbohydrate intake and absorption.
Sri Lankan diabetes patients believe that certain foods such as fenugreek seeds (uluhal), Coscinium fenestratum or yellow vine (venivelgeta), curry leaves and powder, bitter-gourd, passion fruit leaves, finger millet, jack fruit leaves, wood apples, Scoparia dulcis (walkoththamalli) porridge, dried night jasmine flower, Wattakaka volubilis leaves (anguna kola), Costus speciosus (thebu) leaves, and ceylon olive fruit (veralu) are effective in reducing blood glucose [9]. Ninety percent of Sri Lankan diabetes patients are reported to use one or more herbs as part of a meal or as a medication to control diabetes [29]. However, only 58% of the present study population consumed herbal remedies, which could be due to the lack of availability of certain herbs in the urban and suburban areas of the Colombo District. Most patients tend to follow the remedies that others follow, in an ad hoc manner, possibly because of limited knowledge of research data and limited access to data sources. As some herbs cause toxic effects with long-term consumption, patients need to be educated on the use and the dose of herbal remedies.
Although a previous study revealed that 60% of patients were knowledgeable that regular exercise could control blood glucose [30], only 14% of the present study population engaged in regular exercise [14]. Among the individuals who reported that they exercise, most considered "walking" for any other purpose also as exercising. Similar results were obtained in another study, which revealed that a considerable number of diabetes patients (18/50) were satisfied with their normal regular household work and considered these activities as "physical activities" [9].
This study also revealed that a significant number of the diabetes patients are not satisfied with their quality of life because of the symptoms that arise with the fluctuations of blood glucose, which they could not control owing to lack of knowledge on diet management. A previous study indicated that the prevalence of neuropathy, nephropathy, retinopathy, coronary vascular disease, stroke, peripheral disease, and hypertension among newly diagnosed diabetes patients was 25.1%, 29%, 15%, 21%, 5.6%, 4.8%, and 23%, respectively [31]. As these complications aggravate with the duration of diabetes, patients should be educated to identify and to control these symptoms by diet, medicines, exercise, and frequent medical checkups. A recently carried out study revealed that, although 70% of Sri Lankan T2DM patients have a "good" or "very good" (score > 65) knowledge on diabetes compared to other South Asian countries, more than 90% of patients could not recognize the symptoms of hypoglycaemia/hyperglycaemia [30]. Therefore, improving patients' knowledge of diabetes-related signs and symptoms, diet control, and self-management of diabetes could uplift their quality of life. Since this study was conducted at a single study center, the results do not reflect the diet patterns of the general population and further studies are required to confirm these outcomes.
Conclusion
Vegetables, fatty foods, and poultry consumption of diabetes patients of the present study were in accordance with the guidelines. Majority consumed full-cream milk and sugar intake was in accordance with the guidelines. The noncaloric sweetener usage was nonexistent and the majority was unaware of the availability of noncaloric sweeteners or foods made with noncaloric sweeteners. A significant percentage (45.5%) of diabetes patients consumed rice mixed meal for all three meals and consumption of five portions of fruits and vegetables/day was higher compared to the reported data of normal population. The daily green leafy vegetable intake and the quantity consumed were inadequate to obtain beneficial effects which can be correlated to the high rate of some complications and comorbidities. A considerable number of diabetes patients were unaware that they were either overweight or obese and some of the patients who were informed that they were obese or overweight were reluctant to accept the categorization. Also the patients were not aware of the wealth of scientific information available on many foods and herbal remedies.
|
2018-04-03T02:03:20.033Z
|
2016-12-22T00:00:00.000
|
{
"year": 2016,
"sha1": "8609881296014e494c6576233f0ab2c385f9604d",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jnme/2016/7987395.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8609881296014e494c6576233f0ab2c385f9604d",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
239768458
|
pes2o/s2orc
|
v3-fos-license
|
Localised Dirac eigenmodes and Goldstone's theorem at finite temperature
I show that a finite density of near-zero localised Dirac modes can lead to the disappearance of the massless excitations predicted by the finite-temperature version of Goldstone's theorem in the chirally broken phase of a gauge theory.
Introduction
Ample evidence from lattice calculations shows that the lowest modes of the Euclidean Dirac operator $\slashed{D}$ are localised in the high-temperature phase of QCD [1][2][3][4] and of other gauge theories [5][6][7][8][9][10][11][12][13] (see [14] for a recent review). Localised modes are supported essentially only in a finite spatial region whose size does not change as the system size grows. In contrast, delocalised modes extend over the whole system and keep spreading out as the system size is increased. The distinction is made quantitative by the scaling with the spatial volume $V_s$ of the inverse participation ratio (IPR), which for a normalised eigenmode $\psi(x)$ reads $\mathrm{IPR} = \int d^{d+1}x\,\|\psi(x)\|^4$, where $d$ is the spatial dimension of the system, $\|\psi(x)\|^2 = \sum_{c,d}|\psi_{c,d}(x)|^2$ is the local amplitude squared of the mode summed over colour ($c$) and Dirac ($d$) indices, and $\int d^{d+1}x = \int_0^{1/T} dx_0 \int d^d x$, with $T$ the temperature of the system. Assuming that $\psi(x)$ is non-negligible only in a region of size scaling like $V_s^{\alpha}$, one can easily estimate that $\mathrm{IPR} \sim V_s^{-\alpha}$. For localised modes $\alpha = 0$, while for delocalised modes $0 < \alpha \le 1$.
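As a purely illustrative aside, the following minimal Python sketch (with assumed toy mode profiles, not the lattice data of the works cited above) shows how the IPR separates the two behaviours: it stays roughly constant with the volume for a localised profile and shrinks like the inverse volume for a delocalised one.

# Purely illustrative sketch (assumed toy mode profiles, not an actual lattice analysis):
# the IPR stays constant with the volume for a localised profile and decreases roughly
# like 1/V for a delocalised one.
import numpy as np

def ipr(psi2):
    # psi2: local amplitude squared ||psi(x)||^2 with colour/Dirac indices already summed,
    # normalised so that psi2.sum() == 1; the IPR is then sum_x ||psi(x)||^4.
    return float(np.sum(psi2 ** 2))

rng = np.random.default_rng(0)
Nt = 4
for Ns in (8, 16, 32):
    V = Nt * Ns ** 3
    deloc = rng.random(V)
    deloc /= deloc.sum()              # roughly uniform over the whole volume
    loc = np.zeros(V)
    loc[: 4 ** 4] = 1.0 / 4 ** 4      # supported on a fixed-size region
    print(Ns, ipr(deloc), ipr(loc))   # first value shrinks with V, second does not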
Lattice studies show the same situation in a variety of gauge theories, with different gauge groups and in different dimensions (also using different fermion discretisations): while delocalised in the low-temperature, confined phase, low modes are localised in the high-temperature, deconfined phase up to some critical point in the spectrum, above which they are again delocalised. Localisation is a well-known phenomenon in condensed matter physics, commonly appearing in disordered systems [15]. Technically, the Dirac operator can indeed be seen as (up to a factor of $i$) the Hamiltonian of a disordered system, with disorder provided by the fluctuations of the gauge fields. It is then not surprising that the features of localisation observed in gauge theory are analogous to those found in condensed matter systems: for example, at the "mobility edge" $\lambda_c$, where localised modes turn into delocalised modes, one finds a second-order phase transition along the spectrum ("Anderson transition") [16] with critical spectral statistics [17] and multifractal eigenmodes [18], exactly as in condensed matter systems [19].
The physical consequences of localisation in disordered systems are clear: most notably, localisation of electron eigenmodes leads to the transition from conductor to insulator in a metal with a large amount of impurities [15]. The situation is instead not so clear for gauge theories, where the physical meaning of the localisation of Dirac modes has proved to be more elusive. There is, however, growing evidence of an intimate connection between localisation and deconfinement: in a variety of systems with a genuine deconfinement transition, localisation of the low Dirac modes appears in fact precisely at the critical point [7][8][9][10][11][12][13]. This is true even for the simplest model displaying a deconfinement transition, namely 2+1 dimensional $\mathbb{Z}_2$ gauge theory [12]. Theoretical arguments for this behaviour have also been discussed in the literature [20][21][22]. This connection could help in better understanding confinement and the deconfinement transition.
Still, one would like to find a more direct physical interpretation for localisation in gauge theories. This may seem a hopeless task, given that no physical meaning is attached to individual points, or even regions, of the Dirac spectrum, with observables obtained only by integrating over the whole spectrum. A notable exception to this state of affairs is the chiral limit: in this case the point $\lambda = 0$ is singled out as only near-zero modes are physically relevant, and the localisation properties of these modes may have direct physical implications. In particular, one wonders how a finite density of near-zero localised modes can affect (if at all) the usual picture of spontaneous chiral symmetry breaking and generation of Goldstone excitations. While I know of no model where such a scenario has been demonstrated, there are intriguing hints in (i) 2+1 flavour QCD towards the chiral limit, and (ii) SU(3) gauge theory with $N_f = 2$ massless adjoint fermions.
(i) A peak of localised near-zero modes has been observed in overlap spectra computed in HISQ backgrounds for near-physical light-quark mass right above the crossover temperature [23]. This peak persists also for lighter-than-physical light-quark masses [24,25], but the localisation properties are not known in that case. It is possible that this peak will survive and the localised nature of the modes will not change in the chiral limit.
(ii) SU(3) gauge theory with $N_f = 2$ massless adjoint fermions displays an intermediate, chirally broken but deconfined phase [26,27], where a nonzero density of near-zero Dirac modes is certainly present. As the theory is deconfined, one expects these modes to be localised.
Localised modes and Goldstone's theorem at zero temperature
It is instructive to discuss first the case $T = 0$. Consider a gauge theory with $N_f$ degenerate flavours of fundamental quarks of mass $m$. In such a theory, as a consequence of the Banks-Casher relation [28] and of Goldstone's theorem [29], a nonzero density of near-zero modes in the chiral limit implies the spontaneous breaking of chiral symmetry down to $\mathrm{SU}(N_f)_V$, and in turn the presence of massless pseudoscalar Goldstone bosons in the particle spectrum. However, one should say more precisely "delocalised near-zero modes": in fact, it has been known for quite some time [30,31] that if the near-zero modes are localised then the Goldstone bosons disappear. To see this in the case at hand, one uses the $\mathrm{SU}(N_f)_A$ (axial nonsinglet) Ward-Takahashi (WT) identity, Eq. (2), relating the divergence of the axial-vector-pseudoscalar correlator to the pseudoscalar-pseudoscalar correlator and to a contact term proportional to the chiral condensate, where $A^a_\mu = \bar\psi\gamma_\mu\gamma_5 T^a\psi$, $P^a = \bar\psi\gamma_5 T^a\psi$, and $\Sigma = \frac{1}{N_f}\langle\bar\psi\psi\rangle$, with $\gamma_\mu$ and $\gamma_5$ the Euclidean Hermitian gamma matrices and $T^a$ the generators of $\mathrm{SU}(N_f)$ in the fundamental representation normalised as $2\,\mathrm{tr}\,T^aT^b = \delta^{ab}$, and $\langle\ldots\rangle$ the Euclidean expectation value. In momentum space Eq. (2) becomes a relation between $G_{A_\mu P}(p)$, the Fourier transform of the axial-vector-pseudoscalar correlator, and $G_P(p)$, defined similarly from the pseudoscalar correlator. In the limit $m \to 0$, one finds near $p = 0$ that the identity is controlled by the combination $\Sigma - R$, with $\Sigma$ denoting from now on the chiral condensate in the chiral limit and $R$ the "anomalous remnant" obtained from $2m\,G_P$ at zero momentum in the chiral limit. If $\Sigma - R \neq 0$, $G_{A_\mu P}$ has a pole at zero momentum, implying the existence of massless bosons. If $G_P$ behaves reasonably as a function of $m$ in the chiral limit then $R = 0$, and massless bosons are present if chiral symmetry is spontaneously broken by a nonzero chiral condensate $\Sigma$. However, as I show below in Section 4, if there is a finite density of localised near-zero modes then $G_P$ generally diverges like $1/m$ in the chiral limit. This divergence leads to a nonzero $R$ proportional to the density of localised near-zero modes, by cancelling the factor of $m$ in a way reminiscent of how UV anomalies are formed. In particular, if a finite mobility edge is found in the chiral limit then the "anomalous remnant" $R$ cancels $\Sigma$ exactly, removing the pole from $G_{A_\mu P}$, and so the Goldstone bosons from the spectrum. A non-vanishing anomalous remnant allows one to evade Goldstone's theorem. In fact, the anomalous remnant leads to chiral symmetry being explicitly broken in the chiral limit, with the resulting modification of the usual WT identity showing that the axial-vector current is not conserved. Current conservation is a fundamental hypothesis of the theorem, and since it does not hold the theorem does not apply.
Localised modes and Goldstone's theorem at finite temperature
The argument discussed above is not really relevant to realistic gauge theories (e.g., QCD and QCD-like theories), where no localised near-zero modes have been observed at $T = 0$. However, it suggests a general strategy to study the physical effects of localisation in the chiral limit also at finite temperature: relate the properties of the Euclidean Dirac spectrum with those of the physical spectrum using the axial nonsinglet WT identity Eq. (2), which holds also at $T \neq 0$. In this case, due to technical reasons related to the breaking of O(4) invariance in the Euclidean setting, the physical spectrum is accessed more naturally by reconstructing the axial-vector-pseudoscalar spectral function (see [32]) from the Euclidean correlators; the spectral function is defined from the real-time commutator $\langle[\hat{A}^a_\mu(x), \hat{P}^b(0)]\rangle$, where $\langle\ldots\rangle$ denotes the (real-time) thermal expectation value, and $\hat{A}^a_\mu$ and $\hat{P}^a$ are the Minkowskian axial-vector and pseudoscalar operators. Using the WT identity Eq. (2) and the symmetry and analyticity properties of the correlation functions, one finds in the chiral limit at zero momentum [34] that the spectral function contains a singular term proportional to $(\Sigma - R)\,\delta(\omega)$ [Eq. (6)], where $\Sigma$ and $R$ are now computed at finite temperature, i.e., compactifying the Euclidean time direction to size $1/T$, and in particular $R$ is again obtained from $2m\,G_P$ at zero momentum in the chiral limit. The Dirac delta in Eq. (6) indicates the presence of massless quasi-particle excitations in the spectrum, as long as its coefficient is nonzero. Similarly to the zero-temperature case, if $G_P$ is sufficiently well-behaved in the chiral limit then $R = 0$, and spontaneous breaking of chiral symmetry by a finite $\Sigma$ leads to massless excitations in the spectrum. This is the finite-temperature version of Goldstone's theorem (see [35] and references therein). As shown below in Section 4, localised near-zero modes can lead to a nonzero $R$, which can remove these Goldstone excitations from the spectrum. Again, a finite anomalous remnant indicates explicit breaking of chiral symmetry in the massless limit, so that the axial current is not conserved and Goldstone's theorem at finite temperature is evaded.
It is also assumed that there is no transport peak in the pseudoscalar channel. This is expected on general grounds, and supported by numerical lattice results (see [33]).
Localised modes and the pseudoscalar correlator
I now show that a nonzero $R$ is generally found in the presence of a finite density of localised near-zero modes [34]. Since UV divergences play a very limited role, the argument can be carried out safely (and more simply) in the continuum. One starts from the bare pseudoscalar correlator $\Pi(x) \equiv \langle P(x)\,P(0)\rangle$ at temperature $T$, in a finite spatial volume $V_s$ and for finite (bare) mass $m$, written in terms of a double sum over Dirac modes. Here $\slashed{D}\psi_n = i\lambda_n\psi_n$, with the eigenmodes $\psi_n$ obeying antiperiodic (resp. periodic) temporal (resp. spatial) boundary conditions and normalised to 1; the summand involves eigenmode bilinears of the form $\psi_n(x)^{*}\,\Gamma\,\psi_{n'}(x)$ (with colour and Dirac indices contracted), and a UV cutoff on $n, n'$ is understood to be in place. After renormalisation of the mass and of $\Pi$, including the removal of the divergent contact terms CT, one can take the thermodynamic and chiral limit (in this order) to find the expression, Eq. (9), for the coefficient of the $1/m$ divergence of $\Pi(x)$, in which the primed sum ${\sum_n}'$ runs over nonzero modes and $\mu$ is a fixed but arbitrary mass scale, which will eventually play no role. Exact zero modes have been dropped since they are negligible in the thermodynamic limit. Modes outside of a neighbourhood of $\lambda = 0$ also become negligible in the chiral limit, leading in particular to the absence of divergent contact terms.
The quantity in Eq. (9) can be nonvanishing only if $\Gamma$ survives the thermodynamic limit, and here the localisation properties of the eigenmodes play a crucial role. In fact, using the Schwarz inequality and translation invariance, one can bound the eigenmode correlators entering $\Gamma$ in terms of the spectral density and of the IPR. Making the dependence of $\Gamma$ on the spectral position $\lambda$ explicit by writing $\Gamma(\lambda)$, one then finds a bound involving $\rho(\lambda) \equiv \frac{T}{V_s}\big\langle{\sum_n}'\,\delta(\lambda - \lambda_n)\big\rangle$ and $\mathrm{IPR}(\lambda) \equiv \frac{T}{V_s}\big\langle{\sum_n}'\,\delta(\lambda - \lambda_n)\,\mathrm{IPR}_n\big\rangle / \rho(\lambda)$, which are the spectral density at finite mass and volume and the average IPR computed locally in the spectrum, respectively. If modes near $\lambda$ are supported in a region of size scaling like $V_s^{\alpha(\lambda)}$, one has $\mathrm{IPR}(\lambda) \sim V_s^{-\alpha(\lambda)}$, and so $\Gamma(\lambda) \to 0$ in the thermodynamic limit unless the modes are localised, i.e., unless $\alpha(\lambda) = 0$. (The zero-temperature case is obtained by setting the calculation in a finite four-volume $V_4$, replacing $T/V_s \to 1/V_4$ in the formulas below, and eventually taking the limit $V_4 \to \infty$.)
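For illustration only, a minimal Python sketch follows of how $\rho(\lambda)$ and the locally averaged $\mathrm{IPR}(\lambda)$ can be estimated by binning eigenvalues; the synthetic data, bin choices, and volume normalisation are assumptions for this sketch, not the procedure or data of the works cited above.

# Illustrative sketch only (synthetic eigenvalue data, assumed normalisation by the
# spacetime volume): estimate the spectral density rho(lambda) and the spectrally
# local average IPR(lambda) by binning eigenvalues.
import numpy as np

def local_spectral_averages(eigvals, iprs, edges, spacetime_volume):
    counts, _ = np.histogram(eigvals, bins=edges)
    widths = np.diff(edges)
    rho = counts / (spacetime_volume * widths)     # modes per unit lambda and volume
    ipr_sum, _ = np.histogram(eigvals, bins=edges, weights=iprs)
    avg_ipr = np.divide(ipr_sum, counts, out=np.zeros_like(rho), where=counts > 0)
    return rho, avg_ipr

rng = np.random.default_rng(1)
lam = np.abs(rng.normal(0.0, 0.1, 5000))           # toy eigenvalues
ipr_vals = np.where(lam < 0.05, 1e-2, 1e-4)        # near-zero modes: localised-like IPR
rho, avg_ipr = local_spectral_averages(lam, ipr_vals, np.linspace(0.0, 0.4, 21), 4 * 32 ** 3)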
The anomalous remnant $R$ is now obtained by integrating Eq. (9) over Euclidean spacetime. Assuming that localised modes are present in the interval $[0, \lambda_c(m)]$, one obtains Eq. (13), in which $\rho_{\mathrm{loc}}(0)$ is the density of localised near-zero modes and the coefficient $a$ is a function of the renormalisation-group-invariant ratio $\lambda_c(m)/m$ in the chiral limit. In obtaining Eq. (13) one exploits the localised nature of the modes to exchange the order of integration, chiral limit, and thermodynamic limit, as well as the orthonormality of the Dirac modes. As anticipated, $R$ is proportional to the density of localised near-zero modes. The quantity $a \in [0, 1]$ depends on how the mobility edge scales in the chiral limit: $a = 0$ if it vanishes faster than $m$, $0 < a < 1$ if it vanishes like $m$, and $a = 1$ if it vanishes more slowly than $m$, including not vanishing at all. The arbitrary scale $\mu$ does not appear in the final expression, as expected.
Localised modes and Goldstone excitations
Using Eq. (13) and the Banks-Casher relation $\Sigma = -\pi\rho(0)$ [28], where $\rho(0)$ is the density of near-zero modes (localised or otherwise) in the chiral limit, obtained from $\rho(\lambda)$ by taking limits as in Eq. (14), one finds that the singular part of the spectral function in the chiral limit is proportional to $\Sigma\big(1 - a\,\rho_{\mathrm{loc}}(0)/\rho(0)\big)\,\delta(\omega)$ [34]. One can now determine the fate of the Goldstone excitations. Since localised and delocalised modes usually do not coexist, one has $\rho_{\mathrm{loc}}(0)/\rho(0) = 1$ or $0$ depending on whether near-zero modes are localised or delocalised. There are four possible scenarios (summarised schematically after the list below). 0. Near-zero modes are delocalised: Goldstone excitations are present as long as $\rho(0) \neq 0$. This is the standard scenario predicted by Goldstone's theorem.
1. Near-zero modes are localised and $a = 0$: Goldstone excitations are present as long as $\rho(0) \neq 0$, i.e., localisation of near-zero modes has no effect on the Goldstone excitations, and the same standard scenario is found.
2. Near-zero modes are localised and $0 < a < 1$: Goldstone excitations are present if $\rho(0) = \rho_{\mathrm{loc}}(0) \neq 0$, although the coefficient of the Dirac delta is reduced compared to scenarios 0 and 1. This is qualitatively the same as the standard scenario, but differs from it quantitatively.

3. Near-zero modes are localised and $a = 1$: the anomalous remnant cancels $\Sigma$ exactly, the coefficient of the Dirac delta vanishes, and the Goldstone excitations are removed from the spectrum even though $\rho(0) = \rho_{\mathrm{loc}}(0) \neq 0$.
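This case analysis can be restated schematically as follows; the sketch is a purely qualitative restatement (not part of the original analysis) based on the coefficient factor $1 - a\,\rho_{\mathrm{loc}}(0)/\rho(0)$ discussed above, and encodes no dynamical information.

# Schematic summary only (no dynamics, just the case analysis above): the delta-function
# coefficient is taken proportional to Sigma * (1 - a * rho_loc0 / rho0).
def goldstone_scenario(near_zero_modes_localised, a):
    # a in [0, 1]: how the mobility edge scales with m in the chiral limit
    loc_fraction = 1.0 if near_zero_modes_localised else 0.0
    factor = 1.0 - a * loc_fraction
    if factor == 1.0:
        return "standard scenario: Goldstone excitations present (scenarios 0 and 1)"
    if factor > 0.0:
        return "Goldstone excitations present with reduced coefficient (scenario 2)"
    return "Goldstone excitations removed from the spectrum (scenario 3)"

for loc, a in [(False, 0.0), (True, 0.0), (True, 0.5), (True, 1.0)]:
    print(loc, a, goldstone_scenario(loc, a))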
Conclusions
I have shown how the pseudoscalar-pseudoscalar correlator generally develops a $1/m$ divergence in the chiral limit in the presence of a finite density of localised near-zero modes. This divergence leads to a finite anomalous remnant that modifies the usual form of the axial nonsinglet Ward-Takahashi identity in the chiral limit, signaling that chiral symmetry is broken explicitly even in this limit. This indicates non-conservation of the axial-vector current, and so the inapplicability of Goldstone's theorem, both at zero and at finite temperature. Depending on the detailed behaviour of the mobility edge as a function of $m$, one can either recover the standard scenario with massless excitations, possibly up to a change in the coefficient of the singular term in the spectral function, or have Goldstone excitations removed from the spectrum.
So far, the presence of localised near-zero modes in the chiral limit has not been demonstrated explicitly in any model, although there are indications that it could be a feature of the chiral limit of QCD and of SU(3) gauge theory with $N_f = 2$ flavours of adjoint fermions. It would certainly be interesting to find a model with this property, especially if it realised a non-standard scenario for Goldstone modes (i.e., cases 2 and 3 above). It would also be interesting to work out the possible signatures in the finite-mass theory originating from the realisation of a non-standard scenario in the chiral limit.
|
2021-10-26T01:16:50.777Z
|
2021-10-23T00:00:00.000
|
{
"year": 2022,
"sha1": "11e9a182f2a83743f6c2bae6fc2a9834c9c9ec0c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "11e9a182f2a83743f6c2bae6fc2a9834c9c9ec0c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
44213916
|
pes2o/s2orc
|
v3-fos-license
|
Multidisciplinary approach to the patient with cerebral palsy *
Aim: Cerebral palsy can be described as a non-progressive neurodevelopmental disability of the brain which disrupts the motor development of children. It often spans multiple areas of development, combined with intellectual disabilities and sensory defects, and affects the development of communication skills. The aim of the study was to develop a new theory using the nursing metaparadigm and the "grounded theory" method, which would demonstrate the validity and correlation of nursing care with every sphere of life (physical, psychological, social, legislative) of the affected children. Methods: Three complex case reports were worked out. Data were collected by semistructured interviews and by study of the health documentation of the participants. The complex case reports were analysed by the "grounded theory" method. Results: The information acquired offered valuable statements from which a new theory was created within the nursing metaparadigm using the "grounded theory" method. The theory describes the causal relation of nursing care to the areas of life (physical, psychological, social, legislative) of a child with the spastic form of infantile cerebral palsy, and it also points to the need for multidisciplinary cooperation of experts in neurology, physiotherapy, psychology, speech therapy, special education, and social work in the care of these patients. Conclusion: Our theory shows that quality nursing care and a comprehensive multidisciplinary approach of experts in neurology, physiotherapy, psychology, special education, and social care can improve the living conditions of children with cerebral palsy in a holistic manner.
INTRODUCTION
"Cerebral palsy is a neurological disorder caused by damage to the brain.It is characterized by damage to the motor system non -progressive" (Love, Webb, 2009, p. 309).Komarek, Zumrová et al. (2008) are defined disease of child cerebral palsy motor development of the child's disability, which non-progressive as was founded on the basis of completed prenatal, perinatal and post-natal damage to the developing brain in a timely manner.
In childhood cerebral palsy the clinical picture is a combination of impairments of intellect and the senses, epileptic syndrome, and in particular disorders of mobility (Brozman et al., 2011). The disease is typically dominated by failure of the development of gross and fine motor skills, with delays in the development of rotation, sitting, standing, and walking. Speech disorders in terms of vocal development and developmental delays are frequent in children with cerebral palsy, among them dysphasia, and logopaedic and phoniatric help is needed. Relaxation techniques, agility drills, and drills of coordinated breathing, tongue, and lips are important (Murgaš, 2004). Cerebral palsy occurs in dyskinetic, atonic, and mixed forms. The most common is the spastic form, which is divided into the diparetic form (spastic paraparesis of the lower extremities without sensitivity disorders), the hemiparetic form (impairment of the contralateral half of the body, with more damage to the upper extremity), and the quadriparetic form (with impairment of intellect and epileptic seizures) (Brozman et al., 2011). Treatment of childhood cerebral palsy comprises drugs, rehabilitation, corrective orthopaedic devices, and surgery. Supportive medications include nootropics, which enhance the use of oxygen and glucose in brain cells, and medications lowering spasticity (myorelaxants). Injection of botulinum toxin reduces spasticity of the Achilles tendons (Murgaš, 2004).
Physiotherapy is a part of the treatment. Within the first year of life, the most utilized methodology in Slovakia and in the Czech Republic is Vojta's reflex locomotion; later, the Bobath concept is usually added. Rehabilitation can be carried out on an outpatient basis, in the form of rehabilitation stays at various facilities (hospitals, nursing homes, children's integration centres, etc.), or as spa treatment, which should be assigned on a regular basis. Consistency and patience are always important; rehabilitation lasts practically a lifetime (Okáľová, 2008). Vojta's method (reflex locomotion) is used with the child from birth. The methodology is carried out by a trained physiotherapist who gradually instructs both parents, and it must be carried out 4 times per day. Exercise is accompanied by the child's crying, which is a response to the position, not to forced pain (Boledovičová et al., 2010). The Bobaths always stressed the importance of the earliest possible diagnosis. After birth, the child's brain pathways and neural cell bodies are preformed but still "empty", without the relevant information that would drive development and the mutual linking of dendrites and neurites. If this information network begins to form with bad programs, correcting them becomes very difficult to impossible. The exercise program works with "artificial" motor activity based on sophisticated neurophysiological expertise and irreplaceable personal experience (Pfeiffer, 2007). A variety of prosthetic devices are used to facilitate the lives of the affected children. In children with strong spasticity, collaboration with an orthopaedic surgeon and an orthotist is important, using a variety of assistive devices: limb braces, spinal braces, and orthopaedic shoes. Surgical treatment of complications of cerebral palsy is also often used (Okáľová, 2008). Waberžinek, Krajíčková et al. state in their publication that "...the part of the care of the child should be family psychotherapy, social assistance, balneotherapy, in older children, the possibility of integration into the system of basic occupational therapy schools and further education" (Waberžinek, Krajíčková et al., 2007, p. 279).
THE OBJECTIVE OF THE WORK
To determine and compare the problems of children with cerebral palsy, to identify deficiencies in the area of nursing care in this diagnosis, and to outline strategies to improve it. To create a theory within the nursing metaparadigm which would capture the scope and impact of nursing care, not only in the sphere of health. We chose these goals because we wanted to uncover new aspects of the provision of nursing care to children affected by the spastic form of cerebral palsy. For the purposes of practice we wanted to learn as much as possible about the problems of this disease and the resulting needs of this group of people, so that nursing care becomes comprehensive and of high quality and leads to the elimination of these problematic areas. At the same time, we try to show that nursing care affects not only health but also the social, legislative, and other spheres. Our next objective was to point out the merits of multidisciplinary cooperation among individual clinical departments in improving the quality of life of the affected children.
STUDY GROUP AND METHODOLOGY
In the study, we focused on three children with the spastic form of cerebral palsy. The first participant, Patrik, age 14 years, presents with quadriparetic disability. Personal history: a child from the 3rd pregnancy (1 previous spontaneous abortion); the pregnancy ended prematurely by cesarean section in the 34th week because of uterine bleeding; after birth the baby spent 1 month in the paediatric ICU; birth weight 2150 grams, birth length 44 centimetres; breast-fed for 1 month. Rehabilitation treatment started from the 2nd month of the child's age. The second participant, Michal, age 15 years, presents with quadriparetic disability. Personal history: a child of the 1st pregnancy, the first of twins; the risk pregnancy was terminated by cesarean section for metrorrhagia in the 29th week of pregnancy; birth weight 1400 grams, birth length 37 centimetres; artificial pulmonary ventilation, asphyxia, and 2 months of hospitalization in the ICU. Rehabilitation treatment started from the 2nd month of the child's age. The third participant, Patrícia, age 18 years, presents with diparetic disability. Personal history: a child of the 2nd physiological pregnancy, spontaneous physiological birth at term; birth weight 600 grams, birth length 57 centimetres; developmental lag from the 12th month; rehabilitation launched in the 12th month of the child's age.
In our research we use the term participant for the affected person and the term informant for the parents (persons from the same environment), as suggested by Gavora (2006). Research questions were put to the legal representatives. We used deliberate, criterion-based selection because the families shared the same life situation - a child with the spastic form of cerebral palsy.
The empirical data found were processed into three comprehensive case reports, which we analyzed with the "grounded theory" method as suggested by Hendl (2008). We created a new theory applicable to children with the spastic form of cerebral palsy. Although qualitative research cannot be generalized, we think that this theory could be used as a tool for properly oriented and holistic nursing care, as it highlights the interconnectedness of nursing care with all the affected areas of life. There are different strategies or techniques to build a theory from data. The most developed is the "grounded theory" technique of Glaser and Strauss. It works with three levels of coding (open, axial, selective), which are arranged hierarchically (Gavora, 2006).
RESULTS AND DISCUSSION
Open coding is the lowest level of working with data (Gavora, 2006). Recordings with the individual informants were transcribed into written form. After repeated reading, delving into the meanings of the words, we divided the text into individual segments according to the different periods and questions of the case reports. We then further analyzed the data and moved on to axial coding.
Axial coding means finding relations between categories, leading to the formulation of categories more abstract than those of open coding (Gavora, 2006). The segments of text from open coding were regrouped. These new segments varied in length depending on the theme, and segments of the same type were placed into the various categories. The names of the categories were created with regard to the possibility of quick orientation in the categories and were identified as more abstract concepts which at the same time aptly described the segments of text they contained. To each category we assigned a code (the initials of the category) - Table 1 (because of the limited number of pages, the article gives only a short example of the axial coding of the information obtained in the case reports). At the same time, we outlined the relationships between the various categories (the propositions). We analysed the data incrementally with each informant until saturation of the theory was reached, i.e., no new semantic categories emerged.
Selective coding is the highest level in the hierarchy of relationships and results in the creation of a central category to which all of the sub-categories are subordinate (Gavora, 2006). The central semantic category became "professional help" (PH), which covers nursing and health care. The sub-categories, to which the other semantic categories are subordinate, were: life reality (LR), behavioural area (BA), cognitive area (CA), technology (T), the soul of man (SM), social interaction (SI), institutions (I), legislation (L), compensation (C), and the vision of the future (VF) - see Scheme 1.
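As a compact restatement, the category hierarchy resulting from the selective coding can be written as a simple data structure; the sketch below (including the hypothetical helper function code_segment) is illustrative only and is not part of the study's methodology or tooling.

# Illustrative sketch only (our own compact restatement, not the authors' tooling):
# the selective-coding hierarchy with the central category and its sub-category codes.
CATEGORIES = {
    "PH": "professional help (central category: nursing and health care)",
    "sub": {
        "LR": "life reality", "BA": "behavioural area", "CA": "cognitive area",
        "T": "technology", "SM": "the soul of man", "SI": "social interaction",
        "I": "institutions", "L": "legislation", "C": "compensation",
        "VF": "the vision of the future",
    },
}

def code_segment(text, code):
    # attach an axial code to a text segment, in the spirit of Table 1 (hypothetical helper)
    label = CATEGORIES["sub"].get(code, CATEGORIES["PH"] if code == "PH" else "unknown")
    return {"code": code, "category": label, "segment": text}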
From the above propositions result the schema of relations and also the hierarchy between the various categories of meaning. Professional assistance (nursing and health care) affects the life reality of the participants (the parents' acceptance of the diagnosis, support during the phase of reconciliation with the child's disability, sufficient interest and empathy on the part of health professionals). The reality of life is influenced by the behavioural area (psychological experience), by techniques (exercises, alternative methods of treatment), and also by the cognitive area (information, knowledge). The behavioural area is influenced by social interaction (relationships, neighbourhood) and by the soul of man (the participant's character traits). The cognitive area is influenced by the institutions (Office of Labour, Social Affairs and Family), from which the informants obtain information on social assistance from the State. The institutions are affected by the legislation. The legislation affects the future (social application), the vision of the future (overdue but important steps), and compensation (compensatory aids granted). The diagram clearly shows the horizontal and vertical relationships between the categories.

A fragment of Table 1 (axial coding of informants' statements): "In practice with him we also use the Bobath concept with the 'guinea fowl'. It is an ordinary bath into which air flows from the 'guinea fowl'; with this special equipment, on the basis of this method, his muscles are relaxed. This procedure I do once a week."
13.1 Patrik (L): "This unit cost me 800 euros and I bought it myself; the State did not contribute any amount towards the purchase of the device." 14 Patrik (T): "The 'guinea fowl' I bought because the spa treatment, as well as at least some other treatments, costs money, and this way I have it even in domestic conditions." (14.1)

At the beginning of the study, we assumed that nursing care affects only the health-related area of these children's lives. Looking deeper into the meaning of the informants' verbal statements, however, we arrived at propositions which interact with all areas of the life of children affected by the spastic form of cerebral palsy. Nursing care affects their overall reality, all areas of their lives. It is important that nurses are aware of this fact, because our study pointed to the importance of the quality of nursing care, which is transformed into all walks of life of these children and thus helps to improve and eliminate problem areas. Nursing care is not confined to interventions within the health care facility but concerns the overall quality of life; a nurse may very well significantly address the needs and existing problems of this group of the population and cooperate with the authorities of the State administration and territorial self-government in the adoption of measures which will improve the quality of life of these people. From the propositions (relations between the categories) we created a new theory, which we present schematically with the use of the nursing metaparadigm. The content of the subject of nursing comprises: the person, health, environment, nursing care, and the relationships between them. In our research, we identified the metaparadigm as follows: Person - a child with cerebral palsy and his or her reality. Health - behavioural and cognitive areas - mental health and knowledge leading to improved health; techniques - exercises and approaches improving the state of health; compensation - devices and aids replacing, to a certain degree, the impaired function and helping to achieve personal fulfillment; the future and the vision of the future - important measures leading to mental health; social interaction - the relationships between people influencing mental health; the soul of man - character traits affecting psychological health. Environment - legislation - laws affecting the life of the affected person, but also the practice of nursing in the Slovak Republic; institutions - the application of the laws in practice in the Slovak Republic. Nursing care - professional help affecting biological, psychological, social, and spiritual health.
In Diagram 2 we clearly show the relations between the elements of the metaparadigm in the newly formed nursing theory.
Diagram 2 New theory with propositions in nursing care of children with spastic form of cerebral palsy with the use of metaparadigm nursing
In the diagram we have identified the interconnected areas and their mutual interaction. In the new theory we came to the conclusion that professional assistance, namely nursing care, affects all of them, which is why this semantic category is placed in the interior of the circle. Professional help is transformed into all walks of life of the child and his or her family (life reality). The behavioural area (psychological experience), techniques (exercises, methods of treatment), and the cognitive area (knowledge, information about the disease and treatment options) are influenced by the quality of nursing care and at the same time affect the reality (the ordinary, everyday life) of a child with the spastic form of cerebral palsy. At the same time, we found that these three areas influence each other. Social interaction, including interaction with other members of the multidisciplinary team, and supportive relationships affect the behavioural area of the child. The soul of man (the nature of the child) affects the psyche and psychological experience of the child and his or her family (behavioural area). The cognitive area is influenced by the institutions (Office of Labour, Social Affairs and Family), which give information on social assistance and possible compensatory measures. The institutions are affected by professional help, which points out the needs of a disabled child and shows the real need to support and maintain self-care to the utmost extent. The legislation affects professional assistance (the scope of nursing practice, the use of assistive devices), and professional assistance in turn affects the legislation by showing the problems and needs of practice. The diagram suggests that the legislation subsequently affects the future, the social application of children with the spastic form of cerebral palsy. At the same time, it affects the compensations, i.e., the aids to which these children are entitled according to the legislation of the Slovak Republic. The legislation, even if indirectly, affects the visions of the children and their families (dreams, desires, overdue measures).
As Silverman argues in his publication, the comparative method is a fundamental scientific method: even if you cannot find a comparable case, try to find a way to split your data into different sets and compare those (Silverman, 2005). Since we could not find the same or similar research on the formation of a theory for cerebral palsy, we did not have the opportunity to compare our new theory directly with other research studies. In the discussion we therefore focus on comparing the results of our research with those of other authors; this is a comparison of case results, not of results concerning the formation of a new theory. Extensive research conducted by Palisano, Kang, Chiarello, Orlin, Oeffinger and Maggs (2009) in the US cities of Chicago (Illinois), Erie (Pennsylvania), Lexington (Kentucky), Sacramento (California), Philadelphia, Springfield (Massachusetts) and Charlottesville, covering 500 children affected by cerebral palsy, revealed that children confined to a wheelchair spend most of their time at the computer, on the Internet or playing computer and video games. They have problems with mobility and do not have many social interactions, because they spend most of their time at home. Children with cerebral palsy who are mobile have more social interactions with peers, because they can run, jump and play with them. Parents of children confined to a wheelchair identify environmental barriers as problematic, as well as the size of rooms for these children and public transport that is not suitably adapted for them. The authors of the study, on the basis of the respondents' statements, recommend eliminating barriers to mobility and seeking appropriate means of transportation for these people. Our research confirms these results: it is important to provide a barrier-free environment and appropriate means of transport for people living with cerebral palsy and so to encourage their social contacts.
Novosad states in his publication that families with these children are socially unrecognized. Parents have no relief from care, the possibilities for their career growth and professional development may be limited, and the family is economically weakened. A common problem of disabled children and their families is isolation or loneliness, together with high demands on the personality, mental stability and physical stamina of both parents (Novosad, 2009). These findings correspond with the results of our research, because all the informants expressed the desire for a club or centre for children with cerebral palsy where they could exchange experiences and organize meetings and a variety of trips. If such a centre also provided education for the affected children and at the same time offered employment opportunities for their parents, it would also eliminate another phenomenon, namely the parents' feeling of being excluded from relationships with other people of working age; such a facility would allow them to earn some extra money and thus improve their financial situation.
CONCLUSION
We have tried to create a new theory, using the nursing metaparadigm, suitable for practical use in the care of children with the spastic form of cerebral palsy. We wanted it to point out the connections of nursing care with other areas, such as the social, physical, psychological and legal. We must remember that the work of nurses is very responsible and demanding. Our theory shows that in order to provide adequate, high-quality care, a nurse must be aware that her approach affects the patient not only in the sphere of health, and that her experience and knowledge can improve the conditions of the patient's life in a holistic way. Through their activities, nurses must also make the authorities of state administration and territorial self-government aware of the deficit areas in the lives of the affected persons and help to remove them. Equally important is the need for multidisciplinary cooperation in solving the problems of disabled children.
The paper presents partial results of a research study from the author's rigorosum thesis.
Table 1
Statements and codes of the case study "Patrik": "When he was 6 months old and still was not stable in a sitting position, he would fall. The older he got, the more apparent it was that he was lagging behind in development. He either held his hand clenched or had stiff legs. The doctors told me the diagnosis at birth. Rehabilitation treatment was started as soon as he was released from the incubator. The doctor recommended the Vojta method to me. We started exercising straight away. Despite the doctor's prognosis... 24 hours a day, at the expense of the older son and the housework. Everything, the whole day; my sick child needs an extended regime. I recited poems to him, sang songs, the TV or radio was always on, so that he would have stimuli to ..."
|
2017-10-24T12:27:22.583Z
|
2014-01-01T00:00:00.000
|
{
"year": 2014,
"sha1": "e2abeb3dcad959aeb1f6a2822b954be7f5a942db",
"oa_license": "CCBY",
"oa_url": "http://profeseonline.upol.cz/doi/10.5507/pol.2014.005.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "eb7c40cb539dee9196a36527a6b74bb67cc00d3f",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
}
|
239749698
|
pes2o/s2orc
|
v3-fos-license
|
Vāk sūktaṃ as the root of the āgamic traditions
Among the philosophical hymns of the Ṛgveda, the Vāk Sūkta is a very popular one. For the Āgamic traditions, moreover, the Vāk Sūktaṃ plays a major role in shaping their very root theories. From the Bhairava Āgama to comparatively later creations such as the Devī Atharvaśirṣa, the significance of the Vāk Sūkta can be seen everywhere. This paper focuses on how this one collection of hymns inspired and gave rise to a vast tradition, including a wide range of independent philosophical thought.
Introduction
The various traditions that we experience and follow today derive their root ideas from the Vedas. On the basis of whether they acknowledge adaptations from the Vedas in their broad philosophical grounds, Indian schools of thought are divided into two major divisions, viz. Nāstikya (those which did not acknowledge the teachings of the Vedas) and Āstikya (those which acknowledged the teachings of the Vedas) [1]. Analysed chronologically, the rise of the Āgamic traditions coincides with the expansion of the Buddhist doctrines across the continent. This contemporaneity of events sometimes leads to the conclusion that the Āgamic traditions are a mere by-product of the Buddhist doctrines [2]. The Nāstikya character of the Buddhist doctrines is unquestionable, and this firm root of Buddhism inevitably affects how the Āgamic teachings are judged. Though the connection of the Buddhists with certain Āgamic practices cannot be denied, most of the Āgamic schools, excluding some specific traditions of the Pāñcarātra school of thought, face the accusation of having Buddhist roots. The reason for the exclusion of the Pāñcarātra Āgamas can be seen as the impact of the Puranic age: the deities presented in the Puranas mostly appear in the Pāñcarātra Āgamas without any distortion [3]. Other schools of the Āgamic traditions, on the other hand, present the deities in a manner different from the Puranic one; for this reason, no matter how close these depictions are to the root Vedic Samhita depictions, many people detected distortion in them. To prove that the root of the Āgamic tradition is not Buddhist, it is necessary to establish the derivation of the Āgamic philosophical conclusions from the Vedas alone; this is how the Nāstikya link of the Āgamas can be disproved. In the following sections we discuss the philosophical hymns of the Ṛgveda and the position of the Vāk Sūktaṃ among them, the root of the Āgamic schools of thought and the influence of the Vāk Sūktaṃ on them, the inclusion of the Vāk Sūkta in the Devī Māhātmya, and the conclusion.
Vāk Sūktaṃ as a Philosophical hymn
The Sūktas that we find in the Ṛgveda, on the basis of their objectives can be divided in five major divisions, such as i. Devastutiparaka Sūkta (praise to the Gods) ii. Dārśanika Sūkta (based on the Philosophical ideas) iii. Laukika Sūkta (teachings to the society) iv. Saṃvada Sūkta (dialogue between two characters) v. Ākhyāna Sūkta (a saga) etc [4] .
Among these divisions, the Dārśanika Sūktas represent a vast area of thought and explanation. These Sūktas can be seen as a way of answering some of the most primary queries arising in the human psyche, such as the origin and nature of the universe. We find most of these pieces in the 10th Mandala of the Ṛgveda, and the Vāk Sūktaṃ is no exception. But as a Dārśanika Sūkta this specific one is quite unique in its style of representation. Whereas all the other Dārśanika Sūktas are presented as a series of statements made by someone who, as a mere spectator, experiences the celestial occurrences, the Vāk Sūkta explains the action of the universe in the first person [5]. This uniqueness is the key that allowed the Vāk Sūkta to act as a ground giving birth to a number of independent traditions. The idea of Self as represented in the Vāk Sūktaṃ can be seen as an elaborated script of the Vedantic statement that this self is supreme (Brahman) ("अयमात्मा ब्रह्म", मा. उ. १.२) [6]. The Upanishads have explained the theory of Brahman extensively. In the Kathopanishad it is specifically said that those who experience that Brahman within the self attain prosperity beyond time ("तमात्मस्थं ये अनु पश्यन्र ि धीरााः ते षां सु खं शाश्वतं ने तरे षाम् ।। क. उ. २.२.१२") [7]. A similar idea can be found in most of the Upanishads. After the rise of the Buddhist doctrines, the contribution of the Vedantic philosophers to maintaining and protecting the Vedic culture was greater than that of the other five schools of the Āstikya Darśana. The Vāk Sūkta was the only Dārśanika Sūkta considered an apt scriptural proof (Śabda Pramāṇ) for establishing the statements and ideas of the Vedanta [1]. Moreover, the godless philosophy among the six Āstikya Darśanas, the Sāṃkhya, also draws its Śabda Pramāṇ from this Sūkta alone, for its unique representation of the idea of self [8]. The narrative of the Vāk Sūkta is not only unique but also bold. Where the other Dārśanika Sūktas merely describe a series of events, the Vāk Sūkta has its own narrative, full of bold, determined, flawless and undoubting statements [5]. The position of the Vāk Sūkta among the other Dārśanika Sūktas of the Ṛgveda is thus well established, and it is considered one of its kind for its unique narrative.
Vāk Sūkta shaping the Āgamic Philosophy
The traditional Āgamic philosophy is found in the ideas of the Kāśmir Śaiva Darśana. Though a number of Āgamic schools of thought propose various perspectives to explain the origin and nature of the universe, the Kāśmir Śaiva Darśana serves as a compilation of all the root ideas [9]. There we find the concept of the Āmnāya Upāsana, or the Directive Path oriented Practices, and the introduction of the Paśchim Āmnāya, or the West Directive Path. Two base scriptures formed the foundation of this sect (if we can call it a sect at all) and eventually provided the base for all the later post-Vedic schools of thought, such as the Śaiva, Vaiṣṇava and Kaumāra; those two scriptures were the Manthāna Bhairava Tantra and the Brihat Baḍabānala Tantra. Later, Acharya Adi Shankara Bhagavatpada mentioned the Paścima Āmnāya as the greatest of all ("श्रे ष्ठाः अताः पन्रिमाम्नायस्तत्रत्याः शाम्भवाः अन्रसत याः", यन्रतदण्डै श्वर्ययय न्रवधानम् ४.२७). The Paścima Āmnāya, with the introduction of Goddess Mālinī and Goddess Kubjikā, established the breeding ground for practical philosophy in the later Vedic era [10]. What the Pūrba Mimāṃsā of Jaimini had earlier provided for the applied Vedas, the Paścima Āmnāya did for the post-Vedic cultural background of the continent. The need to introduce the Paścima Āmnāya was inevitable, as with time the applied Vedas were decaying with the loss of efficient and devoted practitioners. Both the Manthāna Bhairava Tantra and the Brihat Baḍabānala Tantra modified the root Vedic teachings with respect to the changing times, and a strong philosophical framework was established which we today call the Bhairava Āgama [9]. The Bhairava Āgama further manifested as the Śaiva Krama, the Pāñcarātra corpus and the Śākta Krama, with the Kālī Kula and the Śrī Kula being two prominent distinct practice grounds. We shall now present the concluding principles of the Bhairava Āgama, which are nothing but reflections of the Vāk Sūkta. i. The initiation of the clan of Gods with Rudra: the Sūkta starts with the name of the Rudras, followed by other Gods. The Āgamic traditions indicate the initiation of the clan of the Gods from Rudra, more specifically the form named Sadyojāta Mahākāla (not to be confused with the west face of the five-headed Śiva) [10]. ii. The fire sacrifice within the self: with the help of this, the offering of spirits in Vedic fire sacrifices was smoothly explained as the nectar from the brain cells produced by the strict practice of Yoga [11]. The second mantra of the Vāk Sūkta indicates the origin of the fire sacrifice within the self, giving rise to such Āgamic doctrines. iii. The science of consumption: the balance between penance and the consumption of matter is maintained in a very elegant way in the Āgamic traditions, which others have failed to maintain (bhogasch mokshasch). The rejection of the unnecessary abandonment of consumption is established with the conclusion derived in the third mantra of the Vāk Sūkta, where the principles of the consumer and the consumed are equated. iv. Multiple possibilities on the path of realisation: when the other doctrines taught salvation or merely preached the union of the Self and Brahman, the Āgamas explored all the possibilities on the path. While teaching the path of realisation, the Āgamas put equal importance on knowledge and action, on the basis of the idea generated in the fifth mantra of the Vāk Sūkta; the mantra mentions the creation of Lord Brahmā (action), the knowledge of Lord Bṛhaspati (knowledge) and the union of the Self with Brahman.
v. Birth from the crown: the Āgamas teach the origin of the universe from Goddess Mālinī and Ādinātha residing on the crown chakra, the intellectual consciousness of humans, influenced by the seventh mantra of the Vāk Sūkta [10]. vi. The supremacy of the Divine Feminine: the complete Vāk Sūkta is composed in the feminine form of the self, which the Āgamas represent as the ultimate supremacy of the Divine Feminine, Goddess Kubjikā. Though the doctrines of Goddess Kubjikā are vast in themselves, for now the original correspondence can be drawn from the principles established in the Vāk Sūkta.
Moreover, while studying the root corpus of the Āgamic philosophy, in every single idea proposed we find what are in effect commentaries on the Vāk Sūkta alone, such as the four stages of speech, the manifestation of God as the Tree, and the subsections of the traditions like the Kaumāra Krama, the Śaiva Krama and the Pāñcarātra Krama.
The Vāk Sūkta and Devī Māhātmya
A highly regarded excerpt of the Mārkandeya Purāṇa, a later-Vedic Puranic text popularly known as the Devī Māhātmya, included the Vāk Sūkta as it is while stating the supremacy of the divine feminine. The similarity of the principles of the Devī Māhātmya to the Āgamic doctrines is remarkable, which often allows Āgama practitioners to include it, despite its Puranic roots, in the Āgamic practices and as a Śabda Pramāṇ to prove some of its principles. The chant-like practices of the excerpt are prescribed primarily in various Āgamic scriptures. From that perspective it seems that the inclusion of the Vāk Sūkta in the script is an Āgamic teaching. But even if we completely neglect the practical dialect and application of the Devī Māhātmya, in the root text we can see the inclusion of the Vāk Sūkta, in a fraction but in a very prominent manner. The Sūkta itself is mentioned in the script's 13th chapter as a prime tool towards self-realisation, in the context of the prescription of the ritualistic approaches of the previous chapters. Today, when we study the Devī Māhātmya, we can see the complete script established on the firm grounds of the Vāk Sūkta. In later periods, every text that discussed the Devī Māhātmya treated the Vāk Sūkta with prime attention [12].
|
2021-10-26T00:08:44.091Z
|
2021-09-01T00:00:00.000
|
{
"year": 2021,
"sha1": "7c50022cc5f3f1dfdf0493691e1dfd55b3bad66f",
"oa_license": null,
"oa_url": "https://www.anantaajournal.com/archives/2021/vol7issue5/PartA/7-4-32-186.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a57741d31ea5a1465da4cb4601b63df58255d51b",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"History"
]
}
|
4869731
|
pes2o/s2orc
|
v3-fos-license
|
Phonon Assisted Multimagnon Optical Absorption and Long Lived Two-Magnon States in Undoped Lamellar Copper Oxides
We calculate the effective charge for multimagnon infrared (IR) absorption assisted by phonons in the parent insulating compounds of cuprate superconductors and the spectra for two-magnon absorption using interacting spin-wave theory. Recent measured bands in the mid IR [Perkins et al. Phys. Rev. Lett. {\bf 71} 1621 (1993)] are interpreted as involving one phonon plus a two-magnon virtual bound state, and one phonon plus higher multimagnon absorption processes. The virtual bound state consists of a narrow resonance occurring when the magnon pair has total momentum close to $(\pi,0)$.
The discovery of high-Tc superconductivity in doped cuprate materials [1] has attracted a lot of attention to the parent quasi-two-dimensional spin-1/2 quantum antiferromagnets. So far, complementary information on the antiferromagnetism has come from different probes such as neutron scattering and Raman light scattering [2]. In this work we reinterpret recent infrared absorption measurements [3] (Fig. 1) in terms of phonon-assisted multimagnon absorption. The narrow primary peak in Fig. 1 is explained in terms of a long-lived virtual bound state of two magnons, here referred to as a bimagnon. These new states are narrow resonances occurring when the magnon pair has total momentum close to (π, 0), and they have a reasonably well defined energy and momentum in a substantial portion of the Brillouin zone.
In principle, IR absorption by magnons is not allowed in the tetragonal structure of the cuprate materials. This is because, in a typical two-magnon excitation, the presence of a center of inversion inhibits any asymmetric displacement of charge, and hence the associated dipole moment vanishes. The situation changes, however, if phonons are taken into account. In a process in which one phonon and two magnons are absorbed, the symmetry of the lattice is effectively lowered, and the process is allowed. A similar theory was put forward by Mizuno and Koide [4] to explain magnetically related IR absorption bands found many years ago by Newman and Chrenko in NiO [5]. To the best of our knowledge, we present the first explicit calculation of the coupling constant for phonon-assisted absorption of light by multimagnon excitations and of the line shape for two-magnon absorption. We consider a Cu-O layer, but the generalization of our results to other magnetically ordered insulators is trivial.
Consider a three-band Peierls-Hubbard model [6] in the presence of an electric field (E), in which for simplicity the Cu atoms are kept fixed and the O ions are allowed to move with displacements u_{i+δ/2}. Here i labels Cu sites and δ = x̂, ŷ, so that i + δ/2 labels O sites. Holes … When an O ion moves in the direction of a Cu with displacement |u|, the corresponding on-site energy of the Cu changes to first order by −β|u| and the corresponding Cu-O hopping by α|u|. Opposite signs apply when the O moves in the opposite direction [6].
To calculate the coupling constants of light with one-phonon-multimagnon processes, we first obtain a low-energy Hamiltonian, Eq. (1), as a perturbation expansion valid when t ≪ ∆, ǫ, U_d and when the phonon field and the electric field vary slowly with respect to typical gap frequencies. Here B_{i+δ/2} = S_i · S_{i+δ}, with S_i the spin operators, H_ph is the phonon Hamiltonian containing the spring constants and the masses of the O ions (M), and P_ph is the phonon dipole moment. The first term in Eq. (1) contains spin-dependent fourth-order corrections in t, whereas fourth-, second- and zeroth-order spin-independent processes are collected in the last two terms. As usual [7], we calculate the superexchange J as the energy difference between the singlet and triplet states of the spins located at Cu_L and Cu_R in Fig. 2. We only need to consider the three configurations (A, B, C) of the L-R bond and E. Next we Taylor expand J to first order in E and {u_{i+δ/2}} [8], obtaining Eq. (2). Here λ = 1 for configuration A and λ = 0 for configurations B and C. In each configuration the displacement of the central O and the electric field are parallel, i.e. E = E ê, u_0 = u_0 ê. The direction of ê is the same as that of the arrows at the bottom of Fig. 2. u_L and u_R are only relevant in configuration A: u_L = u_L1 + u_L2 − u_L3 and u_R = −u_R1 + u_R2 + u_R3. The numbering and the directions of the displacements are shown in Fig. 2. The first term in Eq. (2) is the superexchange in the absence of the electric and phonon fields. The remaining quantities are a magnon-phonon coupling constant and the effective charges associated with one-phonon and multimagnon processes; a_pd is the Cu-O distance. Within a point-charge estimate, the parameter β a_pd ≈ 2 U_pd. The dipole moment is obtained as P = −∂H/∂E, using Eq. (2) in the relevant configurations. We obtain, up to fourth order in t, an expression whose first term describes conventional phonon absorption. We define δB_{i+δ/2} = B_{i+δ/2} − ⟨B_{i+δ/2}⟩ and its Fourier transform δB^δ_p, as well as the Fourier transform u^δ_p of u_{i+δ/2}. After Fourier transforming, the dipole moment for one-phonon and multimagnon processes for an in-plane field in the x direction is obtained with λ = 1. The case of an electric field perpendicular to the plane is obtained by putting λ = 0 and replacing u^δ_x by u^δ_z. N is the number of unit cells. The first term is isotropic, being present in any configuration. Looking at the cluster in Fig. 2, it can be understood as a spin-dependent correction to the charge on O_0. Its physical origin is that fourth-order corrections to the charges involve spin-dependent processes: for example, if the spins on Cu_L and Cu_R are parallel they cannot both transfer to O_0, whereas if they are antiparallel they can. Fig. 3(a) illustrates a typical process efficient in configuration B. The second term is anisotropic, being present for an in-plane field only. It originates from a "charged phonon"-like effect [10]. Consider the configuration in which the electric field and the displacement of O_0 are both parallel to the Cu_L-Cu_R bond (A in Fig. 2), and a phonon in which the O's around Cu_L breathe in and the O's around Cu_R breathe out. (We do not need to consider zero-momentum phonons to couple to light, since it is the total momentum, magnons plus phonons, which has to add to zero.) The Madelung potential at Cu_R decreases and at Cu_L it increases, creating a displacement of charge from left to right that contributes to the dipole moment. Fig. 3(b) illustrates a typical process.
But again this effect is spin dependent since if the two spins are parallel they cannot both transfer to Cu R .
The real part of the optical conductivity due to these processes is given by the dipole-moment-dipole-moment correlation function. Assuming, for simplicity, that only the u^δ_{δ'p}'s with the same δ and δ' mix, and decoupling the phonon system from the magnetic system, which is valid to lowest order in the magnon-phonon coupling, we obtain the conductivity (ħ = 1). Here ω_∥p is the frequency of the u^x_{xp} and u^y_{yp} phonons and ω_⊥p is the frequency of the u^y_{xp} and u^x_{yp} phonons. ω_∥p can be associated with the frequency of the Cu-O stretching-mode phonons and ω_⊥p with that of the Cu-O bending-mode phonons. The superscript on the Green functions indicates that the poles should be shifted by that amount. Assuming weak absorption, the absorption coefficient is obtained as α = 4π σ / (c √ε1), with ε1 the real part of the dielectric constant [3](b).
To compute the magnon-magnon Green functions we use interacting spin-wave theory [11] with a Holstein-Primakoff transformation. On the A (B) sublattice we express the spin operators in terms of boson operators b_i (and the corresponding Hermitian conjugates); S = 1/2 in our case. With this definition the Hamiltonian is invariant under the exchange of the sublattices, so we do not need to distinguish between them, and accordingly we work in the non-magnetic Brillouin zone. The Heisenberg Hamiltonian in this representation is expanded to zeroth order in 1/S and normal ordered with respect to the non-interacting spin-wave ground state. The non-interacting part is diagonalized by a Bogoliubov transformation, with γ_k = (2/z) Σ_δ cos(k·δ), ω_k = √(1 − γ_k²), and z the coordination number (z = 4 in our case). We define the corresponding two-magnon Green function g. Next we apply a standard RPA-like decoupling to the equation of motion of g, which can then be solved in terms of integrals over the Brillouin zone of g weighted with products of u's and v's [11]. Finally, ⟨⟨δB^x_{−p}; δB^x_p⟩⟩ can be written as a sum of such quantities with the proper weighting factors [9]. The dash-dotted line in Fig. 1 gives the contribution to the line shape from magnon pairs with p = (π, 0), without the approximations that follow. A very sharp resonance occurs there, indicating that a virtual bound state (bimagnon) is formed.
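As an illustrative aside, the non-interacting spin-wave quantities quoted above are simple to evaluate numerically. The short Python sketch below is our own illustration, not the interacting calculation used for Fig. 1: it only computes γ_k and ω_k = √(1 − γ_k²) for the square lattice along a path through the Brillouin zone.

```python
import numpy as np

# Non-interacting spin-wave factors for the square-lattice antiferromagnet:
# gamma_k = (2/z) * sum_delta cos(k . delta) with delta = x_hat, y_hat and z = 4,
# omega_k = sqrt(1 - gamma_k**2), dimensionless (equal to 1 at the zone boundary (pi, 0)).
def gamma_k(kx, ky):
    return 0.5 * (np.cos(kx) + np.cos(ky))

def omega_k(kx, ky):
    return np.sqrt(1.0 - gamma_k(kx, ky) ** 2)

# Sample the path (0,0) -> (pi,0) -> (pi,pi) relevant to the bimagnon discussion.
path = [(0.0, 0.0), (np.pi / 2, 0.0), (np.pi, 0.0), (np.pi, np.pi / 2), (np.pi, np.pi)]
for kx, ky in path:
    print(f"k = ({kx:4.2f}, {ky:4.2f})   gamma_k = {gamma_k(kx, ky):+.3f}   omega_k = {omega_k(kx, ky):.3f}")
```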
Analytic expressions for the Green functions can be obtained by neglecting the contributions involving the v's. The resulting error is small because at short wavelengths v is small to start with, and at long wavelengths (low energies) the spectral weight is small [12]. This approximation does not appreciably shift the peak at p = (π, 0) and only slightly decreases its intensity [9]. The Green function takes the form ⟨⟨δB^x_{−p}; δB^x_p⟩⟩ = (S² N/π) G_xx, Eq. (8). Im G_xx starts to differ from zero at ω = E_m (∼ 2J_0) and rises very slowly as the frequency is increased, so that it is still very small at the bimagnon energy, resulting in a long lifetime, i.e. a narrow peak. In the inset of Fig. 1 we show the imaginary part of the Green function from Eq. (8) for different values of p. This shows how the bimagnon disperses around p = (π, 0): it moves upwards on going towards (0, 0) and downwards on going towards (π, π). This indicates that (π, 0) is a saddle point and hence should give a Van Hove singularity when integrated over p [Eq. (7)]. The dashed curve in Fig. 1 is the theoretical line shape in the same approximation, assuming that the anisotropic processes dominate (q_I = 0) and treating the phonons as Einstein-like with ω = 0.08 eV [13]. It gives a surprisingly good fit to the primary peak. Such a good fit, especially for the width, was not possible within RPA in the Raman case [14]. This can be partially reconciled by the fact that a structure that is artificially broadened at p = (π, 0) (but still much narrower than the integrated line shape) does not change the final result significantly.
Because of the Van Hove singularity, the position of the p = (π, 0) bimagnon peak coincides with the peak in the line shape and is given by 1.179 E_m + ω = 2.731 J_0 + ω, where ω is the phonon energy. This provides an alternative way to estimate J_0. We find J_0 = 0.121 eV, which is in good agreement with other estimates [2,14].
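As a back-of-the-envelope check, the quoted relation can be inverted for J_0; the phonon energy and J_0 below are the values given in the text, while the implied peak position is our own arithmetic rather than an independently quoted number.

```python
# E_peak = 2.731 * J0 + omega_ph  =>  J0 = (E_peak - omega_ph) / 2.731
omega_ph = 0.08            # eV, Einstein phonon energy used in the fit
J0 = 0.121                 # eV, superexchange quoted in the text
E_peak = 2.731 * J0 + omega_ph
print(f"implied primary-peak position: {E_peak:.3f} eV")          # about 0.41 eV
print(f"J0 recovered from that peak:   {(E_peak - omega_ph) / 2.731:.3f} eV")
```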
The ω phonon is very anomalous in orthorhombic La2CuO4, since it splits due to anharmonicities [13], its partner being at ω′ = 0.06 eV at room temperature. Presumably this produces the shoulder observed at lower energies in the experiments (Fig. 1), although the distance to the primary peak is larger than expected, which may be due to other phonons being involved. This feature was assigned to direct two-magnon absorption [3], made weakly allowed by the lower lattice symmetry according to the results of Ref. [15]. However, we find that the dipole moment for this process is directed along the direction bisecting the angle made by the Cu-O-Cu bond [16] and hence can only contribute for a field perpendicular to the plane.
The oscillator strength in Fig. 1 was adjusted to fit the experiments. A rough estimate of the strength using q_A = 0.1e and ε1 = 5 gives a value ∼4 times smaller than observed, which is quite reasonable given the uncertainties involved.
We interpret the side bands at higher energy as being due to higher multimagnon processes, neglected in the above approximations. In order to check both the validity of our approximations and the origin of the side bands, we have also computed the absorption using exact diagonalization of a small cluster [9]. The exact result confirms the Green-function calculation for the two-magnon peak and also shows side bands corresponding to higher multimagnon processes, which we associate with the higher-energy side bands observed in the experiments. The relative weight of the side bands seems to be smaller than in the experiments, presumably because of finite-size effects or the presence of other processes in the magnetic Hamiltonian, as was suggested in the Raman case [17].
For an electric field polarized perpendicular to the plane, only phonons perpendicular to the Cu-O bond contribute. We estimate the absorption to be roughly a factor of 8 smaller than the in-plane contribution; on the experimental side the anisotropy seems to be larger. One should be aware that the cuprates are in a regime where covalency is not small with respect to typical gap energies, and hence a perturbation expansion in t is helpful to identify the important processes and discuss trends, but quantitative estimates are to be taken with care [7]. As higher orders in t are included, we expect the anisotropic contributions to grow with respect to the isotropic ones. For example, the charged-phonon effect of Fig. 3(b) can become very efficient if the second hole forms a Zhang-Rice singlet with the hole already present on Cu_L, since that process involves a much smaller gap. Note also that the larger the order in t, the longer the range of the processes that contribute to the anisotropic charges, whereas only local processes contribute to the isotropic charge. Longer-range processes have in general very large form factors. These effects should give a stronger anisotropy, in accordance with the experiments. This also partially justifies our simplifying assumption of taking q_I = 0.
To conclude, we have computed effective coupling constants of light with multimagnon excitations assisted by phonons and determined the line shape of the primary peak. Our results explain recently measured absorption bands in the mid IR of parent cuprate superconductors and demonstrate the existence of very sharp virtual bound states of magnons.
We acknowledge the authors of Ref.
[3] for sending us their work prior to publication and for enlightening discussions, and M. Meinders for giving us the program and helping us with the exact diagonalization calculations. This investigation was supported by the Netherlands Foundation for Fundamental Research on Matter (FOM) with financial support from the Netherlands Organization for the Advancement of Pure Research (NWO) and Stichting Nationale Computer Faciliteiten (NCF). Computations were performed at SARA (Amsterdam). J.L. is supported by a postdoctoral fellowship granted by the Commission of the European Communities.
FIG. 2. Thick arrows represent the spins, thin short arrows represent lattice displacements, and thin long arrows represent the direction of the electric field. We have represented u_0 in configuration A; in general its direction is equal to the direction of the electric field.
FIG. 3. Typical processes contributing to the isotropic (a) and anisotropic (b) effective charges. The meaning of the symbols is the same as in Fig. 2.
|
2018-04-03T03:53:48.444Z
|
1995-01-19T00:00:00.000
|
{
"year": 1995,
"sha1": "cf9a1d8d526b97c9dd87d162af8d01a15ad321cb",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/supr-con/9501001",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "cf9a1d8d526b97c9dd87d162af8d01a15ad321cb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Mathematics"
]
}
|
225064435
|
pes2o/s2orc
|
v3-fos-license
|
Attempts to remodel the pathways of gemcitabine metabolism: Recent approaches to overcoming tumours with acquired chemoresistance
Gemcitabine is a cytidine analogue frequently used in the treatment of various cancers. However, the development of chemoresistance limits its effectiveness. Gemcitabine resistance is regulated by various factors, including aberrant genetic and epigenetic controls, metabolism of gemcitabine, the microenvironment, epithelial-to-mesenchymal transition, and acquisition of cancer stem cell properties. In many situations, results using cell lines offer valuable lessons leading to the first steps of important findings. In this review, we mainly discuss the factors involved in gemcitabine metabolism in association with chemoresistance, including nucleoside transporters, deoxycytidine kinase, cytidine deaminase, and ATP-binding cassette transporters, and outline new perspectives for enhancing the efficacy of gemcitabine to overcome acquired chemoresistance.
INTRODUCTION
Gemcitabine [2',2'-difluoro-2'-deoxycytidine (dFdC)] was first described by Eli Lilly and Company in 1986 [1] and is the most important deoxycytidine nucleoside analogue, carrying fluorine substituents at the 2' position of the pentose ring [Figure 1] [2]. Its metabolic pathway is illustrated in Figure 2. The molecule is hydrophilic and is transported into cells by nucleoside transporters (hNTs), including both sodium-dependent concentrative nucleoside transporters (hCNTs) and sodium-independent equilibrative nucleoside transporters (hENTs). hCNTs mediate unidirectional transport of nucleosides. hENT1 can take up gemcitabine with high affinity but low capacity, whereas hENT2 can take up gemcitabine with low affinity but high capacity. The intracellular uptake of gemcitabine is mainly mediated by hENT1 in cancer cells; in hepatocytes, uptake is mainly mediated by the low-affinity hENT2 [3,4].
Gemcitabine is a prodrug which requires intracellular phosphorylation for activation. Inside the cell, gemcitabine is phosphorylated to its monophosphate form (dFdCMP) by deoxycytidine kinase (DCK) and is then further phosphorylated to its diphosphate (dFdCDP) and triphosphate (dFdCTP) forms, as shown in Figure 2. [Figure 2 caption: dFdC is transported into the cell through nucleoside transporters (hNTs), then stepwise phosphorylated by deoxycytidine kinase (DCK), nucleoside monophosphate kinase (NMPK), and nucleoside diphosphate kinase (NDPK) to form the active triphosphate metabolite (dFdCTP), which inhibits DNA and RNA synthesis. The diphosphate metabolite (dFdCDP) inhibits ribonucleotide reductase (RR), an enzyme that catalyses the conversion of ribonucleotide (CDP) to deoxyribonucleotide (dCDP). The majority of dFdC is inactivated mainly by cytidine deaminase (CDA)-mediated conversion to difluorodeoxyuridine (dFdU) and then excreted through ABC transporters. Deamination of dFdCMP to dFdUMP by deoxycytidylate deaminase (dCMP deaminase) and subsequent dephosphorylation also form dFdU; this is another inactivation pathway of dFdC. dFdUMP inhibits thymidylate synthase (TS), resulting in depletion of the dTMP pool. dFdCTP inhibits dCMP deaminase.] The resulting dFdCTP is incorporated into DNA, and DNA strand synthesis is terminated after incorporation of one further nucleotide, hiding dFdCTP from DNA repair enzymes [5]. dFdCTP is also incorporated into RNA [6,7], and sensitivity to gemcitabine is related to differences in RNA incorporation [8]; RNA incorporation of gemcitabine may therefore play an important role in its activity. dFdCDP is an effective inhibitor of ribonucleoside-diphosphate reductase, the enzyme that transforms CDP into dCDP; this results in a decrease of the dCTP pool. Deamination of dFdCMP by dCMP deaminase forms dFdUMP. Thymidylate synthase (TS), which plays a key role in the synthesis of thymidine monophosphate (TMP) [9], is another target of gemcitabine, via dFdUMP: dFdUMP resembles the natural substrate of TS, 2'-deoxyuridine monophosphate (dUMP), and inhibits TS, resulting in depletion of the TMP pool.
Evidence for the usefulness of gemcitabine as a potent anti-tumour agent has been reported; it is used either alone or in combination with other agents for patients with pancreatic ductal adenocarcinoma (PDAC) [10] and several other human cancers, such as non-small cell lung cancer, breast cancer, ovarian cancer, and bladder cancer [11] (approved by the FDA). However, the acquisition of chemoresistance against gemcitabine significantly limits its effectiveness. Chemoresistance can be divided into two categories: intrinsic, and acquired in the course of drug treatment [12]. The activities of drug transporters and metabolizing enzymes are considered to be strongly involved in chemoresistance to gemcitabine. Epithelial-to-mesenchymal transition (EMT) is not only related to a phenotypic change in the tumour cells; it also contributes to gemcitabine resistance [13]. Based on gene expression profiles of pancreatic cancer cell lines, gemcitabine-resistant cells display many features consistent with EMT [14]. Exosomes have been shown to be involved in gemcitabine resistance by delivering miRNAs: exosomal miR-106b from cancer-associated fibroblasts [15] and miR-210 from cancer stem cells [16] both promote gemcitabine resistance. However, these areas are beyond the focus of this review, and we will instead discuss the challenges of remodelling the gemcitabine-metabolizing pathway to overcome acquired chemoresistance against gemcitabine.
IMPROVEMENT OF GEMCITABINE UPTAKE
The membrane permeability of gemcitabine is poor in human cells. Its transport is mediated by five distinct hNTs with different affinities: two equilibrative-type transporters (hENT1, hENT2) and three concentrative-type transporters (hCNT1, hCNT2, hCNT3) [17][18][19]. Among these, hENT1 functions as the major gemcitabine transporter; in vitro experiments have demonstrated that increased expression of hENT1 is the critical factor for sensitivity to gemcitabine [20]. Restriction of intracellular uptake of gemcitabine by suppressed expression of hENT1 is one of the established mechanisms of drug resistance [19,21]. The majority of studies on patients with resected pancreatic cancer have suggested that high expression of hENT1 may be predictive of improved survival in patients treated with gemcitabine [22][23][24]. Disrupted expression of hENT2 on the plasma membrane causes impaired uptake of gemcitabine, resulting in acquired chemoresistance of pancreatic cancer cells [25].
Currently, several approaches to enhancing the efficiency of gemcitabine uptake, or to bypassing the hNTs, have been introduced. hCNT1 is frequently diminished in pancreatic cancer cells compared with normal pancreatic ductal epithelial cells [26], so pharmacological inhibition of hCNT1 degradation can increase the transport of gemcitabine and thus improve its efficacy [27]. A recent study indicated that mucin 4 (MUC4) suppresses hCNT1 expression and that inhibition of MUC4 enhances gemcitabine sensitivity [28].
NEO6002 is gemcitabine conjugated to cardiolipin [Figure 3A]. This molecule enters the cell independently of hNTs and exerts higher activity, with lower toxic adverse side effects, in a mouse tumour xenograft model [29]. Another lipophilic prodrug, the gemcitabine-elaidic acid conjugate CP-4126 [Figure 3A], also known as CO-101, is transported into cells independently of hENT1 and has been demonstrated to be effective in vitro and in various human cancer models [30]. However, a long-term survival analysis found that the survival of patients treated with CP-4126 was not superior to that with gemcitabine in patients with low expression of hENT1 [Table 1] [31]. This study was performed using an antibody against hENT1 (clone SP120), but a recent report by Raffenne et al. [32] using another antibody against hENT1 showed different results: using the clone 10D7G2, they demonstrated that hENT1 tumour expression was significantly associated with better DFS and OS in PDAC patients. Thus, the usefulness of CP-4126 should be re-evaluated.
Recently, nanoparticles loaded with gemcitabine have been developed. Nanoparticle encapsulation allows chemotherapeutic drugs to enter cells without being affected by cell-surface NTs. GEM-HSA-NP is a gemcitabine-loaded albumin nanoparticle; in patient-derived xenograft models, this nanoparticle has been shown to be more effective than gemcitabine in inhibiting tumour growth, irrespective of the expression levels of hENTs [33]. The squalenoyl-gemcitabine bioconjugate (SQdFdC) self-assembles into a stable nanoparticle [34]. This particle passively diffuses into cancer cells, accumulating mainly within cellular membranes, including those of the endoplasmic reticulum. Subsequently, it is released gradually into the cytoplasm and cleaved into dFdC [35]. This is an original transporter-independent pathway, and SQdFdC can overcome acquired resistance in a transporter-deficient human leukaemic cell line in vivo [35].
Chitkara et al. [36] conjugated gemcitabine to poly(ethylene glycol)-block-poly(2-methyl-2-carboxyl-propylene carbonate) (PEG-PCC), which could self-assemble into micelles of 23.6 nm. These micelles were shown to protect gemcitabine from plasma metabolism. Wonganan et al. [37] created PLGA-b-PEG-OH nanoparticles incorporating gemcitabine; these delivered gemcitabine effectively into hCNT-decreased tumour cells and were significantly more cytotoxic than free gemcitabine. These nanoparticles are summarized in Table 2.
The above-mentioned strategies are promising delivery systems for addressing transporter-deficient resistant cancers in the clinical setting.
REGULATION OF CDA EXPRESSION AND CDA INHIBITORS
Cytidine deaminase (CDA) is a ubiquitously expressed enzyme that catalyses the deamination of cytidine and deoxycytidine into uridine and deoxyuridine, respectively.
This enzyme participates in the pyrimidine salvage pathway that maintains the nucleotide pool balance for DNA and RNA synthesis. The great majority of gemcitabine is inactivated mainly by CDA [Figure 2], which mediates its conversion to difluorodeoxyuridine (dFdU) [38]. After deamination of gemcitabine, the metabolite is not further degraded but is excreted from the cell [39]. CDA is active in many organs, and dFdU, the major product of in vivo clearance, is the sole metabolite found in the urine [40]. CDA is released from cells and is found in the serum [41]; it has been detected in patients with several cancer types and correlates with responses to chemotherapy [42,43]. The CDA gene is affected by several genetic alterations, and marked variations in function, ranging from null to increased activity, have been observed [44]. A study conducted on pancreatic cancer patients treated with gemcitabine demonstrated a correlation between CDA activity and chemoresistance and concluded that patients with CDA activity of 6 U/mg or higher showed a five-fold or greater rate of disease progression [45]. A recent systematic review concludes that the CDA 79A > C polymorphism is a potential biomarker for toxicity of gemcitabine-based chemotherapy and that CDA testing is preferable before administration of gemcitabine [46].
CDA upregulation decreases the cellular gemcitabine concentration [Figure 4], and several studies have reported that increased CDA activity is associated with gemcitabine resistance in cancer cells. A hematopoietic cell line overexpressing CDA showed resistance to gemcitabine (2.4-fold in IC50 and 2.5-fold in IC80) [47]. On the other hand, studies using human tumour cell lines and tumour xenografts reported no association between chemoresistance and CDA activity [48,49]. These data show that CDA is not the only factor determining gemcitabine sensitivity in vivo, but its modulation may help to overcome chemoresistance.
In cancer cells, aberrations of the copy number of the CDA gene have not been reported. CDA expression is mainly regulated transcriptionally and/or post-transcriptionally. CDA expression in most cancers is lower than in the corresponding normal tissues because of DNA methylation in the promoter region [50,51]. miRNAs also regulate CDA expression: miR-484 directly inhibits CDA translation by targeting the CDA 3'UTR and induces chemoresistance in breast cancer cells [52], and decreased expression of miR-608 correlates with upregulation of CDA to induce chemoresistance in pancreatic cancer cells [53]. Albumin-conjugated paclitaxel (nab-paclitaxel) was shown to reduce CDA protein by producing reactive oxygen species in a mouse pancreatic cancer model; this may explain the usefulness of gemcitabine plus nab-paclitaxel (GnP) [54].
Table 2. Gemcitabine-loaded nanoparticles.
GEM-HSA-NP (carrier: albumin). In vitro: inhibited cell proliferation, arrested the cell cycle and induced apoptosis in pancreatic cancer cell lines. In vivo: more effective than gemcitabine in inhibiting tumour growth in PDX models, whether the expression levels of hENT1 were high or low; biotoxicity did not increase compared with gemcitabine [33].
SQdFdC (carrier: squalene). In vitro: exhibited superior anticancer activity in human cancer cells and gemcitabine-resistant murine leukaemia cells. In vivo: exhibited superior anticancer activity in experimental leukaemic mouse models after both intravenous and oral administration [34].
PEG-PCC GEM (carrier: PEG-PCC). In vitro: induced apoptosis in pancreatic cancer cell lines. In vivo: significantly inhibited tumour growth in xenograft-bearing mice [36].
PLGA-b-PEG-OH GEM (carrier: PLGA-b-PEG-OH). In vitro: effectively delivered gemcitabine into hCNT-decreased ovarian cancer cells and showed significant cytotoxicity compared with free gemcitabine [37].
Tetrahydrouridine (THU) was identified in 1967 using an affinity capture method with CDA as bait [58]. The inhibitory action of THU is based on its C4 hydroxyl group in the pyrimidine ring. Since the bioavailability of THU is weak [59], a new fluorinated version of this drug, termed (4R)-2'-deoxy-2',2'-difluoro-3,4,5,6-tetrahydrouridine [Figure 4], has been developed with better oral bioavailability [60]. DR was discovered in 1981; it cannot interact with CDA through the water/zinc complex. Its inhibitory activity instead results from an electrostatic interaction between the π electrons of the DR ring and the benzene ring of F137 of CDA, the catalytic site of the enzyme [61]. However, no results on the effectiveness of DR have yet been reported, even in cultured cells.
As mentioned before, CDA-high-expressing tumours are theoretically more resistant to cytidine-based therapies, including gemcitabine. On this assumption, several studies combining various chemotherapies with CDA inhibitors have been conducted to date. A Phase II clinical trial (ClinicalTrials.gov: NCT00978250, see Table 1), combining treatment with 5-fluoro-2'-deoxycytidine and THU, has just been completed; all 93 patients eligible for the study were assessed for PFS, including patients with advanced non-small cell lung cancer, breast cancer, bladder cancer, or head and neck cancer (https://www.clinicaltrials.gov/ct2/show/results/NCT00978250). Weizman et al. [62] suggested that tumour-infiltrating macrophages are responsible for stimulating the upregulation of CDA and the acquisition of chemoresistance against gemcitabine in pancreatic cancer cells. Modulation of macrophage trafficking may therefore offer a new strategy for improving the response of cancer cells to gemcitabine [62,63]. Thus, although CDA does not appear to be the only factor determining sensitivity to gemcitabine, its modulation remains a common strategy to overcome resistance.
TRANSPORTERS INVOLVED IN EFFLUX OF GEMCITABINE AND ITS METABOLITES
ATP-binding cassette (ABC) transporters are known to translocate a wide variety of substrates across the cell membrane and to mediate resistance against many therapeutic drugs, including anti-neoplastics and anti-infectives [64] . In addition, ABC transporters associate with a fraction of stem-like cells called side population (SP), refractory to Hoechst 33342 dye staining. This subpopulation was first isolated from murine hematopoietic cells [65] and then from human cells. Isolated SP cells from various kinds of human solid cancers escape from chemotherapy due to overexpression of the ABC transporters [66] , and Borst reviewed pan-resistance and ABC transporters [67] .
Several studies examining the importance of ABC transporters in gemcitabine resistance have confirmed that aberrant expression of ABCB1, ABCC, and ABCG2 is associated with multidrug resistance in pancreatic cancer [68]. On the other hand, MDR variants of two small cell lung cancer cell lines showed increased DCK activity [69], and human cancer cell lines overexpressing ABCB1 or ABCC1 showed increased sensitivity to gemcitabine [70]. Overexpression of ABCC4 and ABCC5 confers resistance to cytarabine and troxacitabine, but not to gemcitabine [71]. Inhibition of one or even several ABCC transporters (ABCC3, ABCC5 and ABCC10) did not efficiently or completely inhibit the efflux of gemcitabine [72]. Thus, the contribution of ABC transporters to gemcitabine resistance warrants further investigation.
PRODRUGS OF DCK FOR BYPASSING THE INTRACELLULAR PHOSPHORYLATION STEP
Once gemcitabine is transported into cells, phosphorylation by DCK is considered the major rate-limiting step in its activation. DCK has a Km value of 4.6 μmol/L for gemcitabine, compared with 1.5 μmol/L for deoxycytidine, which makes this drug an appropriate substrate [73]. Gemcitabine is also phosphorylated by thymidine kinase 2, a mitochondrial enzyme which phosphorylates a broad range of natural nucleosides [74], but the precise role of this enzyme in both the host toxicity and the anti-tumour activity of gemcitabine is unclear [7]. Inactivation of DCK has been shown to be one of the key mechanisms in the acquisition of gemcitabine resistance. The DCK gene was inactivated in all seven gemcitabine-resistant cancer cell lines obtained [75,76]. Knockdown of DCK leads to gemcitabine resistance in gemcitabine-sensitive cell lines, while re-expression of DCK restores chemosensitivity to gemcitabine in gemcitabine-resistant cell lines [75,77,78]. Clinical studies have shown that the DCK expression level in pancreatic cancer tissue is a reliable prognostic indicator of PFS, suggesting that DCK is a good biomarker of gemcitabine sensitivity for pancreatic cancer patients treated with gemcitabine [79,80]. Hu antigen R (HuR) is an RNA-binding protein that regulates DCK post-transcriptionally. HuR expression is strongly associated with the DCK mRNA level, and HuR-overexpressing cancer cells have been shown to be more sensitive to gemcitabine treatment [81,82].
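To make the Km comparison at the start of this paragraph concrete, the short sketch below applies the Michaelis-Menten relation v = Vmax·[S]/(Km + [S]) to the two quoted Km values; the substrate concentration and the assumption of equal Vmax are hypothetical choices made only for illustration, not values from the cited study.

```python
# Michaelis-Menten comparison of DCK phosphorylation of gemcitabine vs. deoxycytidine.
# Only the Km values (4.6 and 1.5 umol/L) come from the text; [S] and equal Vmax are assumed.
def fractional_rate(substrate_uM: float, km_uM: float) -> float:
    """v / Vmax for a single-substrate Michaelis-Menten enzyme."""
    return substrate_uM / (km_uM + substrate_uM)

substrate_uM = 5.0  # hypothetical intracellular substrate concentration (umol/L)
for name, km in [("gemcitabine", 4.6), ("deoxycytidine", 1.5)]:
    print(f"{name:>13s}: v/Vmax = {fractional_rate(substrate_uM, km):.2f} at [S] = {substrate_uM} umol/L")
```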
Modification of phosphorylated gemcitabine to bypass DCK-mediated activation may be an effective way to improve its function. NUC-1031 [Figure 3B] is a gemcitabine phosphoramidate prodrug produced with ProTide technology [83]. NUC-1031 enters the cell independently of the hENT1 transporter and does not require activation by DCK. Similar to the phosphorylated forms of gemcitabine, NUC-1031 is not subject to breakdown by CDA. In a Phase I study (NCT01621854), NUC-1031 demonstrated clinically significant anti-tumour activity even in patients with prior gemcitabine exposure and in cancers not traditionally perceived as gemcitabine-responsive [Table 1] [84]. A global randomized study (NuTide:121) including 828 patients with untreated advanced biliary tract cancer is ongoing [85]. NUC-1031 is the first anti-cancer ProTide drug to achieve initial success in clinical trials.
Δ-Tocopherol-monophosphate gemcitabine (NUC050) is a vitamin E phosphate nucleoside prodrug [Figure 3B] designed to bypass two mechanisms of gemcitabine resistance: downregulation of hNTs and downregulation of DCK. Uptake of NUC050 is not affected by hNTs, suggesting that it can bypass them, and NUC050 retains most of its activity in DCK-deficient cells, indicating that gemcitabine monophosphate is delivered into the cell [86].
Further formulation development will increase the safety and efficacy of these prodrugs to overcome the cancer chemoresistance induced by the down-regulation of DCK.
RADIATION-INDUCED ACTIVATION OF DCK
Most studies of synergism between radiation and chemotherapeutic agents, including nucleoside analogues, have aimed at radiosensitization of cancer cells. Gemcitabine is also employed clinically as a radiosensitizer [87]. The contribution of nucleoside analogues to these synergistic effects is thought to involve inhibition of DNA repair and modulation of nucleotide synthesis and availability. An alternative explanation for the synergism between radiation and nucleoside analogues is radiation-mediated chemosensitization. A number of studies have demonstrated that radiation alone can enhance the activity of DCK [88][89][90]. One previous study showed that DCK is phosphorylated at S74 by the DNA-damage-responsive kinase ATM and may thereby be activated [91]; this indicates a direct link between radiation and DCK activation. Another study showed that the ATM-related kinase ATR is also involved in phosphorylation of DCK at S74 [92]. The S74Q mutation of DCK increases Kcat values 11-fold for deoxycytidine and 3-fold for gemcitabine [93]. This, in turn, would explain the higher levels of active gemcitabine.
Recently, neoadjuvant therapy including radiation concurrent with gemcitabine has been conducted for borderline resectable pancreatic cancer [8] . Radiation may improve the cytotoxicity of gemcitabine by enhancing DCK activation.
CONCLUSION
Gemcitabine-based chemotherapy remains a cornerstone of treatment for patients with advanced cancers. Chemoresistance against gemcitabine is multifaceted; therefore, pursuing the improvement of this chemotherapy is still an important challenge. Novel methodologies are required to improve patients' prognoses.
In order to achieve an effective gemcitabine concentration within tumour cells, several considerations are needed. Nanoparticle-based medicine (nanomedicine) has numerous advantages over conventional medicines, including the ability to protect gemcitabine from degradation and to provide a targeted delivery system. Some nanomedicines can accumulate inside tumour cells through the incorporation of ligands that target molecules overexpressed on the cancer cell surface [94]. Elechalawar et al. [95] developed a targeted drug delivery system for pancreatic cancer using gold nanoparticles as the delivery vehicle, the anti-EGFR antibody cetuximab (C225/C) as the targeting agent, gemcitabine as the effective drug, and polyethylene glycol (PEG) as the stealth molecule. This nanoconjugate, termed ACG44P1000, showed enhanced cellular uptake and cytotoxicity towards pancreatic cancer cell lines in an in vitro study. Although the effect of this nanoconjugate may be limited, further investigations will lead to more effective improvements.
Tumours are heterogeneous and exhibit molecular complexity, with significant variation among patients. Treatment of cancer patients requires precision medicine based on genetic and biomolecular characteristics. The traditional one-size-fits-all chemotherapeutic approach can lead to unnecessary exposure to adverse side effects without the anticipated survival benefits [96]. In the last decade, improvements in high-throughput sequencing methods and transcript profiling have led to the discovery of many new targets for treatment. The identification of receptors overexpressed in cancer cells will enable the development of nanomedicines that improve selectivity for the cancer cells and reduce the off-target toxicities of gemcitabine. Further studies are needed for gemcitabine-based treatment to be incorporated into personalized medicine tailored to the numerous molecular therapeutic targets in multiple pathogenic pathways.
|
2020-10-25T13:39:33.969Z
|
2020-10-12T00:00:00.000
|
{
"year": 2020,
"sha1": "4c41973712277b5cd9997b9af963bc38816fdf54",
"oa_license": "CCBY",
"oa_url": "https://cdrjournal.com/article/download/3688",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fbe2215b89be2a4da30af2449cad7294a4d39c78",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
}
|
224818206
|
pes2o/s2orc
|
v3-fos-license
|
Hypoxia pathway proteins regulate the synthesis and release of epinephrine in the mouse adrenal gland
The adrenal gland and its hormones regulate numerous fundamental biological processes; however, the impact of hypoxia signalling on its function remains scarcely understood. Here, we reveal that deficiency of HIF (Hypoxia Inducible Factors) prolyl hydroxylase domain protein-2 (PHD2) in the adrenal medulla of mice results in HIF2α-mediated reduction in phenylethanolamine N-methyltransferase (PNMT) expression, and consequent reduction in epinephrine synthesis. Concomitant loss of PHD2 in renal erythropoietin (EPO) producing cells stimulated HIF2α-driven EPO overproduction, excessive RBC formation (erythrocytosis) and systemic hypoglycaemia. Using mouse lines displaying only EPO-induced erythrocytosis or anaemia, we show that hypo- or hyperglycaemia is necessary and sufficient to respectively enhance or reduce exocytosis of epinephrine from the adrenal gland. Based on these results, we propose that the PHD2-HIF2α axis in the adrenal medulla and beyond regulates both synthesis and release of catecholamines, especially epinephrine. Our findings are also of great significance in view of the small molecule PHD inhibitors being tested in phase III global clinical development trials for use in renal anaemia patients.
INTRODUCTION
The hypoxia signaling pathway regulates the expression of myriad genes involved in various biological processes in living animals, and the Hypoxia Inducible Factors (mainly HIF1 and 2) are the central transcription factors that regulate these processes. HIF expression, in turn, is under the direct control of a set of oxygen sensors known as the HIF-prolyl hydroxylase domain-containing proteins (PHD1-3). Under normoxic conditions, PHDs use oxygen as a co-factor to hydroxylate two prolyl residues in the HIFα subunits, thereby making HIFs accessible to the von Hippel-Lindau protein complex (pVHL) for subsequent ubiquitination and degradation [1].
Reduced cellular oxygen levels preclude such hydroxylation of HIFs by PHDs, resulting in stabilization of HIFα and direct transcriptional activation of more than 1000 genes. HIF transcriptional targets are primarily involved in a wide range of biological processes that serve to reverse the unfavourable hypoxic state, like erythropoiesis, blood pressure regulation and cell survival [2,3]. Hypoxia is also a central feature of multiple pathologies, including local and systemic inflammation, and various stages of carcinogenesis, including pheochromocytomas, which are tumors originating from the adrenal chromaffin cells [4].
Adrenal chromaffin cells are part of the adrenal medulla and, as the source of catecholamines, are crucially involved in the fight-or-flight response, which requires epinephrine secretion [5].
Biosynthesis of epinephrine, an important catecholamine, is a complex multistep process in which the last step involves methylation of norepinephrine into epinephrine. This is catalysed by the enzyme phenylethanolamine N-methyltransferase (PNMT). Whereas all human chromaffin cells in the adrenal medulla produce epinephrine, in rodents 20% of these cells do not express PNMT and produce only norepinephrine [6]. Several in vitro studies have focused on HIF involvement in regulating the enzymatic activity required for catecholamine synthesis, and chromaffin cell-derived tumor cell lines have shown an essential role for HIF2α in catecholamine production; specifically, it regulated the expression of intermediate enzymes, namely dopamine β-hydroxylase (DBH) and Dopa decarboxylase (DDC) [7]. Conversely, another in vitro study found no impact of HIF2α expression on tyrosine hydroxylase (TH) or Dbh [8], while downregulating PNMT expression [9]. The latter finding is in line with results from a number of other studies that have connected HIF2α activity in the adrenal medulla or in pheochromocytomas with reduced PNMT production [9][10][11]. Tumors with pVHL mutations also overexpress HIF2 target genes such as erythropoietin (EPO), a hormone central to red blood cell (RBC) formation [12,13]. Despite these observations, studies focusing on how modulations of hypoxia pathway proteins affect adrenal function are rather sparse, and only recently has a transgenic mouse line harbouring a whole-body HIF2α gain-of-function mutation been described, which showed reduced PNMT levels in the adrenal glands [14].
The concentration of epinephrine after stimulation of the sympathetic nervous system, including through hypoglycemia [5,15], can increase up to 30-fold in circulation [16][17][18]. Even though hyperglycemia is associated with inhibition of insulin exocytosis from β-cells [19,20], it is unknown whether it also influences epinephrine release from chromaffin cells. Interestingly, elevated levels of systemic EPO or treatment with erythropoiesis-stimulating agents are directly linked to a reduction in blood glucose levels (BGL), possibly due to higher glucose consumption by greater numbers of RBCs [21-23]. As the production and release of epinephrine are critical, in vivo studies are essential to better understand the direct and/or indirect impact of alterations in hypoxia pathway proteins on epinephrine production and release from the adrenal gland.
Here, we performed an in-depth study of the synthesis of epinephrine and its release from the adrenal gland using several transgenic mice that exhibit functional changes in one or more hypoxia pathway proteins. Our results demonstrate that PHD2 deficiency in the adrenal medulla results in HIF2α-mediated reduction in PNMT and consequent inhibition of epinephrine synthesis in the adrenal gland. Moreover, we show that enhanced exocytosis of epinephrine into circulation is dependent on EPO-induced hypoglycemia.
Mice
All mouse strains were maintained under specific pathogen-free conditions at the Experimental [...]. Peripheral blood was drawn from mice by retro-orbital sinus puncture using heparinized microhematocrit capillaries (VWR, Germany), and plasma was separated and stored at -80°C until further analysis. Urine was immediately frozen on dry ice after collection and stored at -80°C until further analysis. Mice were sacrificed by cervical dislocation and the adrenals were isolated, snap-frozen in liquid nitrogen, and stored at -80°C for hormone analysis or gene expression analysis. All mice were bred and maintained in accordance with facility guidelines on animal welfare and with protocols approved by the Landesdirektion Sachsen, Germany.
Laser microdissection
Adrenal
Alterations in hypoxia pathway proteins reduce adrenal epinephrine
Previously [...] (Figure 1C). Next, catecholamine levels in adrenal gland lysates showed a dramatic decrease only in epinephrine in P2H1 mice compared to their WT littermates, but not in the upstream hormones dopamine and norepinephrine (Figure 1A). We also observed a corresponding marked decrease in PNMT mRNA, protein, and enzymatic activity (Figure 1B-D). Conversely, no differences in the expression of other catecholamine-associated enzymes, such as Th or Dbh, were detected (Supplementary Figure 1D). Taken together, these results show that inhibition of the oxygen sensor PHD2 and one of its downstream HIF targets in the adrenal gland leads to diminished epinephrine synthesis.
Decreased adrenal epinephrine synthesis is unrelated to loss of HIF1α
To verify the impact of the individual hypoxia pathway proteins, we took advantage of the [...] (Figure 2). However, compared to WT mice, H1 mice showed no differences in any of the hormones measured (Figure 2C), strongly suggesting that loss of HIF1α alone does not play a significant role in catecholamine production in the adrenal gland. Importantly, these results support our initial observation that PHD2 loss alters epinephrine synthesis, probably through HIF2α stabilization.
Increased EPO is associated with PNMT-independent reduction in adrenal epinephrine synthesis
We have previously shown that both P2 and P2H1 mice, but not H1 mice, exhibit EPO-induced [...] along with dopamine levels (Figure 3A). In contrast to P2H1 and P2 mice, however, no differences in PNMT activity were observed, despite increased PNMT mRNA levels (Figure 3B).
Congruently, mRNA analysis of whole adrenals revealed no changes in Th or Dbh (Supplementary Figure 3B). Taken together, these observations suggest that the PNMT-independent reduction in adrenal epinephrine levels correlated with high systemic EPO.
EPO/erythropoiesis regulates epinephrine release from the adrenal gland
To characterize the impact of systemic EPO/erythropoiesis on epinephrine in the adrenal glands, we looked at potential changes in secretion by measuring its levels in urine and also by calculating the ratio of epinephrine to its precursor, norepinephrine, i.e., EPI/NEPI. Interestingly, we [...]
EPO-induced hypoglycaemia activates epinephrine release
Catecholamine release from chromaffin cells requires their exocytosis and this process is selectively stimulated by hypoglycaemia [18]. Recently, it has been shown that hyperactive erythropoiesis, as seen in EPO Tg6 mice, increases systemic glucose consumption and consequent hypoglycaemia, which is most probably attributable to a greater number of circulating RBCs [22]. Therefore, we sought to understand if and how EPO-associated hypoglycaemia can affect adrenal epinephrine release. We first measured blood glucose levels (BGL) in all mouse lines and found that both erythrocytotic P2H1 and EPO Tg6 mice display significant hypoglycaemia ( Figure 6A), while conversely, the anaemic FOXD1:cre-HIF2α f/f mice were dramatically hyperglycaemic ( Figure 6B). Next, we used a mouse pheochromocytoma cell line (MPC) to study how changes in blood glucose levels could affect cellular exocytosis. That exocytosis is a calcium-dependent process is well-established, and increased calcium uptake by a cell is directly correlated with greater exocytosis [37, 38]; thus, calcium uptake is an appropriate readout of exocytosis. Next, we exposed MPCs to increasing concentrations of glucose and observed significant inhibition of calcium uptake, indicating diminished exocytosis ( Figure 6C).
In contrast, direct exposure of MPCs to high EPO did not lead to changes in intracellular calcium (Supplementary Figure 5A and B), clearly suggesting that EPO would not directly influence epinephrine exocytosis; rather it is systemic hypoglycaemia secondary to EPO/erythrocytosis in mice that enhances adrenal epinephrine release.
Discussion
Here, we show that decreased PNMT expression in the PHD2-deficient adrenal medulla reduces epinephrine synthesis, but that systemic effects in these PHD2-deficient mice, such as enhanced EPO production by REPCs, consequent RBC excess, and hypoglycaemia, lead to greater exocytosis, i.e., enhanced release of this hormone from the adrenal gland (Figure 7). These results indicate an uncoupling between the synthesis and excretion of epinephrine upon PHD2 loss.
Mechanistically, we show that excessive epinephrine secretion is related to EPO-induced systemic hypoglycaemia, rather than a direct cell-level effect of EPO per se.
Multiple hypoxia pathway components have been suggested to define the functionality of the adrenal medulla. In the physiological sympathoadrenal setting, pVHL has been shown to be essential during development, and it is required by the peripheral oxygen [...]. These conclusions are borne out by the contrasting observations in the anaemic FOXD1:cre-HIF2α f/f mice [27], which show dramatically low levels of epinephrine in urine that were linked to hyperglycaemia, even though adrenal epinephrine and PNMT levels remained unchanged. We postulate that the higher dopamine levels in these mice serve to avoid NEPI and EPI overproduction in the adrenal gland.
In summary, we show that PHD2-mediated HIF2α stabilization in the adrenal gland has divergent local and systemic effects in vivo, i.e., while epinephrine synthesis is diminished, its urinary excretion is enhanced. Importantly, in these mice, adrenal synthesis of epinephrine and its urinary secretion appear to be clearly uncoupled, as the excessive urinary epinephrine secretion is attributable to systemic EPO-induced effects via the PHD2-HIF2-EPO axis, which include excessive erythrocytosis and a consequent increase in glucose consumption by these RBCs.
Mechanistically, we propose that the EPO-induced RBC excess and consequent hypoglycaemia lead to enhanced exocytosis of epinephrine by increasing Ca2+ [...]. This work was supported by the Heisenberg program (DFG, Germany; WI3291/5-1 and 12-1). We would like to thank Dr. Vasuprada Iyengar for English language and content editing.
Conflict-of-interest
The authors have declared that no conflict of interest exists.
Statistical significance was defined using a one-tailed or two-tailed Mann-Whitney U test, as required (*p<0.05; **p<0.005).
Supplementary figure 2. Gene expression analysis in the adrenals of P2 mice.
Bar graphs showing the qPCR-based mRNA expression analysis of Th and Dbh from the adrenals (n=4 vs 4) of the P2 mice and littermate WT controls. All the data are normalized to the average measurements in WT mice.
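To make the normalization and testing procedure concrete, the following minimal Python sketch normalizes relative expression values to the WT group average and compares genotypes with a Mann-Whitney U test, as described in the figure legends; the gene, group sizes and numbers are illustrative placeholders rather than data from this study.

```python
# Minimal illustration: normalize relative expression to the WT group mean and
# compare genotypes with a Mann-Whitney U test (as in the figure legends).
# The values below are made-up placeholders, not data from this study.
from scipy.stats import mannwhitneyu

wt = [1.10, 0.95, 0.88, 1.07]   # e.g. relative Th expression in WT adrenals (n = 4)
p2 = [1.02, 1.15, 0.91, 1.20]   # e.g. relative Th expression in P2 adrenals (n = 4)

wt_mean = sum(wt) / len(wt)
wt_norm = [x / wt_mean for x in wt]   # WT values centred on 1.0
p2_norm = [x / wt_mean for x in p2]   # mutant values expressed as fold of the WT average

u_stat, p_value = mannwhitneyu(wt_norm, p2_norm, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```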
Inflammatory response to a marathon run in amateur athletes
1 Institute of Radiology, Cantonal Hospital of Aargau, Tellstrasse 25, CH-5001 Aarau, Switzerland 2 Department of Cardiology and Electrotherapy, Medical University of Gdańsk 3 First Department of Cardiology, Medical University of Gdańsk, Poland 4 Department of Medical Laboratory Diagnostics Biobank BBMRI.pl, Medical University of Gdańsk, Poland 5 Beckman Institute for Advanced Science and Technology, 405 N Mathews Ave, MC-251, Urbana, IL, 61801, USA
Introduction
Statistics reveal a trend of increasing participation in mass endurance sports events, with the runners older and slower than ever before [1], which means that the number of amateur runners has increased. Amateur runners constitute a heterogeneous group in terms of fitness level, training regimen, medical history and cardiovascular risk factors. Moreover, the definition of an "amateur athlete" is not precise. The American Heart Association distinguishes elite, competitive and recreational athletes. The first and the second groups train with high intensity in organized teams with an emphasis on competition and performance, whereas the latter engage in sports activity for pleasure and in their spare time.
Regular exercise reduces cardiovascular risk and all-cause mortality, with a 20-30% reduction in adverse cardiovascular events compared with individuals who have a sedentary lifestyle. The current European Society of Cardiology guidelines recommend a minimum of 150 min of moderate-intensity exercise over 5 days or 75 min of vigorous exercise over 3 days per week for a healthy adult [2].
Although the benefits of regular moderate-intensity exercise remain indisputable, there is a concern that long-duration, high-intensity endurance sports may elicit negative effects on the heart by triggering structural, functional and electrical remodeling, thereby increasing the risk of arrhythmias. One of the postulated mechanisms is the inflammatory response following intensive endurance exercise [3]. Sorokin et al. in their review showed that endurance-trained athletes are at increased risk of developing atrial fibrillation, with the possible mechanisms being increased parasympathetic tone, increased atrial size and an increased inflammatory reaction [4]. There are also reports that high-intensity leisure-time physical activity, by stimulating the inflammatory reaction, may contribute to the development of atherosclerosis in the long run [5].
For over 50 years, the utility of various biomarkers in the diagnosis of cardiovascular diseases has been analyzed [6], yet their implications remain not fully understood. Many novel biomarkers have recently been discovered, including inflammatory biomarkers such as pentraxin-3 (PTX-3) and neopterin. However, there is a lack of data on amateur athletes as to whether such sporting activities are associated with the activation of an inflammatory reaction. The aim of this study was to investigate the effect of running a marathon on the inflammatory response in a group of male amateur runners.
Material and Methods
The study was carried out on a group of 40 male amateur marathoners who competed in and finished the 2nd PZU Marathon in Gdańsk, Poland. Participation in the study was voluntary. Enrolment into the study was completed via invitations sent to sports clubs. Each participant signed a written consent form prior to enrolment. The study protocol was approved by the Independent Bioethics Committee for Scientific Research at the Medical University of Gdańsk (No. NKBBN 104/2016). Information about health and training conditions was gathered via structured interviews. The exclusion criterion was a history of past or chronic illness. After finishing the marathon run, we asked the participants to suspend high-intensity training as well as participation in any upcoming competitions. The characteristics of the study group were described previously [7].
We divided the study into three stages. Blood samples from the cubital vein were collected at each stage. Stage 1 was carried out 2 weeks before the run, Stage 2 directly after finishing the run at the finish line, and Stage 3 took place 2 weeks after the marathon. Fasting blood samples at Stage 1 and Stage 3 were collected at the cardiology department. Serum was prepared immediately after collection by centrifugation at 2000 rpm at room temperature for 12 minutes and then stored at -80°C for further analysis [7].
Samples from each stage were analyzed for the counts of leukocytes, neutrophils, lymphocytes, monocytes, eosinophils, basophils and immature granulocytes, and for the concentrations of fibrinogen and creatine kinase. Biochemical parameters were analyzed using an Architect c8000 (Abbott). Endothelin-1 (ET-1) concentration was measured using a solid-phase sandwich Quantikine ELISA (R&D Systems) with a sensitivity of 0.207 pg/ml and a detection range from 0.39 to 25 pg/ml. PTX-3 concentration was measured using a solid-phase sandwich ELISA Human Pentraxin 3/TSG-14 DuoSet (R&D Systems) with a detection range from 218 to 14000 pg/ml. Neopterin concentration was measured using a solid-phase competition ELISA (Demeditec Diagnostics) with a sensitivity of 0.7 nmol/l and a detection range from 1.35 to 111 nmol/l. Interleukin 6 (IL-6) concentration was measured using a solid-phase sandwich Quantikine ELISA (R&D Systems) with a sensitivity of 0.7 pg/ml and a detection range from 3.1 to 300 pg/ml. Continuous variables were expressed as means ± standard deviation (SD). Before the statistical analyses, the Shapiro-Wilk test was used to test the normal distribution of variables. Analysis of variance (ANOVA) for repeated measures was used to test statistical differences between groups of variables. Post-hoc analysis was performed with Tukey's test. For the variables analyzed at two stages only, the t-test for dependent variables was used. The data were analyzed using Statistica 12 software (StatSoft). A p value < 0.05 was considered statistically significant [7].
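As an illustration of the statistical workflow described above (normality testing, repeated-measures ANOVA across the three stages, and Tukey post-hoc comparisons), the following Python sketch uses synthetic values; the subject IDs, biomarker name and numbers are assumptions for demonstration only, not study data.

```python
# Sketch of the analysis pipeline: Shapiro-Wilk normality check, repeated-measures
# ANOVA across the three stages, and Tukey post-hoc comparisons.
# All values, subject IDs and column names are synthetic placeholders.
import pandas as pd
from scipy.stats import shapiro
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Long-format table: one IL-6 value per runner per stage (balanced design).
data = pd.DataFrame({
    "runner": [1, 2, 3, 4] * 3,
    "stage":  ["S1"] * 4 + ["S2"] * 4 + ["S3"] * 4,
    "il6":    [1.2, 0.9, 1.5, 1.1, 38.0, 42.5, 29.8, 35.1, 0.8, 0.7, 0.9, 0.6],
})

# Normality check per stage before the ANOVA.
for stage, grp in data.groupby("stage"):
    stat, p = shapiro(grp["il6"])
    print(stage, "Shapiro-Wilk p =", round(p, 3))

# Repeated-measures ANOVA across the three stages.
res = AnovaRM(data, depvar="il6", subject="runner", within=["stage"]).fit()
print(res.anova_table)

# Tukey post-hoc test for pairwise stage comparisons.
print(pairwise_tukeyhsd(data["il6"], data["stage"]))
```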
Study group
The characteristics of the studied group are presented in Table 1 [7].
Biochemical analysis
The results of the analysis of white blood cell counts and of fibrinogen and creatine kinase concentrations are presented in Table 2.
Mean leukocyte count at Stage 1 was 5.8 G/l. At Stage 2 it was 16.5 G/l, and it differed significantly from the results at Stage 1 and Stage 3. There was no significant difference between the leukocyte counts at Stages 1 and 3. Table 3 shows the concentrations of the analyzed biomarkers.
The mean concentration of ET-1 was the highest at Stage 2 (3.2 ± 0.9 pg/ml) and it differed significantly from both Stage 1 and Stage 3. Neopterin showed the same trend. PTX-3 concentrations differed significantly between all the stages, with the highest concentration at Stage 2 and the lowest at Stage 3. The concentration of IL-6 was significantly higher at Stage 2 compared to Stage 1 and exceeded the norm many times (Norm: lower or equal to 1.8 pg/ml). At S3 the mean concentrations of IL-6 were undetectable.
Discussion
In our study, we found that running a marathon increased the inflammatory response in amateur runners. This was probably due to skeletal muscle damage, as the inflammatory biomarkers were normalizing 2 weeks after the run. To our knowledge, this is the first study to demonstrate the impact of a bout of intense exercise on the inflammatory response in male amateur marathoners, assessed on the basis of changes in PTX-3 and neopterin concentrations. In our study group, the increased concentrations of creatine kinase at Stage 2 suggest exercise-induced muscle damage. This was accompanied by a significant increase in the concentrations of all the studied biomarkers compared with Stage 1 and Stage 3. At Stage 2, significant leukocytosis with an increase in all leukocyte fractions was also observed. Kosowski et al. investigated cardiovascular stress biomarkers in middle-aged non-athlete marathon runners. Blood samples were collected (before, just after and 7 days after the marathon) and analyzed for endothelin-1, troponin I and N-terminal pro B-type natriuretic peptide concentrations. The authors concluded that the marathon was associated with a significant increase in cardiovascular stress biomarkers, but the profile of these changes did not suggest irreversible myocardial damage [8].
It has been suggested that completing a marathon has similar physiological sequelae to the acute-phase response: neutrophil leucocytosis, increased creatine kinase activity, a rise in C-reactive protein and fibrinogen levels and an increase in plasma cortisol concentration [9]. On the other hand, significant increases of the creatine kinase concentration and elevation of inflammatory markers have been observed after prolonged cardiopulmonary resuscitation [10] or direct current cardioversion [11].
Endothelin-1, which is released by leukocytes, macrophages and fibroblasts [12], is not only a potent vasoconstrictor of the smooth muscle cells but it also has a pro-inflammatory effect [13,14]. Its expression is increased in response to cytokines, reactive oxygen species, angiotensin II and thrombin [15]. ET-1 has been shown to stimulate monocytes to produce interleukin-8 (IL-8) and monocyte chemoattractant protein-1 (MCP-1) [16], and also acts as a mast cell activator resulting in the release of inflammatory cytokines such as tumor necrosis factor alpha (TNF-alpha) and IL-6 [17]. A study of mice showed that intensive endurance exercise increases the occurrence of atrial fibrillation in a mechanism of inflammation and atrial fibrosis with the involvement of a soluble TNF-alpha signaling pathway [18]. More than a 33-fold increase in the mean concentration of IL-6 at S2 compared to the baseline value is consistent with the observations of Pinho et al. [19] (a group of Ironman race participants) and Schobersberger et al. [15] (participants of an ultramarathon). Intensive physical exercise causes an increase in oxygen consumption and induces oxidative stress due to free radical production, which in turn stimulates cytokine production from various cell types and upregulates the inflammatory cascade [20][21].
Neopterin is a biomarker of the cellular immune response released by activated macrophages and dendritic cells upon activation with gamma-interferon, and it acts as a modulator and mediator in inflammatory and infectious processes [22]. A significant increase in the post-run concentration of neopterin is consistent with the observations published by Schobersberger et al. [23] and Sprenger et al. [24], who examined well-trained runners after running 20 km in 2 hours. The pentraxin superfamily comprises short and long pentraxins. PTX-3 is a member of the long pentraxin group and is believed to play a regulatory role in innate immunity, sterile and non-sterile inflammation, tissue repair, and cancer [25]. PTX-3 is an acute-phase protein, produced locally by monocytes, endothelial cells and fibroblasts in response to pro-inflammatory signals such as interleukin 1 beta or TNF alpha. The major source of PTX-3 is vascular endothelial cells [26]. Increased plasma levels of PTX-3 were found in patients with acute myocardial infarction, heart failure, atherosclerosis and after cardiac arrest. Salio et al. indicated that PTX-3 plays a protective role against myocardial ischemia in their study on a mouse model of myocardial infarction. PTX3-deficient mice showed exacerbated cardiac damage with a greater no-reflow area, increased neutrophil aggregation, a decreased number of capillaries and an increased number of apoptotic cardiomyocytes [27].
The influence of intense endurance exercise on plasma concentrations of PTX-3 in humans has not yet been extensively studied. Miyaki et al. measured plasma PTX-3 concentrations in young male endurance runners and sedentary controls and found higher concentrations in the first group as a result of a postulated training-induced cardioprotection [28]. In contrast, Suzuki et al. showed a hypertrophic response and left ventricular systolic dysfunction as a consequence of increased afterload in a mouse model of transverse aortic constriction. Transverse aortic constriction (TAC) is one of the most common surgical models of pressure overload-induced cardiac hypertrophy and heart failure. In the TAC model, a permanent constriction is placed around the transverse aorta, limiting left ventricular outflow and thereby creating pressure overload in the left ventricle. Echocardiography indicated that PTX-3 overexpression promoted tissue remodelling, left ventricular systolic dysfunction and myocardial fibrosis, while these responses were suppressed in PTX3-deficient mice [29].
Two weeks after the marathon, white blood cell counts, creatine kinase and fibrinogen levels, as well as ET-1 and neopterin concentrations returned to baseline, and PTX-3 level fell below the Stage 1 value. The concentration of IL-6 at S3 was undetectable. We explain this by the lack of intensive training or participation in any sporting events between S2 and S3 compared to the pre-marathon preparation period according to the study protocol. This trend suggests that a marathon run does not cause a prolonged inflammatory response.
On the other hand, numerous studies have reported that the health benefits of extreme forms of physical activity, such as ultra-endurance sports, are attenuated in an inverted J-curve dose-response model, with an increased risk of adverse ventricular remodeling, fibrosis and arrhythmias. La Gerche emphasizes the phenomenon of cardiac overtraining as a potential mechanism of arrhythmogenesis in endurance athletes, in which chronic adverse cardiac remodeling reflects an imbalance between exercise-induced injury and an insufficient period of regeneration [30][31].
Kwaśniewska et al. observed the population of physically active men for over 25 years and reported that the most favourable effect against atherosclerosis was associated with energy expenditure between 2050 and 3840 kcal/week. Regular and very high physical activity was accompanied by the deterioration of the examined indicators of atherosclerosis (increased calcification of the coronary arteries and intima-media thickness). The authors postulated that intense physical activity in free time may be associated with the intensification of low-grade inflammation and thus has a pro-atherosclerotic effect [5].
Currently, there is little data available regarding the long-term effects of intense endurance activity on the inflammatory response. However, a prospective, long-term study of at least 130 marathon runners is currently underway by Schoenfeld et al. [32] and may provide important information on this topic. The aim of this study is to assess the physiological response of the cardiovascular system and potential abnormalities after 10 years of long-term vigorous endurance exercise.
The main limitation of our study is the lack of long-term observation. The biomarker kinetics were not monitored in the time interval between S2 and S3. Secondly, the study was conducted on a relatively small group of male participants only. Finally, we did not include echocardiographic imaging; however, this was not the purpose of this part of the study.
Conclusions
Our study appears to be the first to investigate the changes in PTX-3 and neopterin concentrations in amateur athletes. We demonstrated that male amateur marathon runners follow similar trends in inflammatory biomarker kinetics to those seen in Ironman participants, ultramarathoners and elite athletes. Intensive endurance exercise causes an acute, transient rise in the concentrations of inflammatory biomarkers in amateur marathon runners, together with leukocytosis and increased creatine kinase. In the short-term follow-up, the concentrations of all studied parameters normalized, suggesting that the inflammatory cascade is mainly induced by exercise-induced muscular damage. Further research is needed to investigate the long-term effects of recurrent exercise-induced inflammation on the cardiovascular system.
First record of Scolopendrellopsis from China with the description of a new species (Myriapoda, Symphyla)
Abstract The genus Scolopendrellopsis Bagnall, 1913 is recorded from China for the first time and Scolopendrellopsisglabrussp. n. is described and illustrated. The new species is characterized by the short central rod on head, third tergite complete, four kinds of sensory organs present on antenna, and the cerci rather short and covered with a low number of straight setae.
Introduction
There are 204 symphylan species known in the world to date (Szucsich and Scheller 2011; Domínguez Camacho and Vandenspiegel 2012; Bu and Jin 2018); however, only a few publications deal with those from Asia. Hansen first described five species of Symphyla from Southeast Asia (Hansen 1903). After that, several species were described from India (Scheller 1971), Indonesia (Scheller 1988), the USSR (Scheller and Golovatch 1982), the Russian Far East (Scheller and Mikhaljova 2000) and Iran. Symphyla is poorly studied in China, with only Hanseniella caldaria from Zhejiang province and Geophilella orientalis from Hebei province recorded (Zhang and Wang 1992; Bu and Jin 2018). Three genera, Scutigerella, Scolopendrelloides, and Symphylella, were also mentioned for China, but without determined species recorded (Zhang and Wang 1992). During our ecological survey of soil animals of Zhejiang, Jiangsu, and Hainan provinces in recent years, many symphylans were obtained. Among them, one new species of Scolopendrellopsis was identified and is described in the present paper.
Materials and methods
Most specimens were collected during a soil animal survey project on Gutian Mountain, Zhejiang Province, during the years 2012 to 2013; others were collected recently in Jiangsu and Hainan provinces. All were extracted by means of Tullgren funnels from soil and humus samples and preserved in 75% ethanol. They were mounted on slides using Hoyer's solution and dried in an oven at 60 °C. Observations were made with a phase contrast microscope (Leica DM 2500). Photographs were taken by a digital camera installed on the microscope (Leica DMC 4500). Line drawings were drawn using a drawing tube. All specimens are deposited in the collections of Shanghai Natural History Museum (SNHM) and Shanghai Entomological Museum (SEM), Shanghai, China.
Taxonomy
Family Scolopendrellidae Bagnall, 1913
Genus Scolopendrellopsis Bagnall, 1913, new record
Diagnosis. Habitus slender. First pair of legs present, 3-segmented and with claws, not more than one-half the length of the following pairs. Trunk with 16 or 17 tergites, most of the tergites with a pair of posterior processes, without any striped band between each pair of processes, and some tergites transversely divided.
Distribution. The genus Scolopendrellopsis includes fifteen species and is subcosmopolitan, widely distributed in Palaearctic, Nearctic, Neotropical, Ethiopian, Oriental, and Australian regions (Szucsich and Scheller 2011). It is newly recorded from China in this paper.
Scolopendrellopsis glabrus sp. n.
http://zoobank.org/95E5B444-5DEF-49CB-A699-E9730BD69528
Figs 1-3, Tables 1-3
Diagnosis. Scolopendrellopsis glabrus sp. n. is characterized by the short central rod on the head, the 3rd tergite not divided and with only a weak middle indentation, rod-like sensory organs surrounded by setae on the dorsal side of the 3rd-17th antennal segments, cavity-shaped organs on the dorsal side of the subapical 5-6 antennal segments, mushroom-shaped organs on the lateral side of the subapical 4-7 segments and bladder-shaped organs on the subapical 3-6 antennal segments, the first pair of legs longer than the tarsus of the last pair of legs, and the cerci short and covered with a low number of straight setae.
Description. Adult body 1.57 mm long on average (1.45-1.65 mm, n = 8), holotype 1.65 mm (Figure 1A). Head longer than wide, length 145-175 μm, width 133-170 μm, with the widest part a little behind the middle, on a level with the points of articulation of the mandibles. Central rod distinct, with the anterior part absent, length 45-49 μm, approximately one-third of the head. Dorsal side of head covered with sparse setae of different lengths, the longest setae (12-17 μm) located at the anterior part of the head, approx. 3.0 times as long as the central ones (4-5 μm). Cuticle around the Tömösváry organ and the anterolateral part of the head with rather coarse granulation. Central and posterior parts of the head with dense pubescence (Figs 1B, 3A).
Figure 1 (Holotype): A habitus; B head, dorsal view; C right antenna, 3rd-6th segments, dorsal view; D right antenna, 12th-16th segments, ventral view; E right antenna, 11th-16th segments, dorsal view; F left Tömösváry organ; G stylus on base of 6th leg (arrow); H stylus on base of 11th leg (arrow); I first pair of legs; J 3rd leg and coxal sac; K 9th leg and coxal sacs; L cerci, dorsal view. ro, rod-like sensory organ with surrounding setae; co, cavity-shaped organ; mo, mushroom-shaped organ; bo, bladder-shaped organ. Scale bars: 100 μm (A), 20 μm (B-L).
Mandible with eleven teeth and divided into two parts by a gap, with five anterior and six posterior teeth respectively. First maxilla has two lobes, inner lobe with four hook-shaped teeth, palp bud-like with two distal points close to outer lobe ( Figure 3B). Anterior part of second maxilla with many small protuberances and posterior part with sparse setae. Cuticle of second maxilla covered with pubescence ( Figure 2A).
Antennae with 15-19 segments (16 in holotype), length 250-350 μm (320 μm in holotype), approx. 0.2 of the length of the body. First segment cylindrical, greatest diameter a little wider than long (20-26 μm : 16-25 μm), with four setae in one whorl, the longest seta (6-11 μm) inserted at the inner side and distinctly longer than the outer ones (5-8 μm). Second segment wider (20-30 μm) than long (18-22 μm), with six or seven setae evenly inserted around the segment, the inner setae (6-10 μm) a little longer than the outer ones (5-7 μm). Chaetotaxy of the 3rd segment similar to the preceding ones (Figure 3C). Setae on the basal segments 1-3 slender, and on proximal and distal segments rather short. Basal and median parts of the antennae with only a primary whorl of setae; in the subapical segments one or two minute setae present in a secondary whorl (Figure 3E). Four kinds of sensory organs present on the antenna: rod-like sensory organs surrounded by setae on the dorsal side of the 3rd-17th segments (Figs 1C, 3C, 3D); cavity-shaped organs on the dorsal side of the subapical 5-6 segments (Figs 1E, 3D); mushroom-shaped organs on the lateral side of the subapical 4-7 segments and bladder-shaped organs on the subapical 3-6 segments (Figs 1D, 1E, 3D, 3E). Apical segment subspherical, width 21-22 μm, length 19-20 μm, with 10-12 short setae, a wide connection to the preceding segment, and two fire-shaped and three baculiform organs on the apex (Figs 1D, 3D, E). All segments covered with short pubescence. Chaetotaxy and sensory organs of the antennae are given in Table 1.
Trunk: seventeen dorsal tergites present, with the 6th, 9th, 12th, and 15th tergites transversely divided and longer than the preceding ones (Figs 2E, 2G, 2H, 2J). Intertergal zones between preceding and following tergites present, except for the 14th and 15th, 16th, and 17th tergites. Tergites 2nd-13th and 15th each with one pair of slender, slightly finger-like chitinous processes. Basal distances between processes approx. equal to their length from base to tip, which is longer than their basal width. All tergites pubescent, and the margins of the apical part of the processes ornamented with rows of coarse granules. Apical seta on the processes slightly anteriorly located, and anterolateral setae slightly longer than the other setae. No seta between the apical and inner basal setae (Figs 2B-2H).
Etymology. The species name glabrus, meaning bald, refers to the low number of setae on the cerci.
Distribution. China (Zhejiang, Jiangsu, Hainan).
Remarks. Scolopendrellopsis glabrus sp. n. is similar to S. hirta (Scheller, 1971) and S. spinosa (Scheller, 1979) in the shape of the 3rd tergite, which is not divided, the shape of the processes on the tergites, and the shape of the sensory organs on the antennae. It differs from the latter two species in the absence of the anterior part of the central rod (anterior part present but indistinct in S. hirta, distinct in S. spinosa), the chaetotaxy of the 2nd and 3rd tergites (with four and five lateromarginal setae respectively in S. glabrus, five and six in the other two species), the cerci with a lower number of setae (more setae in S. hirta and S. spinosa), and all setae on the cerci long and straight (setae on the inner side of the cerci slightly curved in S. hirta, most setae on the cerci short and curved in S. spinosa). It is also similar to the worldwide species S. subnuda in the shape of the first three tergites, the number of lateromarginal setae of the 3rd tergite, and the shape and number of setae of the cerci, but differs in the absence of the anterior part of the central rod (anterior part present in S. subnuda) and in the apical seta on the processes being slightly anteriorly located (rather close to the apex in S. subnuda).
Long Physico-Chemical and Biological Monitoring for Treated Artificial Tidal Flats with Recycled Paper Sludge in Ago Bay, Japan
Treated dredged sediment by using Hi Biah System (HBS) was used for the construction of five different artificial tidal flats (E1-E5) in Ago Bay, Japan. AGOCLEAN-P which is mainly made from paper sludge ashes was used as a coagulant and hardener. After the construction, continuous monitoring for the physico-chemical and biological parameters was carried out quarterly for 28 months from May 2005 to August 2007. Physico-chemical parameters measured were; particle size, loss on ignition (LOI), total organic carbon (TOC), chemical oxygen demand (COD), water content (WC), chlorophyll a, and acid volatile sulphide (AVS). At the end of the experiment, physico-chemical parameters for the artificial tidal flats along with the sandy natural flat (S2) were almost similar. In regards to the natural muddy flats (S1) the results showed remarkable increase in the particle size <0.75 μm throughout the experimental period, whereas insignificant decrease was observed for the medium particle size. In addition the area around station (S1) was characterized by high concentration of silt/clay around 75% for the particle size less than 75μm. Biological parameter was represented by macrobenthos abundant as number of individuals, biomass, and species number. The abundant macrobenthos during the first year of monitoring was mollusca followed by polychaeta then bivalve, however at the end of the experiment, bivalve was the most dominant macrobenthos followed by mollusca and then polychaeta indicating that healthy environment was created.
According to the Mie Prefecture Department of Agriculture, Fisheries, Commerce and Industry (2004), the annual pearl oyster production from Ago Bay is estimated to be 91 tons (dry weight), and pearl oyster shell production is estimated to be 821 tons. Although pearl oysters are not fed with formulated feeds, it is believed that the pearl farms are one of the important contributors to the organic enrichment of the underlying sediments of the bay by filtering phytoplankton and discharging feces in concentrated form [1]. In addition, after nucleus insertion into the oyster body, the nets and shells must be cleaned weekly throughout the cultivation period to remove accumulated dirt. Dredging the organically rich sediments is one of the ways to help the ecosystem recover. Usually the water content of the dredged sediments is over 90 wt%, so transportation costs increase dramatically, especially if the CDFs are far away from the dredging activities. In order to reduce the water content of watery mud sediments, the Hi Biah System (HBS) was developed [2]. The water content of the dredged sediments was reduced from 90 wt% before treatment to 60 wt% after treatment by using AGOCLEAN-P as a coagulant.
The sediment produced by this innovative technology has been used in many applications, such as making granular micro-habitat beads for microorganisms in order to treat contaminated seawater and polluted sediments. Sintered products made from dredged sediments fabricated at 400°C were found to be very effective adsorbents not only for the removal of heavy metals such as arsenic(III), cadmium(II), chromium(VI) and lead(II), but also good adsorbents for phosphate and hexavalent chromium removal from aqueous solution [3,4]. Another application is creating a stable surface for culturing seagrass such as [...]. Sea bottom sediment was also used as a substitute for fine sand aggregate in the fabrication of concrete solids and marine reefs [5]. In constructing artificial tidal flats, the selection of a suitable sediment medium may be the primary factor in creating the sediment environment. For this reason, mountainous sand alone is unsuitable for the construction of artificial tidal flats, as it lacks the silt and clay components as well as the organic matter necessary to establish the same physico-chemical and biological structure as natural tidal flats [6]. In Ago Bay, five artificial tidal flats were constructed mainly from treated dredged sediment mixed with natural sand from the same area and monitored continuously for 20 months. Results showed that the environmental conditions, the number of benthos individuals and the growth of short-necked clams in the artificial tidal flats were similar to those observed in a natural tidal flat [2].
This work is a continuation of the previous study, in which sampling was carried out continuously for 20 months; seasonal data were collected for a further 8 months for the physico-chemical properties and for macrobenthos abundance. Two nearby natural stations (S1, S2), characterized as muddy and sandy natural flats respectively, were also monitored for the same parameters to obtain a complete picture of the local macrobenthos community, which is an important indicator of aquatic ecosystem health. Figure 1 summarizes the construction method of the tidal flats (E1-E5) in Ago Bay. Ago Bay, the constructed tidal flats (E1-E5) and natural flats (S1 & S2), the hardeners used, other engineering details, the parameters analyzed and the location of Ago Bay have been described elsewhere [2]. Starting with particle size analysis, the time-series monitoring of the constructed artificial tidal flats (E1-E5) showed dissimilarity in the fraction of particles <0.75 µm and in the median particle size during the first 9 months of monitoring (until Feb 2006), as shown in Fig. 2a. The natural flat (S2) shows the same pattern. After this period the values became fairly stable. The fraction of particles smaller than 75 µm in all artificial flats (E1-E5, along with S2) ranged between 15 and 45%.
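For readers unfamiliar with grain-size analysis, the following minimal Python sketch shows how the silt/clay fraction (particles finer than 75 µm) and the median grain size (d50) can be derived from a cumulative sieve curve; the sieve openings and percent-passing values are illustrative assumptions, not measurements from this study.

```python
# Illustrative sketch: derive the fraction finer than 75 um and the median grain
# size (d50) from a cumulative grain-size curve. The sieve sizes and percent-passing
# values below are made-up placeholders, not measurements from this study.
import numpy as np

sieve_um = np.array([2000, 850, 425, 250, 150, 75, 38])   # sieve openings, um
pct_passing = np.array([100, 96, 83, 61, 44, 28, 12])     # cumulative % finer

# Fraction classified as silt/clay (finer than 75 um).
silt_clay_pct = np.interp(75, sieve_um[::-1], pct_passing[::-1])
print(f"silt/clay (<75 um): {silt_clay_pct:.0f}%")

# Median grain size d50: the size at which 50% of the sample is finer.
# Interpolate on the cumulative curve (percent passing vs log grain size).
d50 = 10 ** np.interp(50, pct_passing[::-1], np.log10(sieve_um)[::-1])
print(f"median grain size d50: {d50:.0f} um")
```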
At the end of the experiment there was no significant variation in the median particle size, as shown in Fig. 2b. It is interesting to note that higher values were observed for flat E5 in the first 3 months and for E2 in the first 6 months (Fig. 2a & b) owing to the flat preparation methods: steel sludge was used as a ball-shaped hardening material in E5, while pellets (~3 cm long and 1 cm in diameter) were prepared in E2 with the inorganic coagulant (AGOCLEAN-P), made mainly from paper sludge ashes. The seasonal variation in physico-chemical parameters such as loss on ignition (LOI), total organic carbon (TOC), and chemical oxygen demand (COD) gives a very good indication of the organic matter in the sediments; accordingly, almost the same pattern was obtained for the three parameters for the artificial flats (E1-E5) plus the natural station (S2), as shown in Figure 2. The minimum LOI was observed for the flat made from oligotrophic sand (no treatment with HBS); TOC and COD showed the same pattern over the same study period. Water content (WC) varied between 18 and 50% for the artificial flats, whereas for the natural flats it was in general higher. Benthic growth is strongly connected with the presence of chlorophyll a [6]. Eighteen months after construction, chlorophyll a reached its maximum values for all flats, after which a slight reduction was noticed (Fig. 3). The amount of acid volatile sulphide (AVS) in sediments serves as a critical parameter in determining metal bioavailability and toxicity [7]; it was high for the muddy natural flat (S1) at time zero. After 28 months of continuous monitoring, AVS was kept within reasonable values (between 0.09 and 0.28 mg/g for the artificial flats and the natural sandy flat S2), whereas for S1 the AVS value was 0.55 mg/g. Sampling was conducted on a quarterly basis from May 2005 till August 2007 (Fig. 4a-c). From three months after construction, the macrobenthos population in the artificial tidal flats (E1-E5) showed a gradual increase throughout the experimental period compared to the natural flats (S1, S2), except on a few occasions where the total number of macrobenthos was similar or lower (Fig. 4a). Even though low wet biomass of macrobenthos was observed at all artificial and natural flats during June and September 2006 (Fig. 4b), the abundance of macrobenthos as individuals/0.1 m2 and the number of species were not affected, indicating that large macrobenthos such as bivalves were a minority and the most abundant macrobenthos were mollusca (Fig. 4a-c). It is worth noting that there was a high correlation between the high AVS and TOC values obtained during September 2006 (Fig. 3) and the sudden decrease in wet biomass (Fig. 4c), indicating that the high concentrations of TOC and AVS might be the primary factors affecting the types of macrobenthos.
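The relationship noted above between sediment organic load and the drop in wet biomass could be quantified, for example, with a Spearman rank correlation across the quarterly sampling dates; the Python sketch below assumes placeholder series and is not based on the measured data.

```python
# Illustrative sketch: quantify the relationship between sediment organic load
# (TOC, AVS) and macrobenthos wet biomass across sampling dates with a Spearman
# rank correlation. The arrays are made-up placeholders, not the measured series.
from scipy.stats import spearmanr

toc_mg_g  = [4.1, 4.8, 5.2, 7.9, 6.3, 5.0, 4.6]             # quarterly TOC, mg/g
avs_mg_g  = [0.10, 0.12, 0.15, 0.41, 0.22, 0.14, 0.11]      # quarterly AVS, mg/g
biomass_g = [12.5, 14.0, 13.2, 4.1, 9.8, 15.3, 16.7]        # wet biomass, g/0.1 m^2

for name, series in (("TOC", toc_mg_g), ("AVS", avs_mg_g)):
    rho, p = spearmanr(series, biomass_g)
    print(f"{name} vs biomass: rho = {rho:.2f}, p = {p:.3f}")
```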
The dominant group in the first year was mollusca, followed by polychaeta and then bivalves. However, at the end of the monitoring period it was notable that bivalves were the dominant group, followed by mollusca and then polychaeta. This phenomenon could be attributed to the food web in the marine ecosystem, in which microalgae are the primary producers in the tidal flat, while arthropoda, mollusca and polychaeta are probably the main food sources for large crustaceans.
Conclusions
The conclusions derived from this long monitoring work can be summarized as follows: at the end of the experiment, the particle size of the flats treated with AGOCLEAN-P reached almost the same level in the artificial flats as in the sandy natural flat, whereas the muddy natural flat was higher in silt/clay and lower in average particle size. The artificial tidal flats had lower contents of LOI, TOC, and AVS. In contrast, chlorophyll a was higher in the artificial flats throughout the monitoring period. From 3 months onward the macrobenthos population increased gradually in the artificial flats, which had higher individual numbers than the natural flats. A different pattern was noticed for wet-weight biomass and number of species: in the first year both values were higher in the natural mudflats, after which the reverse was observed. The outcome of this long-term monitoring with respect to enhancing biological productivity was evident from the environmental parameters tested. Furthermore, the toxicity of the coagulant and/or hardeners used in the tidal flat construction was negligible.
Acknowledgments
This work was performed as part of a joint collaboration research project entitled, "Environmental Restoration Project on the Enclosed Coastal Seas, Ago Bay", supported by CREATE (Collaboration of Regional Entities for the Advancement of Technological Excellence) organized by the Japan Science and Technology (JST) Agency. The main research was partly supported by the Ministry of Education, Culture, Sports, Science, and Technology of Japan. Parts of the experiments were conducted at Mie University, Japan and Tati University College, Malaysia. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the view of the supporting organizations.
Sustainable Development: Myth or Reality?
The report “Our Common Future” gives a definition of sustainable development. In principle, the idea of sustainable development is extremely humane and noble, and it has no alternative. But at the same time this idea in the modern world looks very unrealistic. This is more a slogan than a scientific concept. Sustainable development of our planet is a global process, it is an ideal, because our planet is a single balanced geoecological system. However, today theoretically sustainable development can be achieved only in a small number of highly developed post-industrial countries. In developing countries, unfortunately, there can be no question of sustainable development. In other words, at the global level, it is not possible to achieve sustainable development in the near future. There can be no sustainable development in a single country. But this does not mean that all countries without exception do not need to implement environmental protection activity. On the contrary, it is necessary to carry out such activities everywhere. But this will not be sustainable development, this will be local measures for the rational use of nature. But all these measures are of a local nature, they will not become global, which means that this will not be a sustainable development. However, the term “sustainable development” has gained wide popularity, is humane in nature, so it may remain, but we should remember that this is just a conditional term, and in fact it is a rational use of nature on a local level. Examples of sustainable development strategies and projects in a number of countries are given. It is shown that most of these projects are in essence projects on rational nature use in individual regions. The other part which concerns global problems, can be implemented only by developed countries, they also cannot be sustainable development projects.
Introduction
In 1972, the UN Conference on the Environment was held in Stockholm. The most important conclusion reached by the Conference is the recognition of the existence of an inseparable link between a safe environment and socio-economic development. The outcome of the Conference was a report prepared in 1987 by the World Commission on Environment and Development, chaired by the former Prime Minister of Norway, Ms. Harlem Brundtland, on the optimal development of humanity "Our Common Future" (Our Common Future, 1987). The report proved the necessity and possibility of sustainable triune development, which unites the environment, social and economic components, as the only real path for the further development of civilization.
The main conclusion of the Commission, as H. Brundtland stated, is that "the environment is the place of our life, and development is our actions to improve our well-being in it. Both of these concepts are inseparable". As a consequence, there is a need to achieve sustainable socio-economic development, in which decisions would be taken with full consideration of geoecological factors.
I deliberately emphasize-geoecological, not ecological factors, because these two terms are often confused, but these are completely different things. Ecology is a biological science that studies the interaction of a living being with the physical environment surrounding it, i.e., nature. Geoecology or geographical ecology is a geographic science that studies the interaction of a living being (in this case a human being) with the physical, economic (anthropogenic) and social environments surrounding it (Gorbanyov, 2018).
Methodology Frame
The Commission formulated the definition of sustainable development. This "is a development that can provide needs of the present time without compromising the ability of future generations to meet their own needs". It is important to note that this definition is very close to the ideas expressed by the Soviet geographer David Armand back in 1964 in his famous book "To us and the grandchildren" (Armand, 1964) . Even the title of the book echoes the idea of sustainable development. Armand writes: "The moral duty of each generation is to leave the next natural wealth in better condition and in greater quantity than it received from the previous one". It is not difficult to see that this idea of Armand is extremely close to the ideas stated in the report of the Commission: environmental protection "should be considered as part of our moral duty towards other people and future generations". Just an amazing resemblance! But the book of David Armand was published 23 years earlier than the report of the Brundtland Commission. However, Armand did not use the term "sustainable development", instead of it he talked about the rational use of nature.
Theoretical Base and Discussion
It was mentioned above that David Armand did not speak about sustainable development but about the rational use of nature, which, in our opinion, is much more correct. Many Russian scientists (although not all of them), including myself, are skeptical about the concept of sustainable development. In principle, the idea of sustainable development is absolutely humane and noble, but it is more a slogan than a scientific concept. The idea of sustainable development resembles the horizon, or communism, toward which one can strive but which one can never reach.
The conditions and goals of sustainable development are very utopian. The Russian geographer Dmitry Lury stressed that the concept of sustainable development should consist of a system of restrictions: limiting the growth of population, limiting the consumption of natural resources, limiting the growth of efficiency in the use of natural resources, limiting the destruction of ecosystems and even limiting freedom and scientific and technological development (Ljuri, 2006). But in the coming decades, mankind will not be able to realize these limitations. Therefore, the most likely scenario for D. Lury will be further destabilization of the situation, which will lead to a global environmental crisis, and the concept of sustainable development will remain a wonderful dream. This crisis is a natural stage in the development of civilization. Therefore, we need to prepare for a controlled crisis, i.e., find ways to curb this crisis and minimize it. [...] Only such a society is able to consciously accept the restrictions set by D. Lury and, at the same time, preserve or even increase its social and economic potential. It is in these countries that sustainable development is theoretically possible. But the population of developed countries, as is known, is about 20% of the total. The remaining 80% of the population live in developing countries that are in the industrial or even pre-industrial phase of development. In these countries, there can be no question of sustainable development. In the foreseeable future, the population, the consumption of natural resources and energy, and the degradation of ecosystems will only increase. Therefore, at the global level, sustainable development is impossible in the foreseeable future.
It is quite another matter that in individual countries and regions projects of rational use of nature, of the kind D. Armand spoke about, can be carried out: projects on the rational use of natural resources, on the protection of ecosystems and the environment, on combating desertification, deforestation, hunger, poverty, etc. All these projects are of a local level and can bring the ecological crisis under control; in essence, they are all projects of the "controlled crisis" that D. Lury spoke about.
The Soviet geographer Vsevolod Anuchin most clearly showed the interdependence and unity of nature and society (Anuchin, 1978). He was the first to give a philosophical and theoretical rationale for the concept of rational nature use, emphasizing that the rational use of nature is a capacious concept. It includes the problems of the integrated use of natural resources in a given territory. Nature use implies not only the effective involvement of natural resources in the production process, but also their protection, and often their restoration and transformation. Without understanding the unity of society and nature, the rational use of nature is impossible.
The most characteristic feature of modernity is the ecologization of economic development. The economy of the industrial era is aimed at economic growth in the context of ever-increasing consumption, and therefore at the destruction of the environment and in particular of ecosystems, which are first and foremost the foundation of life and only secondarily a natural resource.
The approaching era of post-industrialization radically changes the essence of nature use. If earlier it was a question of the state of particular types of natural resources, now humanity faces a global geoecological problem in which all components of the environment (natural, technogenic, and social) are interwoven into a single knot.
In post-industrial developed countries, the consumption of raw materials has fallen sharply over the last 20-30 years. The "knowledge economy" contributes to the mitigation of environmental problems. At the same time, in industrialized and, even more so, in pre-industrial countries, poverty persists, environmental degradation intensifies, and geoecological catastrophes become more frequent. In the EU countries, significant funds are allocated for environmental projects: 4-9% of GDP; in the USA less, about 2.5% of GDP, although this figure is growing.
As a result of environmental measures in these countries, it has been possible to reduce the burden on the environment while increasing production volumes. Developed countries, which provide more than 80% of world GDP, produce about 65% of global waste and 50% of carbon dioxide emissions, although it should be noted that these countries account for less than 20% of the world's population. According to IMF estimates, the consumption of natural resources per unit of finished products in developed countries falls by 1.23% annually. The use of recyclables is expanding: in Germany, agricultural waste and used oils are recycled at a rate of 90%, and car bodies at 98%. At the same time, the rational use of natural resources is achieved through geographical shifts in the structure of the economy: energy- and material-intensive industries are being replaced by knowledge-based industries, with the former increasingly moving to developing countries.
Of course, it should be noted objectively that in developed countries post-industrial functions are closely intertwined with industrial ones, which is typical above all of medium and small enterprises and companies. The needs of the population in developed countries have by no means diminished; on the contrary, they keep growing. Today, one resident of a developed country consumes as many resources as 20 people in developing countries, and energy consumption by one American is equivalent to that of 14 Chinese or 531 Ethiopians. In general, developed countries consume 50% of global energy and 80% of raw materials.
And yet the intensive economy of post-industrial countries demonstrates flexibility and the ability to reorient itself to the changing conditions of nature use. As a result of the introduction of resource-saving technologies, these countries have managed to reduce the resource intensity of their GDP by a factor of 1.5-2.0 within 10-20 years.
At the same time, post-socialist and developing countries continue along an extensive path of catching-up development, i.e., their volumes of resource consumption grow in parallel with (or even faster than) their economies.
As noted above, population growth plays a decisive role in the use of natural resources, leading to increased consumption. This link was first pointed out by the Englishman Thomas Malthus, who concluded that population growth would outstrip the growth of the means of subsistence, leading to hunger and other negative consequences.
In addition to resource constraints on the growth of the world economy and, accordingly, on the use of natural resources, there are geoecological constraints, namely negative changes in the quality of the environment. Environmental quality is affected by such processes as global climate change, loss of biodiversity, deforestation, desertification, soil degradation, pollution of the seas, fresh water, soil and atmosphere, water scarcity, and other negative processes and phenomena.
The Results and Recommendations
To study the correlation between sustainable development processes and post-industrialization, we assessed the dependence of the Environmental Performance Index (EPI) on the share of the population engaged in services (Figure 1), as well as the dependence of the EPI on GDP per capita in selected countries (Gorbanyov, 2011). The EPI was developed jointly by Yale and Columbia Universities.
The main goal of the index is to assess the degree of environmental sustainability in individual countries. What is taken into account is not how degraded the environment is, but the actions taken to prevent its degradation. Each country is assessed on the basis of 25 criteria collected in 10 groups, which in turn are divided into two parts: ecosystem vitality and environmental health. The first part contains 7 groups of criteria, the second 3, and the weight of each group is expressed as a percentage. Thus, in the "ecosystem vitality" part, the key role (25%) is played by climate change criteria reflecting anthropogenic emissions of gases; in the "environmental health" part, 25% is due to the morbidity of the population caused by environmental degradation. 120 countries were considered, i.e., the vast majority of countries, with the exception of some least developed countries and small states in the Caribbean and Oceania.

Graphs of the dependence of the EPI index on the share of the population employed in the service sector and on per capita GDP were constructed. Through the obtained field of points an averaged curve was drawn, and parallel to this curve additional curves were drawn to its left and right, between which the greatest density of data scatter is observed (the "main data band"). Analyzing the graphs, it is easy to see that the dependence has an exponential character: the larger the proportion of the population engaged in services, the higher the EPI, i.e., with the growth of post-industriality the measures to prevent environmental degradation increase. The same trend can be observed on the graph of the dependence of the EPI index on GDP per capita: the more economically developed the country, the greater the measures taken to prevent deterioration of the environment. Four groups of countries can be identified on the graphs of the dependence of the EPI index on employment in the service sector and on GDP per capita.
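To make the procedure just described more concrete, the short Python sketch below fits a saturating curve to hypothetical EPI-versus-GDP observations and flags a crude "main data band" around the fitted curve. The data values, the functional form, and the one-standard-deviation band are illustrative assumptions and do not reproduce the study's dataset or its exact fitting method.

```python
# Sketch of the EPI-vs-GDP analysis: fit an averaged curve through the point
# cloud and mark the points that fall inside a band around it.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical observations: GDP per capita (thousand USD) and EPI score.
gdp = np.array([0.6, 1.2, 3.0, 8.0, 15.0, 25.0, 40.0, 60.0])
epi = np.array([32.0, 38.0, 47.0, 58.0, 66.0, 74.0, 81.0, 86.0])

def saturating(x, a, b, c):
    """EPI rises quickly at low GDP and levels off for rich countries."""
    return a - b * np.exp(-c * x)

params, _ = curve_fit(saturating, gdp, epi, p0=(90.0, 60.0, 0.05))
fitted = saturating(gdp, *params)

# Crude "main data band": countries within one residual standard deviation.
inside_band = np.abs(epi - fitted) <= np.std(epi - fitted)
print(params, inside_band)
```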
The first group comprises the leaders of post-industrial development and of environmental protection policy. In these countries the EPI index is above 80, employment in the service sector exceeds 70%, and GDP per capita is over 35 thousand US dollars. This group includes only four countries: Iceland, Switzerland, Sweden and Norway. At the same time, on the graph of the dependence of the EPI index on GDP per capita these countries "fell out" of the main band (except Norway).
The second group also consists of countries that are now joining or have already entered the post-industrial phase, where the EPI index is above 70, the share of employment in the service sector exceeds 65% and GDP per capita is more than 20 thousand US dollars, excluding the countries of the first group. This group includes France, Austria, Malta, Great Britain, Finland, Germany, Italy, Japan, New Zealand, Spain and Singapore. A special place is occupied by Panama and Belize: in terms of employment in the service sector (67%) they fall into this group, as both countries are very actively involved in ecotourism, but in terms of GDP per capita they immediately "fly out" of it. The reverse situation has developed with Slovakia, Portugal and the Czech Republic: they entered this group thanks to a rather high GDP per capita, but when the share of the population engaged in services is used, they leave the group and hardly meet the criteria of post-industrialism.
The third, rather numerous but compact group includes countries of an industrial type and catching-up countries, as well as a number of post-industrial countries. These countries have an EPI above 55, more than 50% of the population employed in the services sector, and a correspondingly lower GDP per capita threshold. Some of them have undoubtedly entered the post-industrial phase of development but fell out of the "main band" because their EPI is not very high (58-68), which indicates insufficient measures taken by the state to improve the state of the environment. This factor does not allow these countries to enter the second, let alone the first, group.
Finally, the fourth group includes countries where the EPI index is above 30, the share of the population employed in the service sector is above 10% and GDP per capita is above 500 US dollars, excluding the countries of the first, second and third groups. This group consists mainly of developing countries that are in the industrial or even pre-industrial phase of development, where care for the environment is minimal and in some countries absent altogether. These are countries such as Romania, Sri Lanka and others.

Very interesting is the Canadian program to manage the territory of the Fraser River basin, the world's largest river system for the reproduction of salmon. This program, according to its authors, is an excellent example of a regional approach to implementing a sustainable development strategy for a river basin where more than 2 million people live. The program brings together government agencies at the federal and provincial levels and broad sections of the local population. Similar programs have gained immense popularity in Canada: hundreds of villages and local communities across the country are developing comparable "sustainable development" plans and strategies. Again, this is a very successful and useful program of regional nature use, but not of sustainable development.
Back in 1993, the President of the United States issued a decree establishing the Presidential Council on Sustainable Development, which is designed to advise the president on all aspects of sustainable development. The Council relied in its activities on the concept of the unity of economic development, environmental protection and the achievement of social justice. On this basis, ten goals were formulated for the country's transition to sustainable development:
- health and the environment;
- support for economic prosperity;
- equality between people;
- protection of natural resources;
- responsible management;
- sustainable communities in order to achieve a healthy society;
- citizen participation in decision making;
- ensuring the stabilization of the population;
- international responsibility;
- equal access to education.
In 2009, the US president signed a decree obliging federal agencies to carry out activities to achieve sustainable development goals: reducing the use of petroleum products by cars by 30%, increasing water use efficiency by 26% by 2020, and increasing the degree of waste recycling so as to reduce waste by 50% by 2020. Measures to achieve sustainable development were also taken at the regional level.
It should be noted, though, that the United States and Canada were the first countries to raise environmental legislation to the rank of public policy. At the same time, it is known that President D. Trump has spoken out sharply against measures to combat climate change, and it is hard to say how he will act on sustainable development.
In the UK, the Sustainable Development Strategy was developed in 1999. The Strategy focuses not on reducing the volume of production, but on increasing the efficiency of the use of natural resources.
Priority is given to increasing the efficiency of energy use and to waste utilization. Since 1970, energy consumption has remained practically unchanged, although GDP has increased by 80%. As for waste reduction, the strategic directions are reducing waste generation and recycling.
In 2017, the German Federal Government adopted a new version of the German Sustainable Development Strategy. Each federal agency is called upon to make a reasonable contribution to the achievement of the goals set. The Strategy aims at cost-effective, socially balanced and ecologically friendly development. It presents the measures envisaged by Germany for the implementation of the 17 sustainable development goals, including the following:
- elimination of poverty and hunger, ensuring gender equality;
- ensuring universal quality education;
- ensuring the availability and rational use of water resources;
- providing access to energy sources;
- promoting progressive and sustainable economic growth;
- ensuring the safety and environmental sustainability of cities;
- taking urgent measures to combat climate change;
- rational use of the seas and oceans;
- protection and restoration of terrestrial ecosystems and their rational use.
The strategy of sustainable development worked out by Germany corresponds exactly to the concept of sustainable development elaborated at the international level and, unlike other national concepts, is more global (with a few exceptions) than regional in nature, which is why it looks quite utopian; the concept does not even hint at regional projects.

A different example is the Agenda 21 program for the Baltic Sea region, aimed at the implementation of sustainable development goals. The program covers the Baltic Sea region, which includes all the Scandinavian countries, Germany, Poland and Russia. As stated in the documents of the program, the consideration and solution of problems of sustainable development proceeds from the fact that the countries, cities and the entire population of the Baltic Sea region can achieve sustainable development only if they act in concert and cooperate constantly, regardless of political and economic differences and frontiers.
The content of Agenda 21 includes four main groups: social aspects, rational use of natural resources, strengthening the role of major population groups, and means of implementation.
All the national strategies and projects noted above are, in essence, measures of rational nature use, whether regional or national in character. Projects of a global nature, such as combating climate change or the rational use of the oceans, involve almost exclusively the developed countries, while the vast majority of developing countries remain aloof from the process; therefore, these projects have nothing to do with sustainable development.
In 1960, Russia adopted its first nature-protection law, "On Nature Protection in Russia". This law formulated for the first time the main provisions of the concept of the rational use of nature: in particular, the idea of the unity of the use and protection of nature, the responsibility of the state and society for preserving the natural environment, and a number of other ideas consonant with the idea of sustainable development.
In 1996, the President of Russia issued a decree on the "Concept of the Transition of the Russian Federation to Sustainable Development", which instructed the government to develop a strategy for the sustainable development of Russia. However, such a strategy has not yet been adopted at the official level. At the same time, there are local strategies of sustainable development, some of them quite successful, although, I repeat, in essence these are local strategies of rational nature use. One example is the Nevel district of the Pskov region, one of the most underdeveloped areas of Russia, where the Nevel-21 project was carried out. A special place in the project was occupied by a section devoted to the development of ecological agriculture and industrial fisheries, as well as environmental education and recreation. Later this project grew into a new one: the creation of the "Pskov Center for Sustainable Development of Border Territories with the Republic of Belarus".
Conclusion
Summarizing what has been said, I would like to emphasize that, despite its negative sides, the concept of sustainable development has the right to exist. Taking into account the support that the concept has received in the world and its enormous educational potential, there seems to be no point in abandoning it, but one should always keep in mind that in fact it is a matter of the rational use of nature in a particular region of the globe. The way to very remote sustainable development lies through local projects of rational nature use, covering both the natural and the anthropogenic (socio-economic) environment.
|
2019-05-30T23:46:00.044Z
|
2018-11-02T00:00:00.000
|
{
"year": 2018,
"sha1": "a5c9b2eb4fe89c9656c55f3aed9aac30abfffe05",
"oa_license": "CCBY",
"oa_url": "http://www.scholink.org/ojs/index.php/jecs/article/download/1668/1811",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b891486ad83fe12308042d4b5445cce70bf7340d",
"s2fieldsofstudy": [
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
}
|
256217678
|
pes2o/s2orc
|
v3-fos-license
|
Detection of Detached Ice-fragments at Martian Polar Scarps Using a Convolutional Neural Network
Repeated high-resolution imaging has revealed current mass wasting in the form of ice block falls at steep scarps of Mars. However, both the accuracy and efficiency of ice-fragments’ detection are limited when using conventional computer vision methods. Existing deep learning methods suffer from the problem of shadow interference and indistinguishability between classes. To address these issues, we proposed a deep learning-driven change detection model that focuses on regions of interest. A convolutional neural network simultaneously analyzed bitemporal images, i.e., pre- and postdetach images. An augmented attention module was integrated in order to suppress irrelevant regions such as shadows while highlighting the detached ice-fragments. A combination of dice loss and focal loss was introduced to deal with the issue of imbalanced classes and hard, misclassified samples. Our method showed a true positive rate of 84.2% and a false discovery rate of 16.9%. Regarding the shape of the detections, the pixel-based evaluation showed a balanced accuracy of 85% and an F1 score of 73.2% for the detached ice-fragments. This last score reflected the difficulty in delineating the exact boundaries of some events both by a human and the machine. Compared with five state-of-the-art change detection methods, our method can achieve a higher F1 score and surpass other methods in excluding the interference of the changed shadows. Assessing the detections of the detached ice-fragments with the help of previously detected corresponding shadow changes demonstrated the capability and robustness of our proposed model. Furthermore, the good performance and quick processing speed of our developed model allow us to efficiently study large-scale areas, which is an important step in estimating the ongoing mass wasting and studying the evolution of the martian polar scarps.
HiRISE operates on a nearly sun-synchronous orbit and takes images at the same local time of the day [2]. Therefore, HiRISE imagery is very well suited for scientific investigations requiring change detection techniques. Active mass wasting, such as gully activity including erosion and deposition of material [3] and ice block falls [4], [5], [6], can be investigated through change detection. Equatorward-facing steep scarps at the periphery of the martian North Polar Layered Deposits (NPLD) are composed of several-kilometer-thick stacks of dusty water ice layers that record martian climate history over millions of years [7]. Due to thermoelastic stresses [8], [9], they experience fracturing that leads to ice block falls [1], [4]. There are two ways to estimate the mass wasting volume: from the vacant gap left in the source scarp and from the collection of ice blocks at the foot of the scarp. The source regions of block fall events were first mapped with the help of changed-shadow detection by [10], [11]. Fig. 1(a) shows an example of an NPLD scarp in a HiRISE image. The topography shows that the slope of this scarp reaches up to 45° [see Fig. 1(b)]. Obvious fractured slab-like ice-fragments can be seen along the scarp. These ice-fragments can detach from the scarp [one example is indicated by the red arrow in Fig. 1(c)]. They rest as ice blocks of different sizes on the underlying Basal Unit [inside the red circle in Fig. 1(d)]. A coarse-to-fine ice block detection approach that considers the illumination properties was proposed in [6], providing a robust and effective way to identify ice blocks larger than 0.5 m in diameter. However, some ice blocks break into pieces or even into finer, pulverized material while rolling downslope, leaving no trace in remote sensing images. Hence, the investigation of the sources of ice block falls (i.e., the detached ice-fragments in the source scarp) is a more reliable way to monitor the ongoing activity of the NPLD.
Manual searching and mapping of the detached ice-fragments is very time-consuming given the large amounts of data involved, and conventional computer vision methods may not reach the required efficiency and accuracy [12], [13]. Artificial intelligence can not only reduce the workload of humans, especially for image analysis over large-scale areas [14], but can also achieve satisfactory detection accuracy [15], [16], [17]. In this article, we use a deep learning method to perform change detection in order to extract the detached ice-fragments at the scarps of the NPLD. More specifically, the area of the ice-fragment is automatically mapped by segmentation, which has been widely used in medical image diagnostics [18], [19] and remote sensing image analysis [20], [21], [22], [23]. U-Net is a typical convolutional neural network architecture for image segmentation [24]. It is a U-shaped architecture consisting of a specific encoder-decoder scheme: the encoder path reduces the spatial dimensions of the feature maps while increasing the channels, and the decoder path reconstructs the spatial dimensions and reduces the channels. Since then, improved U-Net models have been developed, such as U-Net++ [25] and U-Net 3+ [26]. Furthermore, modified U-Net models, which substitute the basic forward convolutional units with other convolutional blocks, such as VGG-16 weight layers [27], residual blocks [28], or recurrent convolutional layers [29], have substantially increased segmentation performance.
The deep residual network (ResNet) introduces a shortcut connection to handle the degradation problem that the performance of the model decreases while increasing the depth of the network [30], [31]. There are many variants of the ResNet architecture, such as ResNet-34, ResNet-50, and ResNet-101, which have the same concept but with different number of layers. Starting from the 50-layer ResNet, a 3-layer 'bottleneck' block was introduced by [30], which contains a stack of 3 layers: 1×1, 3×3, and 1×1 convolution layers. ResNet combined with other networks has been widely used in object detection and classification tasks [32], [33], [34]. For example, ResU-Net, which combines ResNet and U-Net, has been used in many medical scenarios, such as identifying organs [35], [36] and detecting medical devices [37], [38]. In addition, in remote sensing, tasks like landslide mapping [39], [40], building extraction [41], [42], and land cover segmentation [43], [44] can also be handled by the ResU-Net.
In the application of change detection, unlike ordinary single image detection, we need to simultaneously feed two or more images covering the same geographical area into the model. The Siamese network is an architecture that contains two or more identical subnetworks to generate and compare feature vectors for each input [45]. It can be applied to different cases, such as detecting duplicates, finding anomalies as well as face recognition [46], [47]. The idea of using same weights while extracting features from bitemporal images is also suitable for change detection.
The attention mechanism has been well used in neural networks to improve the performance of the encoder-decoder scheme [48]. It permits the network to devote more focus on regions of interest so that the most relevant vectors will be attributed the highest weights. Oktay et al. [49] suggested to integrate an attention gate into the U-Net model (attention U-Net), which can improve the prediction performance and preserve computational efficiency as well. Li et al. [50] developed a pyramid attention network, which combines an attention mechanism and a spatial pyramid, to extract precise dense features. Ni et al. [38] proposed an augmented attention module to fuse semantic information in high-level feature maps with global context in low-level feature maps, aiming to learn discriminative features and emphasize key semantic features. He et al. [51] considered using the low-resolution semantic images as prior to guide the attention module to focus on the target of interest. Recent work has shown that adding the attention mechanism to the change detection process can improve the recognition ability of the model [52], [53]. The challenge in our study area is that shadows are much easier to identify than ice-fragments. Adding an attention module into the decoder part can greatly help the model learn to suppress irrelevant regions while highlighting specific objects.
The loss function is a way to measure how well a designed model predicts by quantifying the error between the prediction and the ground truth. Various loss functions can be used to handle image segmentation problems, such as cross entropy loss [54], dice loss [55], and focal loss [56]. The cross entropy loss is a distribution-based loss function aiming to minimize the dissimilarity between the predicted distribution and the true distribution. It is popular in segmentation tasks due to its stability [57]. However, when facing a serious class imbalance, i.e., when the number of foreground pixels is much smaller than the number of background pixels, the prediction becomes heavily biased towards the background. The dice loss, inspired by the dice coefficient, measures the relative overlap between the prediction and the ground truth and is not affected by imbalanced data [55]. The focal loss is an improved version of the cross entropy loss designed to combat the difficulty of detecting hard, misclassified objects [56]. The loss functions behave differently in specific segmentation tasks; when detecting objects that are small in proportion and difficult to identify, a combined loss function may effectively exploit their respective merits.
Our study area only consists of ice. Existing segmentation methods mainly target diverse categories that are distinguishable from each other. The main difficulty of our task is that the ice-fragments are hard to classify even by visual inspection, because they are very similar to the background. Furthermore, extracting the detached parts while excluding the changed shadows requires a customized deep learning model. The contributions of our work are as follows.
1) Under the Siamese network architecture, the bitemporal images are handled by identical subnetworks in tandem so as to generate the respective features of the bitemporal images, and the features of their difference image are likewise rich in multilevel information.
2) The augmented attention module takes the features of the difference image into consideration, so that the network puts more weight on the changed area, which is consistent with the essence of change detection.
3) A combination of dice loss and focal loss alleviates the issue of imbalanced classes as well as hard, misclassified samples.
The rest of this article is organized as follows. Section II describes the details of our proposed deep learning-driven change detection model as well as the loss function. In Section III, we show the experimental work and our results. Section IV discusses the benefits and limitations of the technique. Finally, Section V concludes this article.
A. Deep Learning-Driven Change Detection Model
The overall procedure of our deep learning model for change detection is illustrated in Fig. 2. T1 and T2 are a pair of coregistered HiRISE images showing the pre- and postdetach event. They are separately fed into two identical convolutional neural networks that have the same architecture, parameters, and weights. The features extracted from T1 and T2 can be expressed as f_{T1} = H_f(T1; w) and f_{T2} = H_f(T2; w), where the feature maps f_{T1}, f_{T2} ∈ R^{H×W×C}, and H, W, and C represent the height, width, and channel dimension of the feature map. H_f(.) represents the residual function followed by an activation function, and the weight w is mirrored (shared) between the two branches and updated jointly during training. We use residual blocks from ResNet-50 as the backbone to extract features from the low level to the high level [30], [31].
Residual networks help to overcome the degradation problem while increasing the depth of the network [30], [31]. In our model, encoding takes five stages to obtain multilevel features from T1, while for T2 only the first four stages are required; the reason lies in the way T1 and T2 are connected. The first stage is a convolution layer followed by 3×3 max pooling, which picks the maximum value from each 3×3 patch to reduce the size of the feature map, keeping essential features while lowering the number of parameters. Stages 2-5 apply the "bottleneck" block 3, 4, 6, and 3 times, respectively.
At each stage, we subtract the features of T2 from those of T1 to obtain the absolute difference image d_{T1−T2} = |f_{T1} − f_{T2}|, which is skip-connected into the attention module. The difference image helps guide the network to focus on the changed area.
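As a concrete illustration, the following PyTorch sketch shows the shared-weight (Siamese) encoding and the per-stage absolute difference described above. The stage grouping follows the standard torchvision ResNet-50 layout; the exact channel handling and stage usage in the authors' model may differ, so this is a sketch of the idea rather than a reimplementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SiameseDiffEncoder(nn.Module):
    """Encode T1 and T2 with the same ResNet-50 stages and keep |f_T1 - f_T2|."""
    def __init__(self):
        super().__init__()
        r = resnet50()
        # Stage 1: 7x7 conv + max pooling; stages 2-5: bottleneck blocks.
        self.stages = nn.ModuleList([
            nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool),
            r.layer1, r.layer2, r.layer3, r.layer4,
        ])

    def forward(self, t1, t2):
        feats_t1, diffs = [], []
        x1, x2 = t1, t2
        for i, stage in enumerate(self.stages):
            x1 = stage(x1)                        # same modules => shared weights
            feats_t1.append(x1)
            if i < 4:                             # T2 only needs the first four stages
                x2 = stage(x2)
                diffs.append(torch.abs(x1 - x2))  # d_{T1-T2}, skip-connected later
        return feats_t1, diffs

enc = SiameseDiffEncoder()
feats, diffs = enc(torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256))
```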
After the fifth stage, up-sampling is applied to the T1 branch, which enters the decoder part. We use deconvolution, a mathematical operation that reverses the effect of convolution, to reconstruct the spatial dimensions of the image, i.e., to map a low dimension to a high dimension while maintaining the connectivity patterns between them. After each deconvolution layer, the absolute difference image d_{T1−T2} is skip-connected into an augmented attention module, which was introduced by [38] for the segmentation of surgical instruments. The attention vector is computed by applying H_1(.) and then H_2(.) to the globally pooled features of f_{T1} and d_{T1−T2}, where H_1(.) represents a 1×1 convolution with batch normalization followed by the rectified linear unit activation function, H_2(.) represents a 1×1 convolution followed by a softmax activation function, and G(.) is global average pooling, applied directly to f_{T1} and d_{T1−T2} to squeeze the global information into 1×1×C vectors: G(f)_k = (1/(H×W)) Σ_{i=1}^{H} Σ_{j=1}^{W} f_k(i, j), where 1 ≤ k ≤ C and C is the number of channels. The final attentive feature map is then generated by re-weighting the features with this attention vector. Ni et al. [38] mentioned that the attention module can be flexibly embedded in different networks owing to its very small number of parameters. In our attention architecture, we capture the deep features from T1 and emphasize the target features from the difference image [see Fig. 3].
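A hedged PyTorch sketch of such an attention step is given below: global average pooling of the decoder feature and the difference feature, H1 (1×1 conv + batch norm + ReLU), H2 (1×1 conv + softmax), and a channel-wise re-weighting. The way the two pooled vectors are fused and the way the attention vector is applied are assumptions made for illustration; the precise formulation of the augmented attention module in [38] and in the authors' network may differ.

```python
import torch
import torch.nn as nn

class AugmentedAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.h1 = nn.Sequential(nn.Conv2d(channels, channels, 1),
                                nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.h2 = nn.Sequential(nn.Conv2d(channels, channels, 1),
                                nn.Softmax(dim=1))
        self.gap = nn.AdaptiveAvgPool2d(1)        # G(.): squeeze to 1x1xC vectors

    def forward(self, f_t1, diff):
        g = self.gap(f_t1) + self.gap(diff)       # fuse global context (assumption)
        att = self.h2(self.h1(g))                 # 1x1xC attention vector
        return diff * att + f_t1                  # re-weight and fuse (assumption)

att = AugmentedAttention(64)
out = att(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```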
After five deconvolution steps, two convolution layers map the channels to the desired number of classes. A softmax layer transforms the outputs into a normalized probability distribution.
B. Loss Function
We annotate two classes for our segmentation task. Class 1 represents the detached ice-fragments, while class 0 represents the background, including unchanged areas and the changed shadows. However, the foreground class 1 occupies a significantly smaller area than the background class 0 [see Fig. 4]. Over the whole training set, the foreground class 1 has about 3.4×10^6 pixels, while the background class 0 has about 5.1×10^8 pixels, so the classes are heavily imbalanced. The dice loss, based on the dice coefficient, can handle class-imbalance problems [55]. With X and Y denoting the corresponding pixel values of the prediction and the reference, the dice score is Dice = 2 Σ(X·Y) / (ΣX + ΣY), whose value lies between 0 and 1, and the dice loss is its negative natural logarithm, L_dice = −ln(Dice), which extends the value range from 0 to positive infinity. If the prediction differs significantly from the reference, the dice score is small and the dice loss L_dice becomes very large.
Ice-fragments are harder to identify than the surrounding shadows. The focal loss focuses on learning hard, misclassified samples by down-weighting the loss of easily classified samples. With p denoting the model's estimated probability for class 1, it can be written as L_focal = −α(1−p)^γ ln(p) for pixels of class 1 and −(1−α) p^γ ln(1−p) for pixels of class 0, where α and γ are two hyperparameters that can be tuned for better performance. Here, we set α = 0.25 and γ = 2 based on our experimental experience. Our final loss function is a combination of the dice loss and the focal loss to alleviate both problems we are facing. The hybrid loss is formulated as L = (1−λ)·L_dice + λ·L_focal, where λ is a weight balancing the contribution of the dice loss and the focal loss. Based on multiple experiments, λ is set to 0.1. The experimental analysis is discussed in Section III-D.
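A minimal sketch of this hybrid loss is shown below, assuming that p is the softmax probability for class 1 and y is the binary reference mask; the smoothing constant and clamping are implementation details added here for numerical stability.

```python
import torch

def dice_loss(p, y, eps=1e-6):
    inter = (p * y).sum()
    dice = (2 * inter + eps) / (p.sum() + y.sum() + eps)
    return -torch.log(dice)                       # negative log of the dice score

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-6):
    p = p.clamp(eps, 1 - eps)
    pos = -alpha * (1 - p) ** gamma * torch.log(p) * y
    neg = -(1 - alpha) * p ** gamma * torch.log(1 - p) * (1 - y)
    return (pos + neg).mean()

def hybrid_loss(p, y, lam=0.1):
    # lam = 0 -> pure dice loss, lam = 1 -> pure focal loss (cf. Section III-D).
    return (1 - lam) * dice_loss(p, y) + lam * focal_loss(p, y)

p = torch.rand(2, 256, 256)
y = (torch.rand(2, 256, 256) > 0.98).float()
print(hybrid_loss(p, y).item())
```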
A. Dataset and Implementation
Our change detection dataset contains 10 510 pairs of coregistered HiRISE image tiles of 256×256 pixels. The training data account for 80% of the pairs, the validation data for 10%, and the test data for the remaining 10%. The preprocessing of the HiRISE images, including ortho-rectification and coregistration, has been described in [11]. The tile width and length are set to 256 pixels, typically 64 m, because the vast majority of ice-fragments (longer side < 100 pixels) are smaller than that. Tiles overlap by 50% both horizontally and vertically to ensure that complete ice-fragments are preserved. Fig. 4 shows three data examples: the first two rows include detached ice-fragments (class 1) and the third row contains only class 0.
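The sketch below shows one way to cut a coregistered image pair into 256×256 tiles with 50% overlap, as described above; the array layout and the handling of image borders are illustrative assumptions rather than the authors' exact tiling code.

```python
import numpy as np

def tile_pair(img_t1, img_t2, size=256, overlap=0.5):
    """Return overlapping (T1 tile, T2 tile, offset) triples."""
    step = int(size * (1 - overlap))
    h, w = img_t1.shape[:2]
    tiles = []
    for y in range(0, max(h - size, 0) + 1, step):
        for x in range(0, max(w - size, 0) + 1, step):
            tiles.append((img_t1[y:y + size, x:x + size],
                          img_t2[y:y + size, x:x + size],
                          (y, x)))                # keep the offset for re-mosaicking
    return tiles

t1 = np.zeros((1024, 768), dtype=np.float32)
t2 = np.zeros((1024, 768), dtype=np.float32)
print(len(tile_pair(t1, t2)))                     # 7 x 5 = 35 tiles for this size
```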
Online data augmentation, including perspective transformation, slight optical distortion, and random changes of brightness and contrast, has been used to increase the complexity and diversity of our training data. However, operations such as rotating or flipping the images cannot be used, as they would change the fixed positional relationship between the ice-fragments and their corresponding shadows in the image. Online augmentation creates different datasets at each epoch without saving them to disk, which is more efficient than offline data augmentation [58].
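The brightness/contrast part of this augmentation can be sketched as below; the parameter ranges are illustrative, and a full pipeline would also apply the perspective and optical-distortion transforms consistently to both images and the mask, while still avoiding flips and rotations.

```python
import numpy as np

def jitter_pair(t1, t2, rng=np.random.default_rng()):
    """Random brightness/contrast change applied identically to a bitemporal pair."""
    alpha = rng.uniform(0.8, 1.2)                 # contrast factor (assumed range)
    beta = rng.uniform(-0.1, 0.1)                 # brightness shift, images in [0, 1]
    aug = lambda im: np.clip(alpha * im + beta, 0.0, 1.0)
    return aug(t1), aug(t2)

t1 = np.random.rand(256, 256).astype(np.float32)
t2 = np.random.rand(256, 256).astype(np.float32)
a1, a2 = jitter_pair(t1, t2)
```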
All training data were fed into our deep learning-driven change detection model. We used Adam, an adaptive learning rate optimization algorithm, to optimize the model [59]. The learning rate was initially set to 0.0001 and decayed every epoch by a factor of 0.95. The 10% validation data were used to check the accuracy and calculate the validation loss after each training epoch. We ran 100 epochs of training during preliminary experiments and found that the average validation loss no longer decreased after around 40 epochs, so our model was typically trained for a total of 50 epochs. The model that achieved the minimum average loss on the validation data was saved as the best model, which we then applied to the test data for the final evaluation of our proposed method.
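The training configuration can be sketched as follows: Adam with an initial learning rate of 1e-4, an exponential decay of 0.95 per epoch, and checkpointing of the model with the lowest validation loss. The tiny stand-in network, the synthetic batches, and the placeholder loss are assumptions used only to keep the sketch self-contained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins for the real change detection model and data (illustrative only).
model = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 2, 1), nn.Softmax(dim=1))

def batches(n):
    for _ in range(n):
        pair = torch.rand(2, 2, 64, 64)           # channels: T1 and T2 stacked
        mask = (torch.rand(2, 64, 64) > 0.98).float()
        yield pair, mask

def loss_fn(prob1, mask):                         # placeholder for the hybrid loss
    return F.binary_cross_entropy(prob1, mask)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

best_val = float("inf")
for epoch in range(2):                            # 50 epochs in the actual setup
    model.train()
    for pair, mask in batches(4):
        optimizer.zero_grad()
        loss = loss_fn(model(pair)[:, 1], mask)
        loss.backward()
        optimizer.step()
    scheduler.step()
    model.eval()
    with torch.no_grad():
        val = sum(loss_fn(model(p)[:, 1], m).item() for p, m in batches(2)) / 2
    if val < best_val:
        best_val = val
        torch.save(model.state_dict(), "best_model.pt")
```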
B. Evaluation Metrics
Two kinds of quantitative assessment are discussed in the following: pixel-based and event-based metrics. The F1 score and the balanced accuracy are pixel-based metrics that are widely used for evaluating the performance of segmentation models [60], [61]; they count the number of pixels of each class. The F1 score combines precision and recall into a single measure, F1 = 2TP / (2TP + FP + FN), where true positives (TP) are the predicted class 1 pixels associated with the reference class 1 pixels, true negatives (TN) are the predicted class 0 pixels associated with the reference class 0 pixels, false positives (FP) are the predicted class 1 pixels not associated with the reference class 1 pixels, and false negatives (FN) are pixels where the reference class 1 has no associated predictions. The balanced accuracy is the mean of the true positive rate and the true negative rate computed from the same counts.
The boundaries of the detached ice-fragments are not always visually clear; compare, e.g., the one shown in the second row of Fig. 4 with the two in the first row, where the boundaries are clear. The manual mapping, as well as the model's predictions of the ice-fragments' shapes, are therefore only approximations. For this reason, we also calculated the event-based true positive rate (TPR) and false discovery rate (FDR) of the detections. These are more suitable for demonstrating the performance of our model than the pixel-based evaluation because, although the exact shape is not always visible, the occurrence of an event always is. At the event level, true positives (TP) are correctly detected change events, false positives (FP) are detections without a real event, and false negatives (FN) are real events that were missed. The metrics are defined as TPR = TP / (TP + FN) and FDR = FP / (TP + FP).
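For reference, the pixel- and event-based metrics defined above can be computed as in the following small sketch; the input arrays and event counts are placeholders.

```python
import numpy as np

def pixel_metrics(pred, ref):
    tp = np.sum((pred == 1) & (ref == 1))
    tn = np.sum((pred == 0) & (ref == 0))
    fp = np.sum((pred == 1) & (ref == 0))
    fn = np.sum((pred == 0) & (ref == 1))
    f1 = 2 * tp / (2 * tp + fp + fn)
    balanced_acc = 0.5 * (tp / (tp + fn) + tn / (tn + fp))
    return f1, balanced_acc

def event_metrics(tp_events, fp_events, fn_events):
    tpr = tp_events / (tp_events + fn_events)     # true positive rate
    fdr = fp_events / (tp_events + fp_events)     # false discovery rate
    return tpr, fdr

pred = np.random.randint(0, 2, (256, 256))
ref = np.random.randint(0, 2, (256, 256))
print(pixel_metrics(pred, ref), event_metrics(32, 6, 6))
```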
C. Results on Test Data
The test data were used for the final evaluation of the trained model. Three sets of results from our proposed method are visualized in Fig. 5. Our deep learning model is able to detect and delineate the detached ice-fragments, is resistant to the complex environment, and does not pick up other changes such as the shadows of the ice-fragments or the bright linear ice exposures appearing and disappearing, as pointed out by the white arrows in Fig. 5(c). An example such as the area indicated by the blue arrow in Fig. 5(b) has a clear boundary and is easily extracted. However, in some cases, marked by pink arrows in Fig. 5, even our manual identifications are only approximate areas. Therefore, we call our detections the approximate areas of the detached ice-fragments. The examples also show inconsistencies between the predictions and the reference. When a large ice-fragment is released from the scarp, the change detection may be incomplete due to blurry boundaries: Fig. 5(a) shows a large detached ice-fragment of which the model can only detect two parts, because it cannot predict the middle part, which has no corresponding shadow, as a change. This indicates that the shadows play an important role in helping the machine locate their corresponding ice-fragments. In Fig. 5(c), a small portion of ice (indicated by yellow arrows) was shed; however, our model did not detect it. We speculate that the machine confused it with image distortion.
The quantitative evaluation results on the test data are organized in Table I. The pixel-based evaluation shows that the F1 score for class 1 is 73.2% and the balanced accuracy is 85%.
The event-based evaluation shows a true positive rate of 84.2% and a false discovery rate of 16.9%.
D. Experiment Analysis of the Weight λ of the Hybrid Loss Function
The hybrid loss function is a combination of the dice loss and the focal loss to alleviate the problems of imbalanced classes and hard, misclassified samples. The weight λ is therefore essential for balancing the two terms. We varied λ from 0 to 1 to choose the best-performing weight; note that when λ is 0 the loss function reduces to the dice loss, and when λ is 1 it reduces to the focal loss. The effect of different λ is shown in Fig. 6. When λ is 0.1, our model achieves the highest F1 score for class 1, the highest balanced accuracy, and the highest TPR.
E. Ablation Studies 1) Ablation on Attention Module:
To test whether the attention module helps improve model performance, we removed the augmented attention module from our model. The test results show a TPR of 66.7% and an FDR of 9.8% for the event-based evaluation, and an F1 score of 65.2% for class 1 and a balanced accuracy of 76.4% for the pixel-based evaluation. Without the attention module, the model's ability to pick up the detached ice-fragments is greatly reduced (the TPR drops from 84.2% to 66.7%), and both the F1 score and the balanced accuracy drop considerably. We thus demonstrate that the attention module improves the network's ability to detect regions of interest.
2) Ablation on Loss Function: To verify the effectiveness of our proposed hybrid loss function, we replace it with the three most common segmentation loss functions: weighted binary cross-entropy (WBCE) loss, dice loss, and focal loss. The comparison results are listed in Table II. Our hybrid loss function, which combines the dice loss and the focal loss, achieves the highest F1 score and the lowest FDR. WBCE leads to a very high FDR (56.1%); its significant deficiency is its inability to exclude the changed shadows [see Fig. 7], as its detections include the whole area of the changed shadows of the detached ice-fragments. The combination of dice loss and focal loss can not only exclude the areas of the changed shadows [see Fig. 7], but also achieve a higher accuracy than either loss used alone [see Table II]. This demonstrates the advantage of our proposed hybrid loss function when facing imbalanced classes and hard-to-classify ice-fragments.
F. Comparison to State-of-the-Art Methods
A comparison to five state-of-the-art change detection methods is displayed in Fig. 5. FC-EF, FC-Siam-conc, and FC-Siam-diff were proposed in [45]. These three architectures are all based on the U-Net model. FC-EF takes the concatenation of the bitemporal images as a single input and passes it through the fully convolutional network. FC-Siam-conc and FC-Siam-diff both pass the bitemporal images separately into the Siamese network; at the decoding stage, FC-Siam-conc concatenates both features from the encoding part, while FC-Siam-diff concatenates the absolute difference of the features. BIT_CD was proposed in [21], which introduces a transformer-based model to replace the last convolutional stage of the ResNet architecture. MFCN_CD was proposed in [22] and uses a multiscale fully convolutional neural network to learn features of different scales.
Excluding the interference of the changed shadows is one of the important factors in measuring the effectiveness of a model. Only BIT_CD and our proposed method can effectively exclude the changed shadows. However, BIT_CD detects only parts of the changes and has many missed detections, and MFCN_CD fails entirely to detect the correct detached ice-fragments. The detections of FC-EF, FC-Siam-conc, and FC-Siam-diff all include parts of the changed shadows, and FC-Siam-conc also detects the change of the bright linear ice exposure [see Fig. 5(c)].
The quantitative evaluations in Table I show that our proposed method has the highest F1 score. MFCN_CD achieves the highest balanced accuracy and TPR, but its FDR is very high (89.3%): according to the visualization results, MFCN_CD not only picks the detached ice-fragments but also detects a large number of unchanged areas. FC-EF, FC-Siam-conc, and FC-Siam-diff show the ability to detect the detached ice-fragments and also have relatively high balanced accuracy and TPR, but their detections include shadow areas. Our proposed method achieves satisfactory balanced accuracy and TPR, and has the lowest FDR of all methods except BIT_CD.
G. Application and Association Analysis
We applied our trained model to detect the detached ice-fragments from an NPLD scarp on a pair of full-scene HiRISE images that were taken one Mars year apart: PSP_009648_2650_RED in Mars Year 29 and ESP_018905_2650_RED in Mars Year 30. The location of the scarp is indicated in Fig. 8(a). The size of a single image can reach several gigabytes, which is a big challenge for manual identification; however, processing only took ∼8 min on an NVIDIA GeForce GTX 1070 with Max-Q Design, orders of magnitude faster than manual work. Su et al. [11] used an automated change detection method to detect the corresponding shadows of the detached ice-fragments in the same scarp region with the same images, achieving an average true positive rate of ∼97.6%. As the ice-fragments detected in this study and the shadows detected by Su et al. [11] are related, we use the results of the shadow detection to evaluate the detection of the detached ice-fragments. To reduce interference from noise and the radiance difference of the bitemporal images, detections of detached ice-fragments and of shadows smaller than 5 pixels were excluded.
In Fig. 8(b), the red dots indicate where the detached ice-fragments and their corresponding shadows were both detected (260 in total), the beige dots indicate where only the ice-fragments were detected (117 in total), and the black dots indicate where only the shadows were detected (186 in total). Fig. 8(c), (d), and (e) shows the detailed mapping of ice-fragments (green) and shadows (purple) from three different parts of the scarp. The ice-fragments and their corresponding shadows are distinct from each other but closely related. Note that the background image is the predetach image. Some purple areas inside the green areas are not false positives but shadows appearing in the postdetach image, because when ice-fragments detach, their surrounding parts may cast new shadows. However, there is a probability of false negatives or false positives in the areas where only shadows or only ice-fragments are detected. For example, in the upper right of Fig. 8(c), the detections are false positives: differing imaging and illumination conditions caused discrepancies in the same shadow at different periods, and the detected parts here are those discrepancies. In Fig. 8(e), a number of purple areas do not match the full shadow boundaries; they are false detections caused by the severe geometric deformation of the image there. Since the validation areas in [11] were randomly selected, the validation may have avoided areas with image distortion, thus lowering the estimated false discovery rate, and the overall false discovery rate of the shadow detection in [11] may therefore be higher than 9.4%. Among the 186 black dots, the majority are probably false positives of shadows. We find far fewer false positives of ice-fragments than of shadows, indicating that our deep learning-based change detection method is more robust against image distortion and shadow deformation than the method in [11]. Also, the majority of the 117 beige dots could be true positives of ice-fragments. Additionally, areas in Fig. 8(d) where only ice-fragments or only shadows are detected indicate that their corresponding shadows or detached ice-fragments may not have been detected correctly.
IV. DISCUSSION
The results on both the test data and the full-scene HiRISE application show that our deep learning-driven change detection model is able to automatically detect the detached ice-fragments at the NPLD scarps. To demonstrate that the model can be region- and time-independent, we have chosen HiRISE images from two different tens-of-kilometers-long scarps. Moreover, the time interval between the bitemporal datasets is not limited to one Mars year. Therefore, our model is flexible for short- or long-term change detection.
A feature that is recognizable in the images is the cast shadow of the fractured ice-fragments due to the low-sun conditions in the polar regions of Mars [11]. An experienced human operator is able to distinguish the ice-fragments and their corresponding shadows based on prior knowledge, e.g., the position of the sun and the slope direction of the scarp. However, small images (256×256 pixels) are the only input to the machine training algorithm. In order to help the machine recognize the mutual positional relationship between the ice-fragments and their corresponding shadows, we kept a constant positional relationship in the training data, i.e., the shadows are always below the ice-fragments in the image [e.g., Figs. 4, 5, 7, and 8]. The shadow then helps the machine locate its corresponding ice-fragment. However, this imposes a restriction when training our convolutional neural network: the augmentation of the training data excludes operations such as random image rotation and vertical or horizontal flipping, because they would invert the positional relationship. Likewise, the trained model cannot be applied directly to images with the opposite positional relationship between ice-fragments and shadows, i.e., images in which the ice-fragments lie below the shadows from top to bottom.
To the best of our knowledge, this is the first attempt to use a deep learning-based method to detect the detached ice-fragments at the martian scarps by comparing multitemporal images. The application of deep convolutional neural networks to similar tasks, such as landslide recognition, has been studied [39], [40], [62]; however, in terms of specific application scenarios and data, it is difficult to make a direct comparison. An association analysis as described in Section III-G can help in such an evaluation. There are several reasons to be confident in the capability and robustness of our deep learning-based method in detecting the detached ice-fragments at the steep scarp areas. First, most of the ice-fragments detected by deep learning correspond well to the shadows detected by Su et al. [11], whose method achieves an average true positive rate of ∼97.6%. Second, the number of false positives of ice-fragments is much lower than that of shadows, especially in areas with image distortion.
A visual assessment of our model's performance reveals that false detections or undetected changes can be caused by the following.
1) DTM generation is challenging at the steep scarp area, which sometimes causes ortho-rectification to fail, so that the bitemporal images are not well aligned.
2) An indistinct intensity difference between the ice-fragments and the surrounding shadows makes it difficult for the machine to differentiate them; see the falsely detected ice-fragment at the upper right of the proposed-method image in Fig. 5(a).
3) When a large ice-fragment detaches from the scarp, our method may detect only parts of it, because there are no shadows to help the model locate the corresponding ice-fragments or because the boundaries of the ice-fragments are blurry. As in the example shown in Fig. 5(a), our method cannot predict the middle part, which has no corresponding shadow, as a change. It is not very common for a very large ice-fragment to detach from the scarp at one time; if it happens, our model is still able to detect at least those parts of the large ice-fragment that have clear boundaries. The undetected parts reduce the pixel-based accuracy of our method, but do not affect the event-based evaluation as long as the method detects parts of the detached ice-fragments.
4) If a small portion of ice is shed from an ice-fragment [e.g., the false negative in the proposed-method image of Fig. 5(c)], it is difficult for the machine to distinguish whether it is a missing portion or a deformation of the ice-fragment caused by image distortion.
The first factor can cause wrong detections when the images are misaligned; better change detection results can be obtained with careful preprocessing of high-quality HiRISE imagery. Errors introduced by factors 2-4 are shortcomings of our deep learning model. These false and missed detections can be accepted, as they occupy a small percentage of the area of all detections.
When further probing into the calculation of ice-fragment volumes, we need to consider the uncertainties mentioned previously. Another general problem for accurate volume calculation is the blurry boundaries of the ice-fragments. As mentioned before, it is hard to map the accurate boundaries of some ice-fragments. In some cases, the break lines, where the ice-fragments break from their main body (i.e., the steep scarp), are difficult to identify in the images; especially in images taken multiple Mars years apart, the break line may be obscured by geological processes. This is a real limitation even for visual mapping, let alone automatic machine detection. However, it is worth noting that the shape of all detached ice-fragments can be roughly determined by comparing the pre- and postdetach images and considering the size of the cast shadows of the fractured ice-fragments. Therefore, the size of the detections will not be dramatically larger than the actual size.
Although the accurate boundaries of the detached ice-fragments are not always visible, the occurrence of the detachment events always is, so we believe that the event-based evaluation is more convincing. Pixel-based evaluation has limitations for ice-fragments with blurry shapes. To address this problem, in future work we will divide the assessment into two categories, ice-fragments with and without clear boundaries, and the final pixel-based evaluation will combine the two assessment results. This may help to assess the accuracy of our predictions more precisely.
V. CONCLUSION AND OUTLOOK
In this article, a deep learning-driven change detection model is proposed to automatically detect detached ice-fragments at the steep scarps of the NPLD on Mars. We use a U-Net convolutional neural network architecture, which integrates both the ResNet-50 to extract features and an augmented attention module to highlight the target, i.e., the detached ice-fragment. The bitemporal images are fed into the Siamese network to mine their respective information. A hybrid loss function based on dice loss and focal loss is introduced to deal with the issue of imbalanced classes as well as hard, misclassified samples. Test results show that our change detection model is capable of localizing and mapping the changed areas, achieving an F1 score of 73.2% for the detached ice-fragments' class, a balanced accuracy of 85%, a true positive rate of 84.2%, and a false discovery rate of 16.9%. Compared to five state-of-the-art change detection methods, our model is more robust in extracting the approximate areas of the changed ice-fragments while excluding other changes on the images, and is more resistant to the complex topography of the NPLD scarps and even slight image distortions. An association analysis of our detection of the detached ice-fragments with a previous detection of the corresponding shadows demonstrates the capability and robustness of our deep learning-based model.
Fast processing speed and automation demonstrate the potential to apply this method across the whole NPLD area, and even to terrestrial mass wasting. The shape of the detached ice-fragments is an important parameter for estimating the flux and volume of ongoing mass wasting and studying the dynamic evolution of the NPLD scarps. From another perspective, the avalanches have been investigated visually for those fractured NPLD scarps which display ice block fall deposits at their base [7], [63]. We will extend our deep learning method to automatically detect active avalanches, to help reduce human work and complete an automated monitoring pipeline of this area [5], [64]. Ice block falls and avalanches are the main mass wasting activity of the NPLD scarps. Monitoring and investigation of long-term mass wasting over the whole NPLD scarps will provide insights into ice behavior, supporting modeling studies of martian viscous flow velocity [65], thermoelastic stress [9], and climate change.
Allergy immunotherapy restores airway epithelial barrier dysfunction through suppressing IL-25-induced endoplasmic reticulum stress in asthma
Constant exposure to allergen triggers destructive type 2 cell-mediated inflammation. The effect of allergen-specific immunotherapy (SIT) in maintaining airway epithelial barrier function in asthma remains unknown. In the current study, we showed that SIT maintained airway epithelial homeostasis in mice exposed to Dermatophagoides farinae (Der f), which induced increased expression of IL-25, endoplasmic reticulum (ER) stress and airway epithelial apoptosis. Meanwhile, SIT treatment ameliorated airway inflammatory infiltration and hyper-responsiveness in allergic mice. SIT treatment restored airway epithelial integrity and attenuated Der f-induced airway epithelial ER stress and epithelial apoptosis. We also found that 4-PBA, an inhibitor of ER stress, suppressed airway epithelial ER stress and apoptosis in vitro. The pathological changes were partially induced by IL-25-driven ER stress, epithelial tight junction damage, and cell apoptosis in the airways following allergen exposure. Furthermore, IL-25 induced ER stress in airway epithelial cells in vitro, and the IL-25-induced airway epithelial apoptosis, which depended on PERK activity, was inhibited by 4-PBA. Taken together, we demonstrate that SIT is effective in allergic asthma and that its effect depends on suppressing IL-25 expression, epithelial integrity damage, and epithelial ER stress.
Animals and experimental protocol. Six- to eight-week-old female BALB/c mice were obtained from the Animal Center of Guangdong Province and maintained under specific pathogen-free conditions in the Animal Experimental Center of Shenzhen University. All animal experimental protocols and procedures in this study were approved by and performed in accordance with the guidelines of the Committee of the Animal Experiments Center of Shenzhen University and the National Institutes of Health guidelines on the care and use of animals.
Mice were intraperitoneally sensitized on days 0, 7 and 14 with 50 μg of Dermatophagoides farinae (Der f) extracts and 2 mg of aluminum hydroxide. The mice were then challenged with an intranasal administration of 10 μg of Der f in 50 μl of phosphate-buffered saline (PBS) 14 . For SIT treatment, sensitized mice were given a subcutaneous injection of 100 μg Der f in 100 μl of PBS eight times, at 2-day intervals, from day 28 to day 42. PBS control, Der f-exposed, and 4-PBA-treated mice were given PBS instead. For the 4-phenylbutyric acid (4-PBA) positive control group, 4-PBA (1 g/kg body weight per day; Sigma-Aldrich) diluted in PBS was given to sensitized mice by intragastric administration 2 hours before the challenge with Der f extracts.
Airway responsiveness was measured with inhaled methacholine (Sigma, The Netherlands) using a Buxco whole-body plethysmography system (Buxco Research Company, United States) 15 . The enhanced pause (Penh) was used to represent airway responsiveness. Twenty-four hours after the final intranasal challenge, the mice were killed with an intraperitoneal overdose of pentobarbital (150 mg/kg). The mouse trachea was cannulated with a 20-gauge catheter, and the lungs were slowly lavaged with 500 μl of PBS. Total cell numbers and cell differentials in the bronchoalveolar lavage fluid were counted as described previously. Supernatants were stored at −80 °C for cytokine analyses.
Histology and TUNEL assay. Lung tissues were fixed in 10% neutral-buffered formalin for 24 h and then embedded in paraffin. Sections (5 μm) of the lung specimens were stained with hematoxylin-eosin (H&E) to assess histology. For the apoptosis assay, we used the TdT-mediated dUTP nick-end labeling (TUNEL) assay to detect apoptosis in the lung according to a previous report 16 . After treatment with 0.1% Triton X-100 and Proteinase K, the sections were incubated with the TUNEL reaction mixture and then with converter-POD, with DAB used as the substrate. The stained slides were observed in a blinded fashion by two independent pulmonary observers using a DM4000 Leica light microscope (Leica, Germany). The degree of peribronchial and perivascular inflammation was scored from 0 to 3 according to our previous methods, with approximately 10 areas scored in total 17 . The average percentage of positive apoptotic cells was calculated by analyzing ten random fields from different sections.
Cell culture. The human bronchial epithelial cell line 16HBE was obtained from a cell bank (American Type Culture Collection (ATCC); Manassas, USA) and kept in our laboratory. 16HBE cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM) containing 10% fetal bovine serum (FBS) and grown to 70% confluence at 37 °C in 5% CO2. After the 16HBE cells reached 80% confluence in 6-well plastic plates, the cells were pretreated with serum-free DMEM. Recombinant IL-25 was then added at several concentrations (1 ng/ml, 10 ng/ml, 100 ng/ml) to stimulate the 16HBE cells.
Flow cytometry. Mouse spleens were removed, homogenized and filtered through a 200-mesh screen.
Mononuclear cells (MCs) in the splenocytes were isolated using lymphocyte Ficoll 18 . The frequency of CD4+CD25+Foxp3+ Treg cells among the MCs was measured to assess the therapeutic effect of SIT. For each sample, 10^6 cells were stained with the following antibodies: fluorescein isothiocyanate (FITC)-conjugated anti-CD4, APC-conjugated anti-CD25, and phycoerythrin (PE)-conjugated anti-Foxp3, and analyzed on a FACS array (BD Bioscience, USA). Data were analyzed with FlowJo software (FlowJo flow cytometry analysis software, Tree Star, USA).
Immunohistochemical staining and Western blotting. Frozen sections of lung tissue with a thickness of 4 μm were prepared from lungs embedded in optimal cutting temperature (OCT) compound. Frozen sections and cells in culture dishes were fixed with methanol and permeabilized with PBS containing 0.25% Triton X-100 for 10 min at room temperature, respectively. Specimens were blocked with 1% BSA in PBS containing 0.05% Tween for 1 hour, then incubated at 4 °C overnight with antibodies against BIP (immunoglobulin heavy chain binding protein), CCAAT/enhancer-binding protein homologous protein (CHOP), and ZO-1. FITC-conjugated goat anti-rabbit or TRITC-conjugated goat anti-rabbit antibodies were used to bind the primary antibodies. Nuclei were stained with 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI). The primary antibody was replaced with an isotype control as the negative control. The specimens were observed with an SP5 confocal microscope (Leica, Germany) and analyzed with Leica Application Suite software.
Assessing cell apoptosis. 16HBE cells were seeded in 6-well plates and treated with IL-25 (100 ng/ml), thapsigargin (Tg, 0.5 μM) or 4-PBA (5 mM; Sigma, St Louis, USA) in serum-free culture medium. After 2 hours of treatment with 4-PBA, medium containing IL-25 (100 ng/ml) was added to co-incubate the cells. Following treatment, the cells were harvested with 0.2% trypsin, and 10^6 cells were suspended in 100 μl binding buffer with 5 μl Annexin V-FITC and 5 μl PI staining solution for 10 min at room temperature. The samples were analyzed on a FACS array, and data were analyzed with FlowJo software (Tree Star, USA).
Paracellular flux of 4-kDa FITC-DX across confluent monolayers of 16HBE cells was measured as described previously 19 . 16HBE cells were seeded at a density of 1 × 10^5 cells in 24-well transwell inserts and grown for 3 additional days until confluent monolayers formed. 3 mM FITC-DX was added to the apical chamber. After incubation for 3 h at 37 °C, samples from the apical and basal chambers were harvested for FITC-DX concentration measurement every 30 min for 90 min. The rate of FITC-DX flux was calculated by the following formula: P_o = (F_A / Δt) × V_A / (F_L × A), where P_o is in centimeters per second; F_A is the basal chamber fluorescence; F_L is the apical chamber fluorescence; Δt is the change in time; A is the surface area of the filter (in square centimeters); and V_A is the volume of the basal chamber (in cubic centimeters).
Statistical analysis. Data are presented as means ± s.d. Significant differences between two groups were determined with the Tukey-Kramer post-test or Dunnett's T3 method, and differences among multiple groups were assessed by one-way ANOVA. P values < 0.05 were considered significant.
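A minimal sketch of this permeability calculation is shown below; the fluorescence readings, insert area and chamber volume are hypothetical placeholder values used only to illustrate the formula above, not measured data.

```python
# Illustrative calculation of the apparent permeability P_o (cm/s) from
# paracellular FITC-DX flux, following the formula given above.

def apparent_permeability(f_basal, f_apical, dt_s, area_cm2, v_basal_cm3):
    """P_o = (F_A / dt) * V_A / (F_L * A), in cm/s."""
    return (f_basal / dt_s) * v_basal_cm3 / (f_apical * area_cm2)

# Hypothetical example: basal-chamber fluorescence read every 30 min (1800 s).
f_apical = 50000.0                        # apical (donor) chamber fluorescence, a.u.
f_basal_readings = [120.0, 260.0, 410.0]  # basal (receiver) chamber, a.u.
area = 0.33                               # filter area of a 24-well insert, cm^2
v_basal = 0.6                             # basal chamber volume, cm^3

for i, f_basal in enumerate(f_basal_readings, start=1):
    p_o = apparent_permeability(f_basal, f_apical, dt_s=i * 1800,
                                area_cm2=area, v_basal_cm3=v_basal)
    print(f"t = {i * 30} min: P_o = {p_o:.3e} cm/s")
```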
SIT and 4-PBA attenuated airway hyper-responsiveness and airway inflammation.
Previous studies have proved that SIT can alleviate the inflammatory response and clinical symptoms in asthmatic patients 20 . ER stress plays a critical role in the pathogenesis of asthma 21 . However, it remains unclear whether SIT inhibits airway inflammation and airway hyper-responsiveness by regulating ER stress. To investigate the effect of SIT on ER stress, the ER stress inhibitor 4-PBA was used as a positive control to evaluate the role of SIT in the allergic mouse model. Der f-sensitized mice received subcutaneous immunotherapy with Der f (Fig. 1a). In the present study, we observed that Der f exposure led to an increase in bronchial responsiveness to methacholine in Der f-induced mice, while SIT significantly suppressed bronchial responsiveness to methacholine in these mice. At methacholine doses of 0, 6.25, 12.5, 25, 50 and 100 mg/ml, Penh values in Der f-induced mice treated with SIT (SIT mice) were significantly decreased compared to Der f-induced mice. In the presence of 4-PBA, Penh values were significantly decreased at methacholine doses of 50 and 100 mg/ml compared to Der f-induced mice (Fig. 1b, p < 0.001). SIT significantly inhibited the levels of Der f-specific IgE in SIT mice compared to Der f-induced mice (Fig. 1c, p < 0.001), and 4-PBA treatment also significantly inhibited the levels of Der f-specific IgE (sIgE) in Der f-induced mice (Fig. 1c, p = 0.010).
(Figure 1 legend, panels c-h: (c) serum Der f-specific IgE measured as optical density (OD) by ELISA; (d-f) total inflammatory cells, eosinophils and neutrophils in BALF counted by differential cell analysis with Wright's staining; (g) airway and alveolar tissue sections stained with H&E, original magnification ×200; (h) degree of airway inflammation scored and compared by Kruskal-Wallis test. Data are means ± SEM, n = 6, one-way ANOVA, significance defined as p < 0.05.)
Histological analysis revealed that inflammatory cell infiltration was decreased in the airway, around blood vessels and in the alveoli of SIT mice and 4-PBA-treated mice (Fig. 1g). There was a reduction of the inflammation index in the lungs of SIT mice compared to Der f-induced mice (Fig. 1h, p = 0.018), and the inflammation index in 4-PBA-treated mice was significantly decreased by 42.03% (2.83/6.73) compared to Der f-induced mice (Fig. 1h, p = 0.024). Taken together, these results suggested that SIT and 4-PBA attenuated airway hyper-responsiveness and airway inflammation.
SIT suppressed Der f -induced ER stress and PERK activity in mice. The role of ER stress in asthma has been confirmed 1,21,22 . However, the effect of SIT in ER stress remains unelucidated. To investigate whether SIT suppresses Der f-induced ER stress in mice exposed to Der f, ER stress markers in lung tissue were determined by immunofluorescence staining. 4-PBA was also used as a positive control for ER stress. Consistent with our previous studies, the expression and localization of BIP and CHOP predominantly occurred in the cytoplasmic areas of the airway epithelium in Der f-induced mice. We found that SIT significantly inhibited the levels of BIP and CHOP in SIT mice compared to Der f-induced mice (Fig. 2a). The immunofluorescence intensities of BIP and CHOP in lung of SIT mice were reduced by 49.93% (25.86 /51.79) and 43.38% (22.16/51.08) respectively compared to those in Der f-induced mice (Fig. 2b,c, p = 0.019 and p = 0.005). In addition, western blot also exhibited similar reductions of BIP and CHOP in SIT mice compared to those in Der f-induced mice (Fig. 2d-f, p = 0.003 and p = 0.003). Furthermore, we found that 4-PBA significantly repressed the levels of BIP and CHOP in Der f-induced mice, consistent with SIT ( Fig. 2d-f, p = 0.020 and p = 0.003).
PERK (protein kinase R-like endoplasmic reticulum kinase) is a key ER stress sensor of the unfolded protein response 23 . Sustained PERK/CHOP signaling induces apoptosis under prolonged ER stress 24,25 . We showed that the levels of p-eIF2α and p-PERK were increased in Der f-induced mice after Der f exposure. The expression of PERK in Der f-induced mice was not suppressed by either SIT or 4-PBA; however, the phosphorylation of PERK in SIT mice and 4-PBA-treated mice was significantly decreased (Fig. 2d,g, p = 0.019 and p = 0.009). These results supported the hypothesis that SIT treatment repressed ER stress and PERK activity. Previous studies have demonstrated that Treg cells ameliorate airway inflammation through IL-10 26 . As previously shown 27 , ER stress drives a regulatory phenotype in human T-cell clones. To investigate the effects of Treg cells and IL-25 on airway inflammation under ER stress, 4-PBA was also used as an ER stress inhibitor in vivo. As demonstrated by flow cytometry, Der f exposure decreased the frequency of CD4+CD25+Foxp3+ Treg cells and increased IL-25. The frequency of CD4+CD25+Foxp3+ Treg cells in spleen tissue was markedly enhanced in SIT mice compared to Der f-induced mice. SIT and 4-PBA significantly decreased the level of IL-25 in BALF from Der f-induced mice (Fig. 3c-e). We also noted that SIT and 4-PBA suppressed the levels of Th2 cytokines (IL-4, IL-5 and IL-13) in BALF from Der f-induced mice (Fig. S1). These results indicated that SIT inhibited IL-25 and Th2 cytokines while inducing CD4+CD25+Foxp3+ Treg cells, and that ER stress may be involved in the changes of immune phenotype in Der f-induced mice.
SIT restored airway epithelial barrier dysfunction.
The airway epithelial barrier plays a key role in preventing access of the inspired luminal contents to the sub-epithelium 28 . The apoptosis of airway epithelial cells could aggravate allergic features 29 . Therefore, to determine whether SIT affects airway epithelial barrier function in asthma, we examined the levels of the tight junction protein zonula occludens-1 (ZO-1) and of E-cadherin, and the apoptosis of airway epithelial cells. We found an increased number of TUNEL-positive airway epithelial cells in Der f-induced mice, while the number of TUNEL-positive airway epithelial cells was decreased by approximately 48.7% in SIT mice (Fig. 4a,b). Moreover, the level of caspase-3 in SIT mice was decreased, while the level of bcl-2 was increased, compared with Der f-induced mice (Fig. 4e-g, p = 0.005 and p = 0.010). Furthermore, TUNEL-positive airway epithelial cells were decreased by approximately 44.2% in the presence of the ER stress inhibitor 4-PBA, with an increased level of bcl-2 and a decreased level of caspase-3. The tight junction protein ZO-1 and E-cadherin are components of the airway epithelial barrier. We observed an exaggerated loss of ZO-1 and E-cadherin in Der f-induced mice. However, SIT treatment promoted the production of ZO-1 and E-cadherin, and their levels were further enhanced in the presence of 4-PBA (Fig. 4c,d). Thus, SIT and 4-PBA restored the airway epithelial barrier dysfunction induced by exposure to HDM.
IL-25 induced ER-stress response in 16HBE cells.
In the present study, we found that SIT could inhibit IL-25 and alleviate ER stress in the lungs of Der f-induced mice. To further investigate whether IL-25 directly induces ER stress in airway epithelial cells, we also examined the effect of IL-25 on the expression and localization of BIP and CHOP in 16HBE cells. Thapsigargin (Tg), an ER stress inducer, was used as a positive control. Confocal microscopic analyses revealed that the expression of the ER stress response marker BIP was significantly increased, and predominantly localized in cytoplasmic areas, in IL-25-treated 16HBE cells compared to PBS control cells (Fig. 5a,b). The expression of BIP was inhibited by 4-PBA in IL-25-treated 16HBE cells (Fig. 5a,b, p = 0.007). Previous studies showed that CHOP is an ER stress response factor and a proapoptotic player in the response to ER stress 30 . We found that the expression of CHOP was increased by IL-25 treatment, and that IL-25 also induced the translocation of CHOP from the cytoplasm to the nucleus (Fig. 5a,b, p < 0.001). Moreover, IL-25-induced CHOP expression and nuclear translocation were inhibited by 4-PBA (Fig. 5a,c, p = 0.014). We further examined the effect of IL-25 on CHOP transcription. In the presence of 1, 10 and 100 ng/ml IL-25, the mRNA expression levels of CHOP in 16HBE cells were elevated in a dose-dependent manner (Fig. 5d), and in 16HBE cells treated with 100 ng/ml IL-25 they were elevated in a time-dependent manner (Fig. 5e). Together, these data indicated that IL-25 induced an ER stress response and triggered the transcription factor CHOP.
IL-25 caused airway epithelial barrier dysfunction with epithelial apoptosis.
Epithelial barrier dysfunction increases the susceptibility of the airways of asthmatic subjects to environmental agents 31 . To further identify whether IL-25 impairs epithelial barrier function, we analyzed ZO-1, E-cadherin, TEER, the rate of FITC-DX flux, and the apoptosis of epithelial cells. Flow cytometry analysis indicated that ER stress-induced apoptosis was increased by the ER stress activator Tg. IL-25 resulted in an increased level of epithelial cell apoptosis, which was suppressed by the ER stress inhibitor 4-PBA compared to IL-25-treated cells (Fig. 6a,b, p = 0.004). There was a significant decrease in the TEER of Tg-treated and IL-25-treated 16HBE cells compared to control cells (Fig. 6g, p = 0.002 and p < 0.001). We also found an increased rate of paracellular flux in Tg-treated and IL-25-treated cells compared with control cells, and the effects of Tg and IL-25 on TEER and FITC-DX flux in 16HBE cells could be reversed by 4-PBA (Fig. 6g,h). Western blot analysis showed that Tg increased the abundance of CHOP, BIP, p-eIF2α and p-PERK; similar observations were made in 16HBE cells treated with IL-25, and 4-PBA decreased the abundance of CHOP, BIP, p-eIF2α and p-PERK (Fig. 6i-l). These data are consistent with IL-25 acting as a regulator of epithelial barrier function. We observed a significant decrease in ZO-1 and E-cadherin in IL-25-treated 16HBE cells, and 4-PBA induced a significant increase in ZO-1 and E-cadherin in IL-25-treated 16HBE cells (Fig. 6c,e).
Together, these results show that IL-25 is a key effector of airway epithelial barrier dysfunction through ER stress-induced apoptosis of epithelial cells and reduced expression of ZO-1 associated with PERK pathway.
Discussion
Allergen immunotherapy has been shown to effectively prevent asthma in patients with allergic rhinitis and the onset of new sensitizations 32 . T-regulatory cells and blocking antibodies play important roles in allergy immunotherapy 7 . However, 5-10% of asthmatic patients respond poorly to the currently available treatments such as inhaled corticosteroids and beta-adrenergic agonists. ER stress is implicated in asthma pathogenesis, and the major ER stress signaling pathway involves the ER stress sensor PERK and CHOP signaling 33 . The phosphorylation/expression of PERK is one of the important markers of canonical unfolded protein response (UPR) activation. Human neutrophil elastase leads to ER stress-induced apoptosis of endothelial cells by activating the PERK-CHOP branch of the unfolded protein response 34 . In this report, we described a novel role for allergy immunotherapy in HDM-induced airway inflammation associated with ER stress. We demonstrated that HDM exposure induced allergic airway inflammation and airway hyper-responsiveness with significantly enhanced BIP and PERK/CHOP signaling. In addition, allergy immunotherapy led to a significant decrease in ER stress and PERK/CHOP signaling, increased CD4+CD25+Foxp3+ Treg cells, and ameliorated airway hyper-responsiveness and airway inflammation.
IL-25, a distinct member of the IL-17 cytokine family, regulates adaptive immunity and augments allergic inflammation by enhancing the maintenance and functions of adaptive Th2 memory cells 35,36 . IL-17A belongs to the Th17 cytokine family and plays a key role in severe asthma; however, in this study, the expression of IL-17A in SIT mice and Der f-induced mice showed no significant change compared to PBS mice (Supplemental Fig. 1e). As a promoter of Th2 cytokines, IL-25 is produced by a range of cell types such as airway epithelial cells, eosinophils and mast cells 37 . IL-25 can exacerbate airway hyper-responsiveness and inflammatory infiltration in asthmatic patients with acute exacerbation 38,39 . Previous studies have shown that allergy immunotherapy shifts the immune response from a Th2 to a Th1/Treg pattern 40,41 . Most importantly, Treg cells are a critical underlying mechanism of allergy immunotherapy in allergic asthma 42 . We found that Der f exposure increased IL-25 and the Th2 cytokines IL-4, IL-5 and IL-13 together with low CD4+CD25+Foxp3+ Treg cells. Furthermore, allergy immunotherapy inhibited the increase of IL-25 and the Th2 cytokines IL-4, IL-5 and IL-13 with induction of CD4+CD25+Foxp3+ Treg cells. It is interesting to note that ER stress may be involved in the changes of immune phenotype in mice induced by Der f. The low production of IL-25 may be one of the important factors contributing to the suppression of allergic airway inflammation and airway hyper-responsiveness in allergy immunotherapy.
(Figure 5 legend: IL-25 induces ER stress in 16HBE cells. (a) Expression of BIP and CHOP measured by immunofluorescent staining with BIP and CHOP antibodies; DAPI used for nuclear staining; TRITC-conjugated (red) and FITC-conjugated (green) secondary antibodies; confocal laser scanning microscopy, original magnification ×400. (b,c) Fluorescence intensity of BIP and CHOP quantified in 10 random fields. (d,e) CHOP mRNA levels induced by IL-25 in a dose-dependent (0, 1, 10, 100 ng/ml) and a time-dependent (0, 4, 12, 24 h) manner. Data are means ± SEM, n = 6, one-way ANOVA with Tukey's post hoc, significance defined as p < 0.05.)
(Figure 6 legend, panels c-l: (c-f) confocal immunofluorescence photomicrographs of ZO-1 and E-cadherin, original magnification ×400; epithelial barrier function assessed by TEER measurement (g) and FITC-DX paracellular flux assay (h,i); expression of p-PERK, PERK, p-eIF2α, BIP and CHOP measured by immunoblotting; relative changes in the density of BIP (j), CHOP (k), and β-actin, p-PERK and PERK (l). Blots in panel (g) have been cropped; full blots are presented in Supplementary Fig. S5. Data are means ± SEM, n = 6, one-way ANOVA with Tukey's post hoc, significance defined as p < 0.05.)
The epithelial barrier plays a key role in maintaining immune homeostasis at epithelial surfaces. In allergic asthma, the homeostatic balance of the epithelial barrier is disturbed, characterized by loss of differentiation, reduced junctional integrity, and overwhelming stresses. Disruption of epithelial tight junctions has been reported in asthma, with low expression of claudin-18 and E-cadherin 43,44 . Moreover, house dust mite exposure impairs the integrity of the barrier through its protease activity and triggers epithelial immune responses 45 . ZO-1 is involved in the leak pathway, one of the two transport pathways in tight junctions associated with impaired epithelial function in allergic rhinitis 46 . E-cadherin also plays a key role in tight junctions. Indeed, we showed that repeated HDM exposure led to loss of ZO-1 and E-cadherin and induced the apoptosis of airway epithelial cells in the allergic mouse model. Importantly, we found that allergy immunotherapy increased the expression of ZO-1 and E-cadherin and lowered the apoptosis of airway epithelial cells, with increased expression of anti-apoptotic Bcl-2 and inhibition of caspase-3.
Excessive ER stress affects reactive oxygen species generation, activates the inflammatory response and induces cell apoptosis through CHOP 47 . Inflammatory mediators and activation of cellular stress pathways may impact ER function, which may depend on the cell type 48 . In this study, we found that IL-25 was a potent inducer of ER stress in epithelial cells, increasing BIP and CHOP. We also found that, in the presence of 4-PBA, IL-25-induced CHOP expression and translocation were decreased. PERK-CHOP signaling regulates ER stress-induced apoptosis 49 , and CHOP is regarded as a key inducer of apoptosis 33 . In our studies, IL-25 increased the apoptosis of epithelial cells, consistent with the apoptosis induced by the ER stress activator Tg. Furthermore, IL-25-induced apoptosis of epithelial cells was inhibited by 4-PBA, an ER stress inhibitor. Mechanistically, we showed that 4-PBA blocked the phosphorylation of PERK and eIF2α and suppressed the expression of BIP and CHOP, indicating that 4-PBA can inhibit UPR activation. We also saw reduced expression of the tight junction markers ZO-1 and E-cadherin, consistent with IL-25-induced apoptosis of epithelial cells, and the expression of ZO-1 and E-cadherin was significantly increased by 4-PBA in 16HBE cells treated with IL-25. The decreased expression of ZO-1 and E-cadherin and the increased apoptosis rate in 16HBE cells induced by IL-25 may impair epithelial barrier function, as shown by the TEER measurements and the FITC-DX flux assay.
In this study, the effectiveness of immunotherapy was related to less epithelial barrier dysfunction and ER stress through reduced expression of the epithelium-derived cytokine IL-25. However, the mechanism by which the expression of IL-25 is reduced in SIT-treated mice needs to be studied further.
Taken together, our results demonstrate that SIT alleviates ER stress, aggravated apoptosis and tight junction damage by inhibiting IL-25 expression. Our data also support the involvement of IL-25 in ER stress-induced apoptosis and in the reduction of ZO-1 and E-cadherin in human bronchial epithelial cells, which is associated with the PERK/CHOP pathway. Based on these findings, we not only elucidate a novel mechanism by which SIT treatment alleviates ER stress-induced apoptosis in the airway epithelium by suppressing the production of IL-25, but also identify a potential therapeutic candidate for allergic asthma.
Comparison of the Operative Outcomes and Learning Curves between Laparoscopic and Robotic Gastrectomy for Gastric Cancer
Background Minimally invasive surgery, including laparoscopic and robotic gastrectomy, has become more popular in the treatment of gastric cancer. However, few studies have compared the learning curves between laparoscopic and robotic gastrectomy for gastric cancer. Methods Data were prospectively collected between July 2008 and Aug 2014. A total of 145 patients underwent minimally invasive gastrectomy for gastric cancer by a single surgeon, including 73 laparoscopic and 72 robotic gastrectomies. The clinicopathologic characteristics, operative outcomes and learning curves were compared between the two groups. Results Compared with the laparoscopic group, the robotic group was associated with less blood loss and longer operative time. After the surgeon learning curves were overcome for each technique, the operative outcomes became similar between the two groups except longer operative time in the robotic group. After accumulating more cases of robotic gastrectomy, the operative time in the laparoscopic group decreased dramatically. Conclusions After overcoming the learning curves, the operative outcomes became similar between laparoscopic and robotic gastrectomy. The experience of robotic gastrectomy could affect the learning process of laparoscopic gastrectomy.
Introduction
Minimally invasive gastrectomy is becoming a widely accepted procedure, especially in Asian countries. Laparoscopic gastrectomy offers improved early postoperative outcomes and long-term oncologic outcomes comparable to those achieved with open gastrectomy [1-3].
The use of robotic surgery can achieve precise lymph node dissection in gastric cancer, afford surgeons a more comfortable operating environment and decrease mental stress during the surgery. Furthermore, for surgeons with laparoscopic surgery experience, fewer cases are necessary to learn robotic surgery [4, 5].
Several meta-analyses have compared the short-term results among robotic, laparoscopic and open gastrectomy [6-11]. Our early experience with robotic gastrectomy was consistent with these meta-analyses; we found less operative blood loss and a shorter postoperative hospital stay compared with laparoscopic and open gastrectomy [12].
There have been few reports [5] comparing the learning curves between laparoscopic and robotic gastrectomy. This study was designed to compare the operative outcomes and learning curves between laparoscopic and robotic gastrectomy for gastric cancer patients, including cases that were performed during and after the surgeon's learning curve.
Materials and Methods
Laparoscopic gastrectomy has been performed at Taipei Veterans General Hospital since June 2006. We have performed 97 laparoscopic gastrectomies, all by two surgeons (W.-L. Fang and J.-H. Chen). Collectively, these surgeons had experience of more than 100 cases of open gastrectomy before they began performing laparoscopic gastrectomy. Among the 97 laparoscopic gastrectomies, 73 patients were operated on by W.-L. Fang.
The da Vinci Si surgical system (Intuitive Surgical Inc., Sunnyvale, CA, USA) was introduced in our hospital in December 2009. Between August 2010 and August 2014, we performed 72 robotic gastrectomies for gastric cancer. All of the robotic surgeries were performed by a single surgeon (W. -L. Fang), who had experience with more than 30 cases of laparoscopic gastrectomy before performing robotic gastrectomy.
The clinicopathologic characteristics, postoperative outcomes and learning curves were compared between patients who underwent laparoscopic and robotic gastrectomy for gastric cancer. In the present study, we enrolled only minimally invasive surgeries performed by a single surgeon, W.-L. Fang. A total of 145 patients were enrolled, including 73 patients in the laparoscopic group and 72 patients in the robotic group. The institutional review board at the Taipei Veterans General Hospital approved this study, and written informed consent was obtained from all of the patients. The pathological stages were classified according to the 7th edition of the American Joint Committee on Cancer [13].
Indication for laparoscopic and robotic gastrectomy
The indication for laparoscopic and robotic gastrectomy at our hospital was gastric cancer at a clinical stage lower than T3N1M0. Patients who were suitable for endoscopic mucosal resection or endoscopic submucosal dissection were referred to gastrointestinal endoscopists. Patients who had a history of gastric surgery were excluded from the study. Before surgery, the surgeons comprehensively explained the merits and demerits of the two operations to all patients. The decision about which surgical approach to use was made by the patients, and written informed consent was then obtained from all patients.
All patients in the two groups underwent gastrectomy with at least D1+a (perigastric lymph nodes + No. 7 lymph nodes) or D1+b (perigastric lymph nodes + No. 7, 8, 9 lymph nodes) lymphadenectomy for early gastric cancer, and D2 lymphadenectomy for advanced gastric cancer.
Surgical procedures
Robotic gastrectomy. Under general anesthesia, the patient was placed in the reverse Trendelenburg position with the legs elevated approximately 15 degrees. The insertion of the trocars and docking with the robotic arms were described in our previous study [12]. The ultrasonic shear was operated by the surgeon's left hand, and the bipolar was controlled by the surgeon's right hand. For patients receiving subtotal gastrectomy with robotic assistance, a 3- to 5-cm vertical incision was made in the upper abdomen. In July 2013, we began performing intracorporeal delta-shaped Billroth-I anastomosis, with specimens removed through the umbilical wound and no small vertical incision in the upper abdomen. Billroth I gastroduodenostomy, Roux-en-Y gastrojejunostomy, or uncut Roux-en-Y gastrojejunostomy was performed according to the preference of the surgeon. For patients receiving a total gastrectomy, the same technique was used as in laparoscopic gastrectomy: Roux-en-Y esophagojejunostomy was performed using a trans-oral anvil delivery system (EEA OrVil). For both subtotal and total gastrectomy, a closed-suction drain was placed over the right subhepatic space. For total gastrectomy, an additional closed-suction drain was placed over the left subphrenic space.
Laparoscopic gastrectomy. For laparoscopic gastrectomy, the overall operative process in the abdominal cavity is identical to that of robotic gastrectomy. The energy source, ultrasonic shears, was controlled by the right hand of the surgeon. The positions of the surgeon and assistant were different from robotic surgery, with the surgeon standing on the right side or between the legs of the patient, and the first assistant standing on the left side of the patient.
Perioperative management
Nasogastric intubation was performed in the initial cases in the laparoscopic group, while no nasogastric tube intubation was applied in the robotic group or recent laparoscopic group. In our initial experience, the clinical pathway of laparoscopic gastrectomy was close to open gastrectomy, with water started on postoperative day 5 or day 6 and soft diet on postoperative day 9 to day 10. After accumulating more experience, water was usually started on postoperative day 3 or day 4, and a soft diet was started on postoperative day 5 to day 7. If no complication occurred, the patient was discharged.
Statistical analysis
A statistical analysis was carried out using the Statistical Package for Social Sciences 16.0 (SPSS; SPSS Inc., Chicago, IL, USA). Data are presented as means ± standard deviations (SDs). The independent Student's t-test was used to compare continuous variables between the two groups, and categorical data were compared using a chi-square test. P values less than 0.05 were considered statistically significant.
Results
Table 1 shows the clinicopathologic characteristics of the laparoscopic and robotic gastrectomy groups. Patients in the robotic group were associated with more extracorporeal anastomoses with Roux-en-Y reconstruction, a higher percentage of D2 lymphadenectomy, and higher medical costs compared to patients in the laparoscopic group. The retrieved lymph node number was similar between the two groups. There was no difference in the pathological T category, N category or tumor-node-metastasis (TNM) stage between the two groups. Table 2 shows the operative outcomes of the two groups. The robotic group was associated with less operative blood loss (79.6 ± 77.1 mL vs. 116.0 ± 135.3 mL, P = 0.049) and a longer operative time (357.9 ± 107.8 min vs. 319.8 ± 113.7 min, P = 0.040) compared to the laparoscopic group. There was no significant difference in postoperative hospital stay or in surgery- and non-surgery-related morbidity between the two groups. There was one mortality in the laparoscopic group, related to duodenal stump leakage, and one mortality in the robotic group, related to gastrojejunostomy leakage. The learning curve of robotic gastrectomy was defined as 25 cases, as in our previous study [12] and the series of Song et al [14].
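As an illustration of the group comparisons described above, the following sketch applies an independent t-test to a continuous variable and a chi-square test to a categorical variable; the numbers and counts are placeholders only, since the actual analysis was performed in SPSS 16.0 on the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient values, for illustration only.
lap_op_time = np.array([310, 295, 340, 420, 280, 360])   # minutes, placeholder
rob_op_time = np.array([355, 390, 330, 410, 365, 400])   # minutes, placeholder

# Continuous variable: independent two-sample Student's t-test.
t_stat, p_cont = stats.ttest_ind(lap_op_time, rob_op_time)

# Categorical variable (e.g., D2 lymphadenectomy yes/no): chi-square test.
contingency = np.array([[20, 53],    # laparoscopic: D2 / non-D2, placeholder counts
                        [35, 37]])   # robotic: D2 / non-D2, placeholder counts
chi2, p_cat, dof, expected = stats.chi2_contingency(contingency)

print(f"t-test p = {p_cont:.3f}; chi-square p = {p_cat:.3f} (significant if < 0.05)")
```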
Learning curves
In the laparoscopic group, the operative time (228.6 ± 86.1 min vs. 393.9 ± 70.8 min, P < 0.001) and operative blood loss (53.4 ± 49.5 mL vs. 164.9 ± 159.6 mL, P < 0.001) were significantly reduced in the recent laparoscopic group (n = 32) compared to the initial laparoscopic group (n = 41). Hence, we defined the learning curve as 41 cases in the laparoscopic group, which is close to the mean of 42 cases reported in the study of Kim et al [15]. As shown in Table 3, we compared the differences in surgical performance and operative outcomes between the laparoscopic and robotic groups during the learning curve. The robotic group was associated with a longer operative time, less operative blood loss, a larger extent of lymphadenectomy and higher medical costs compared with the laparoscopic group. As shown in Table 4, after the learning curve was overcome, the laparoscopic group was associated with a higher percentage of intracorporeal Billroth-I anastomosis, a shorter operative time, and lower medical cost compared to the robotic group. There were no significant differences between the two groups in operative blood loss, postoperative hospital stay, extent of gastric resection, extent of lymphadenectomy or retrieved lymph node number.
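One simple way to visualize such a learning-curve cutoff is to plot a moving average of operative time against the consecutive case number. The sketch below uses values simulated from the reported group means and standard deviations, not the actual per-case data, so it only illustrates the approach.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Simulated operative times (min) for 73 consecutive laparoscopic cases:
# drawn from the reported initial (393.9 +/- 70.8) and recent (228.6 +/- 86.1) groups.
early = rng.normal(394, 71, size=41)
late = rng.normal(229, 86, size=32)
op_time = np.concatenate([early, late])

window = 5
moving_avg = np.convolve(op_time, np.ones(window) / window, mode="valid")

plt.plot(np.arange(1, len(op_time) + 1), op_time, ".", alpha=0.5, label="per case")
plt.plot(np.arange(window, len(op_time) + 1), moving_avg, label=f"{window}-case moving average")
plt.axvline(41, linestyle="--", label="reported learning-curve cutoff (case 41)")
plt.xlabel("Consecutive case number")
plt.ylabel("Operative time (min)")
plt.legend()
plt.show()
```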
Discussion
The novelty of the present study is that the surgeon's initial experience with laparoscopic and robotic gastrectomy occurred during almost the same period, so the results may help to identify the effect of robotic gastrectomy on the learning curve of laparoscopic gastrectomy for a beginning surgeon.
For the treatment of advanced gastric cancer, D2 lymph node dissection has been shown to have survival benefits over D1 lymphadenectomy [16]. However, the technical threshold of lymph node dissection during laparoscopic gastrectomy remains high and requires a long learning curve for surgeons who are accustomed to performing open gastrectomy. With the aid of robotic instruments, robotic gastrectomy can shorten the learning period of lymph node dissection and enable the surgeon to perform D2 dissection more easily than laparoscopic gastrectomy, and these surgeons will be able to perform extended lymph node dissections more easily when they return to laparoscopic gastrectomy.
We started to perform intracorporeal Billroth-I anastomosis in July 2013. Before the 51st case of laparoscopic gastrectomy and the 66th case of robotic gastrectomy, we performed extracorporeal anastomosis with Roux-en-Y reconstruction. Extracorporeal Roux-en-Y anastomosis of course takes longer than intracorporeal Billroth-I anastomosis, which will affect the learning curves in both laparoscopic and robotic gastrectomy.
Table 3. Comparison of the surgical performance and operative outcomes between laparoscopic and robotic gastrectomy in the learning curve.
However, the learning curves in both groups declined before intracorporeal anastomosis was introduced. It therefore seems that the method of anastomosis is not the main cause of the shortened operative time, and that the surgeon's experience plays a more important role. Our results showed that the operative time was longer in the robotic group even after the learning curve. As shown in Table 4, after the learning curve, the laparoscopic group was associated with more intracorporeal Billroth-I anastomoses than the robotic group. Both extracorporeal anastomosis and the additional docking time might prolong the operative time in the robotic group. However, there is a trend of decreasing operative time after the learning curve in both groups. We believe that, after accumulating more experience and with a similar type of anastomosis, the operative time will decrease gradually and become more similar between the two groups.
Kim et al [5] reported that the experience of laparoscopic surgery could affect the learning process of robotic gastrectomy. However, for the beginning surgeon, could the experience of robotic surgery have an impact on the learning process of laparoscopic gastrectomy? Our data showed that the operative time increased over the sequence from the 25th to the 41st case of laparoscopic gastrectomy and shortened dramatically after the 41st case. It is very interesting to ask what could explain this dramatic change in the learning curve. First of all, the two patients with the longest operative times had a high BMI (>30), and the one with the longest operative time had a huge inflammatory pseudotumor over the right lobe of the liver and a large lateral segment of the liver, which made operative exposure more difficult and prolonged the operative time. Moreover, we started to perform D2 lymphadenectomy from the 25th case of laparoscopic gastrectomy, which might increase the operative time. Between the 35th and 41st cases of laparoscopic gastrectomy, we started robotic gastrectomy and performed 25 cases. The interval between the 41st and 42nd cases of laparoscopic gastrectomy was 10 months, and we performed 32 cases of robotic gastrectomy during this period. Compared with laparoscopic gastrectomy, it is easier for the beginning surgeon to perform D2 lymphadenectomy in robotic gastrectomy. The reason why patients chose robotic gastrectomy instead of laparoscopic gastrectomy during this period might be influenced by the preoperative explanation of the surgeon, because he was still in the learning period of D2 lymphadenectomy in laparoscopic gastrectomy. Surprisingly, as shown in Figure 1, the operative time of laparoscopic gastrectomy decreased dramatically after the 41st case. A decrease in operative time for both subtotal and total gastrectomy was observed after the learning curves of laparoscopic and robotic gastrectomy. Our results showed that the experience of robotic gastrectomy could contribute to overcoming the learning curve of laparoscopic gastrectomy for the beginning surgeon. Even with support from the National Health Insurance, patients in Taiwan who undergo robotic gastrectomy must pay more than patients who undergo laparoscopic gastrectomy. During the learning curve of robotic gastrectomy, patients who underwent robotic gastrectomy needed to pay nearly 1.5 times as much as patients who underwent laparoscopic gastrectomy ($4259.5 ± 1046.3 vs. $2787.6 ± 1600.0). The standard charging criterion was set up in our hospital after the learning curve, and patients who undergo robotic gastrectomy at present need to pay nearly two to three times as much as patients who undergo laparoscopic gastrectomy ($6488.0 ± 1256.0 vs. $3083.6 ± 890.8). Surgeons may be more comfortable and experience less fatigue while performing robotic gastrectomy compared to laparoscopic gastrectomy, while the patient may receive limited benefits from robotic gastrectomy. Future studies should assess factors related to the surgeon's benefit, including fatigue and comfort during the operation. This difference presents an ethical dilemma that should not be ignored. Patients have the right to choose which operative approach they want, but surgeons should honestly and objectively explain both the surgeon's and the patient's benefits associated with each operative method, along with the operative risks, to the patients before surgery.
The postoperative hospital stay was longer in the present study than in other series. This might be because the clinical pathway for our initial experience in the laparoscopic group was similar to that used with open gastrectomy: water intake on postoperative day 5 or 6, and a soft diet on postoperative day 9 to 10. We started to let patients try water and begin a liquid diet earlier as we gained more experience. At present, patients try water and start a liquid diet 3-4 days after surgery, and a soft diet is started on postoperative day 5 to 7. If no complication occurs, the patient is discharged within 10 days after surgery in both the laparoscopic and robotic groups. However, the mean postoperative hospital stay in the present study was 10.4 days for the laparoscopic group and 10.2 days for the robotic group, even after the learning curve. After the learning curve, most of the patients in the laparoscopic (83%) and robotic (84%) groups were discharged within 10 days after the operation. The reasons for prolonged hospital stay in the two groups were surgical morbidity, including delayed gastric emptying, intestinal obstruction, and esophagojejunostomy leakage. In the future, we will try to decrease surgical morbidity in order to minimize the postoperative hospital stay.
In conclusion, the operative outcomes between laparoscopic and robotic gastrectomy become more similar as the surgeon accumulates experience. The experience of robotic gastrectomy could affect the learning process of laparoscopic gastrectomy. Long-term follow-up and prospective randomized studies are required to compare the oncological outcomes and quality of life between laparoscopic and robotic gastrectomy patients.
CONCEPT OF THE AUTHOR’S PERSONALITY WITHIN THE FRAMEWORK OF THE IMAGE SYSTEM AND NARRATIVE STRATEGIES OF THE MODERN WRITER (ON THE EXAMPLE OF LUDMILA ULITSKAYA’S PROSE)
The purpose of the article: to identify the peculiarities of the concept of the author's personality within the framework of the writer's system of images and narrative strategies (on the example of the prose of Lyudmila Ulitskaya). Materials and methods: The leading approach to the study of this problem is the analysis of key problem issues of the history course. Results of the research: Based on the analysis conducted, it is noted that the author's personality in Ulitskaya's works manifests itself both as a biographical personality and as an expression of the author's perception of the world; the author does not enter into polemics with her characters and does not assess either their character or their actions. Applications: This research can be used by universities, teachers, and students. Novelty/Originality: In this research, the model of the concept of the author's personality within the framework of the image system and narrative strategies of the modern writer (on the example of Lyudmila Ulitskaya's prose) is presented in a comprehensive and complete manner.
INTRODUCTION
Today, one of the most important and significant problems for literary criticism is the study of the author's personality concept in the system of images and narrative strategies of the writer. The relevance of this problem is due, first, to 1) the need for a more detailed study of "signs of the presence" of the author behind all the elements of a work of art; 2) the ambiguity of the theory of reconstructing the author's image based on the author's style and his point of view, presented in the text of a work of art; 3) insufficient scientific elaboration of the specificity issues of the author's personality concept. In addition, this problem reveals many other equally relevant issues, such as, for example, the degree to which the author's personal attitudes influence the specifics of expressing the author's position in a work of art.
What happens to the author throughout his life remains in his memory, not in the depersonalized form of any event, but as a product of moral, value, logical analysis, personal processing, and attitudes. That is why the work of art bears a pronounced imprint of the author's personality, creating unique originality of the text, intuitively captured by the reader, attracting or repelling his attention.
In this sense, the problem of the author's personality concept as presented in the prose of Lyudmila Ulitskaya is of particular interest, and it has not been fully considered to date. Among the works devoted to various aspects of her creativity, including the specifics of the author's image in L. Ulitskaya's prose, the studies of M.V. Bezrukava and Yu.S. Baskova (2017), S.A. Grigory (2012), T.A. Novoselova (2012), T.A. Skokova (2010) and others should be highlighted. However, these works do not fully address the problems we are considering.
All of the above determined the choice of the subject/object, as well as setting the goal of the article: the subject is the author's personality concept in the framework of the system of images and narrative strategies of L. Ulitskaya; the object is the prose works of L. Ulitskaya. Accordingly, the purpose of the article is to identify the features of the author's personality concept within the framework of the writer's image system and narrative strategies (using Lyudmila Ulitskaya's prose as an example).
The author's image is a complex text-forming category that unifies all components of the multi-level system of a work of art and denotes, according to N.K. Bonetskaya, "the real moment of any aesthetic perception". The fact that such perception can be implicit, unconscious, and characteristic of each person is evidenced by phrases such as "I love Pushkin" or "I don't like Mayakovsky", which express an attitude not toward the works but toward the author. Accordingly, aesthetic experience includes the "author's image": since the content of the work is comprehended by the author, this meaningfulness "explodes its lifelikeness from the inside", and the author's "I" is the main core of the work.
METHODOLOGICAL FRAMEWORK
The basis of the study consists of the following principles and methods: 1. The comparative pedagogical method, which includes the study of key problem issues of the history course, competent understanding of which is important for the development of professional competencies of economists; 2. Analysis and synthesis, on the basis of which directions for improving history teaching are highlighted and considered in detail to improve the quality of future economists training;
RESULTS
The author's image is not immediately revealed to the reader; the degree of its disclosure depends, among other things, on the focus of the reader's attention and on the depth and completeness of reading. The author's image is manifested more clearly on repeated reading of the works. Accordingly, the author's personality is present in the images of the characters, in purely external aspects of the work, in artistic techniques, writing strategies, etc. Moreover, along with the implicit presence of the author's personality in the work, i.e. a presence not intended by the author himself, one can also speak of the author's strategy (concept) of the work, which forms its ideological center. The author's strategy is a category of literary criticism used to "indicate the way the author relates to the character's image and the reader's image in the process of artistic interaction" (Akimova, 2015, p. 13). The idea of the "author's concept" stands on a par with such categories as "author's position" and "author's image".
M.M. Bakhtin, considering the author's position in a work of art, distinguishes the following forms of it (Bakhtin, 2017, pp. 8-10): 1) the author reveals the character's consciousness as alien, yet located next to the author's own; 2) the character possesses a self-developing idea; 3) the author enters into a dialogue with the character's self-developing idea. He also describes ways of demonstrating the author's position: 1) the author does not enter into polemics with his character; 2) the author avoids a direct assessment of the character; 3) the author argues with the character on an equal footing. A.A. Vorontsova-Maralina distinguishes the following types of authorial presence: 1. The biographical personality. Consideration of the relationship between the creator and the creation lies on the borderline between aesthetic reality and real life. The figure of the human writer, in its secrecy and unknowability, does not allow penetration into the depths of his inner world, but the author provides this opportunity to the reader.
2. The author of "creator" is an existential center, primary in relation to artistic images, as the creator of a work of art. The author not only receives signals from the outside and gives them a certain form, but he also represents a certain view of reality, the expression of which is the whole work. It is the author's perception of the world that is the semantic and structure-forming center of the work, but it becomes visible to the reader only through the work(Kahn, A., Lipovetsky, M., Reyfman, I., & Sandler, S. (2018).).
3. A depicting subject is a direct exponent of the author's point of view and defines the author's position, etc.
Before considering the author's image in L. Ulitskaya's works, one should note that the interest in studying her works (in different aspects) is connected not only with their enormous popularity but also with the peculiarities of the author's development (unlike most of her colleagues, Ulitskaya entered the profession as an already very mature person; her first work was written when she was over fifty). For many writers, professional development is a step-by-step process, and the chronology of their creations reflects the authors' personal growth, connected directly with their own life experience and mature capability for analysis and reasoning. Often this is connected (in a sense) with a naive or excessively maximalist "author's position" in early works, the desire to evaluate one's characters emotionally, to condemn weaknesses and exalt virtues. Over time, sharpness and emotionality give way to more balanced feelings and differentiated assessments: characters become more complex and multidimensional. And the more mature the author's personality is, the more difficult it is to "discern" it in the work, and the less it dominates the reader, imposing its own opinion on him.
In the case of L. Ulitskaya, this entire process of personal and professional growth remained "behind the scenes" of her work. She entered it already mature, with great life experience. Ulitskaya herself notes that all her works were written or conceived in the 1970s-80s, while the events described in them belong to the 1950s-60s; this distance in time allowed her to relive them more than once in a more detached way. Consider a passage from "Sonechka": "A month before the child's birth, the term of the indefinite trip of Robert Viktorovich, which he extended to the last opportunity, ended, and he received an order to immediately return to the Bashkir village of Davlekanovo, where he was to serve his exile, in a hope for the future that still seemed beautiful to Sonya and which Robert Viktorovich strongly doubted". To this very village the family of Lyudmila Ulitskaya was evacuated, and she herself was born there. In this case, it can be assumed that the author's personality was partially reflected in Sonechka's daughter Tatyana, which is confirmed by the description that "her talents were revealed late", a characteristic that can be applied to Ulitskaya herself, who devoted many years to biology before recognizing her writing talent (Rubins, 2019).
The story "Sonechka" was published in 1992, and in 1997 the story "Fun funeral" was published. Their comparative analysis allows us to find several intersection points, both superficial and deeper, which can also be considered as a reflection of the author's personality in the plot and narrative strategies of the author. Robert Viktorovich in "Sonechka" and Alik in "Fun Funeral" are two artists, creators, two people about whom one can say: "man in the moon" ( Levantovskaya, M. (2013).).
Their inner world is rich and multifaceted; they are far removed from material concerns and go through life relying solely on intuition, while both are able to attract people of completely different types, from those who need to be endlessly looked after (Yasya for Robert Viktorovich, Nina for Alik) to those who secure their well-being and survival in a cruel material world. Neither character holds to high moral principles; in the life of each there were many women who did not quarrel with one another and even got along well, from time to time providing mutual support. Characteristic of these works is the absence of censure, of any negative assessment of adultery, as if everyone around forgives such behaviour in a creative person, understanding that it is a necessary condition for "creation". Even minor characters voice approval of such behaviour: "Old man, and he died on a woman. Young one, - said one. - And what? Better than rotting in the hospital, - the second responded". Perhaps a reflection of the author's personality is her tolerance for human weaknesses in general, as well as for the weaknesses of the creative people with whom L. Ulitskaya crossed paths many times throughout her life (Nekrasova, 2013).
Both in "Sonechka" and in the "Fun Funeral", the author used an artistic technique of hanging pictures of the deceased creator by the character who was spiritually closest to him, understood him. For Robert Viktorovich, this is done by his wife Sonia, for Alik -the girl Tishort, a daughter who does not know about blood relationship, but who feels a connection with her father. Only they can hang pictures in the best way, honor the deceased, reconcile with him. And then fame and recognition come to the creators posthumously, their works are appreciated, collectors hunt for them. Perhaps for Ulitskaya as a creator, it is very important that the person's works receive well-deserved recognition, even after the end of his life. Ulitskaya is also actively using the narrative transition from one character to another in the "Fun Funeral", due to which each of the many characters in the story can tell their own story of acquaintance and relations with the main character, the degree of their attachment to him. This approach allows us to imagine the multiplicity of positions and ratings, among which the reader can find the one closest to him, but it is impossible to say for sure which position corresponds to the author's one -the narrators or one of the characters. It should also be noted that in the analyzed works of Ulitskaya there is no gender identification of oneself with one or another character. Where it would be possible to show a conflict between "male" and "female", the author refuses to choose the "side", and sometimes even demonstrates a more emotionally positive, "maternal" attitude to the male character: "Alik himself lay on a wide ottoman, so small and so young, as if the son of himself.
CONCLUSION
According to biographical information, Lyudmila Ulitskaya considers herself a Jew who converted to Christianity, and the topic of Jewry runs through all her works. The internal dialogue of the two religions within the author's personality was quite possibly reflected, in a somewhat satirical form, in the conversation between the Orthodox priest and the Jewish rabbi in "Fun Funeral": "Here they are, new times: neither Jew nor Hellene, and in the most direct, in the most direct sense, too ..." - the priest rejoiced. The rabbi stopped and threatened him with a finger: "Well, for you the most important thing is that not a Jew ..." In general, the whole body of L. Ulitskaya's work is so diverse and multilayered, both thematically and in its use of various expressive means and narrative strategies, that it cannot be assigned to a single category such as "women's prose" or "mass literature".
Thus, the author's personality in the works of Lyudmila Ulitskaya manifests itself both as a biographical personality and as an expression of the author's perception of the world, while the author does not enter into polemics with her characters and does not pass judgement on their characters or actions. The author's narrative strategies are quite diverse. L. Ulitskaya shows the reader the distance between the writer and the characters by including
Atmospheric radiative effects of an in situ measured Saharan dust plume and the role of large particles
This work will present aerosol size distributions measured in a Saharan dust plume between 0.9 and 12 km altitude during the ACE-2 campaign 1997. The distributions contain a significant fraction of large particles of diameters from 4 to 30 μm. Radiative transfer calculations have been performed using these data as input. Shortwave, longwave as well as total atmospheric radiative effects (AREs) of the dust plume are investigated over ocean and desert within the scope of sensitivity studies considering varied input parameters like solar zenith angle, scaled total dust optical depth, tropospheric standard aerosol profiles and particle complex refractive index. The results indicate that the large particle fraction has a predominant impact on the optical properties of the dust. A single scattering albedo of ωo=0.75−0.96 at 550 nm was simulated in the entire dust column as well as 0.76 within the Saharan dust layer at ∼4 km altitude indicating enhanced absorption. The measured dust leads to cooling over the ocean but warming over the desert due to differences in their spectral surface albedo and surface temperature. The large particles absorb strongly and they contribute at least 20% to the ARE in the dusty atmosphere. From the measured size distributions modal parameters of a bimodal lognormal column volume size distribution were deduced, resulting in a coarse median diameter of ∼9 μm and a column single scattering albedo of 0 .78 at 550 nm. A sensitivity study demonstrates that variabilities in the modal parameters can cause completely different AREs and emphasises the warming effect of the large mineral dust particles.
Introduction
As an integral part of the atmospheric aerosol, mineral dust plays an important role in the Earth's climate system. Dust particles modify the transport of shortwave as well as longwave radiation through the atmosphere by scattering and absorption processes. Depending on their size distribution, chemical composition and shape (which determine their optical properties: extinction coefficient, single scattering albedo and phase function), and furthermore depending on the vertical position/extent of a dust layer as well as the local surface albedo, mineral dust particles may have a positive (heating of the climate system) or negative (cooling) radiative effect (Claquin et al., 1998; Sokolik and Toon, 1999; Sokolik, 1999; Myhre and Stordal, 2001).
Several studies of the atmospheric radiative effect (ARE) of mineral dust within the atmosphere exist. Quijano et al. (2000a) calculated solar, thermal and total radiative heating rates as well as radiances and the ARE of dust in order to analyse its influence on the radiation budget of dusty atmospheres. Quijano et al. (2000b) reported that the vertical distribution of the dust, its optical properties, the presence of clouds, the surface albedo of the bottom of the atmosphere (BOA) and the sun position have a significant impact on the radiative effect of dust. Total heating rates are always positive, yielding radiative heating of dust layers forced by increasing dust loading. Furthermore, Quijano et al. (2000a) showed that the diurnally averaged radiative impact at the top of the atmosphere (TOA) depends on the mineralogical composition of the dust. The cloudless total ARE of Saharan dust at the TOA was estimated to be negative (∼−35 W m−2) over the ocean for a cosine of the solar zenith angle (SZA) of µo=0.25 and a dust optical depth of τ(0.5 µm)=1.0. A positive ARE (∼+65 W m−2) was simulated over the desert for µo=0.8 and τ(0.5)=1.0. Moreover, the dust absorption increases with increasing total dust optical depth such that this negative radiative effect is reduced and can turn positive. Haywood et al. (2001) estimated the total solar Saharan dust impact at TOA to be about −60 W m−2 and the single scattering albedo ωo(0.55) to be about 0.87. Furthermore, they found an additional heating in the atmospheric column due to dust absorption. Myhre et al. (2003) also predicted a significant negative total ARE of mineral dust from the Sahara. The solar impact values ranged from −115 to +8 W m−2, while the longwave effect was up to +8 W m−2, and the diurnal mean total forcing was found to be close to −50 W m−2. Zhang et al. (2003) reported a monthly-mean longwave ARE of +7 W m−2 over desert under cloudless conditions. Weaver et al. (2002) calculated a summertime ARE ranging from 0 to −18 W m−2 over ocean and from 0 to +20 W m−2 over land. On the other hand, Myhre and Stordal (2001) found a small global mean radiative effect from −0.7 to 0.5 W m−2 with different contributions to its sign in certain regions.
This work uses data from the second Aerosol Characterisation Experiment (ACE-2) to estimate the ARE during a dust outbreak over the North Atlantic. ACE-2 was carried out over the North-East Atlantic Ocean between Portugal, the Azores and the Canary Islands during June/July 1997. The main purpose of this experiment was to study the characteristics of natural and anthropogenic aerosols as well as their transformation and removal processes. The ultimate goal was to acquire better estimates of the direct and indirect impact of man-made aerosols on climate. Measurements were performed from several ground-based stations, research aircraft and a research vessel. A general overview of the campaign and its main results is presented by Raes et al. (2000). The present paper presents aerosol size distribution measurements in a Saharan dust plume performed onboard the Cessna Citation research aircraft, which was based at Los Rodeos Airport on Tenerife (Canary Islands). The observed aerosol size distributions have been used as input for a radiative transfer model to calculate the ARE of the dust plume.
Section 2 reports on the instrumentation and observations during ACE-2. In Sect. 3 the radiative transfer model package is described as well as the input parameters for the performed sensitivity studies. The optical properties of the measured dust as a function of particle size, wavelength and altitude as well as the results of the sensitivity calculations are presented in Sect. 4, which finishes with detailed discussions of the role of large dust particles. Finally, the conclusions of this work are summarised in Sect. 5.

Instrumentation and observations

On 8 July 1997 a research flight was performed about 50-200 km off the coast of Africa. A map of the flight track is shown in Fig. 1. During this flight seven isobaric flight legs were flown in the free troposphere between 2.7 and 11.6 km altitude, in a stacked pattern. Each flight leg lasted for about 15 min, which corresponds to a horizontal distance of about 150 km. On the transfer back to Tenerife a longer isobaric flight leg was conducted in the marine boundary layer at about 0.9 km altitude.

The aerosol size distributions presented in this work have been composed using the measurements of five instruments onboard the Cessna Citation aircraft during ACE-2, which are summarised in Table 1. A detailed description of the instrumentation can be found in de Reus et al. (2000a) and de Reus et al. (2000b).
The altitude profiles of the sub-micrometer and coarse mode aerosol number concentration observed during this flight are shown in Fig. 2. The sub-micrometer aerosol number concentration is the concentration measured by the CPC with the 0.006 µm cut off diameter (0.006−1 µm), and the coarse mode aerosol concentration is the total particle number concentration measured by the FSSP-300 (0.36−31 µm).The data has been averaged over 100 m altitude bins.Both the submicron and coarse mode particle number concentration strongly decreased with altitude within the marine boundary layer up till 1.5 km altitude.Between 2.7 and 5.8 km altitude a relatively high coarse mode particle number concentration was observed.Trajectory analyses show that the air masses in this layer originated from arid regions on the North African continent, suggesting that the particles in this layer are Saharan dust aerosols.Within this Saharan dust layer the coarse mode particle number concentration remained roughly constant at about 10 cm −3 , while the submicron particle number concentration showed a gradual decrease from about 250 cm −3 at 2.7 km altitude to 150 cm −3 at 5.8 km altitude.In the free troposphere above the dust layer a very low coarse mode particle number concentration was observed (below 0.1 cm −3 ).
Furthermore, Fig. 2 shows the observed water vapour and ozone mixing ratios and the relative humidity (RH) and temperature measured by a radiosonde launched from the island of Tenerife at 06:00 UTC on the same day.Both the marine boundary layer and the dust layer show enhanced values of RH.Within the dust layer, RH increased from 4% at 2.5 km to 35% at 5.5 km altitude, which corresponds to a relatively constant water vapour mixing ratio.The temperature profile shows inversions just above and below the dust layer, which inhibit the mixing of the dust air mass with air masses above and below.
Figure 3 depicts the observed aerosol size distributions (by number and volume) on all seven free tropospheric flight legs and on the flight leg in the marine boundary layer.Note that on the top level (11.6 km) only the coarse mode aerosol size distribution was measured, due to a failure of the instrumentation.For the radiation calculations later in this work, the same sub-micron aerosol size distribution as was observed on the level below (10.0 km) was assumed.Therefore, the sub-micron part of the size distribution at 11.6 km altitude is marked grey in Fig. 3.Note also that on the 10.0 km level, the size distributions derived from the OPC and FSSP do not fit very well together, in contrast to the measurements within the dust layer.Since these particles do not contribute much to the dust optical depth (see Sect. 4), this difference has not been investigated in detail.
From Fig. 3 it becomes obvious that coarse mode particles (Dp>1 µm) were only observed in the marine boundary layer as well as in the Saharan dust layer. Within the marine boundary layer, the most dominant mode in the aerosol size distribution was the accumulation mode, whereas in the free troposphere Aitken mode particles dominated the size distribution by number. The total aerosol mass in the marine boundary layer and the Saharan dust layer, however, is clearly determined by coarse mode particles. The dominant volume size distribution mode in these layers is the coarse mode. A more detailed description of the aerosol size distributions observed in the Saharan dust layer is given by de Reus et al. (2000a).
Figure 3 also shows the total particle number concentration, which was observed during the flight. Here, the observed particle number concentrations at 0.9 and 10.4 km altitude were extrapolated to sea level and the tropopause (17 km), respectively. For the radiative transfer simulations the observed aerosol size distributions were scaled to the observed total particle number concentration, as shown in Fig. 3. Note that in the following sections the flight levels are numbered so that the lowest level in the marine boundary layer is level 1 and the top level at 11.6 km altitude is level 8.
Radiative transfer model package and sensitivity studies
In order to calculate atmospheric radiative flux densities, heating rates and, finally, the ARE of mineral dust, the optical properties of the dusty model atmosphere are needed.These are the extinction optical depth, the single scattering albedo and the scattering phase function.In the following the model package and the input parameters for these computations are described as basis for detailed sensitivity studies.
Model description
The model atmosphere with 300 layers and a resolution of Δz=100 m between the bottom and 20 km altitude as well as of Δz=1 km between 20 and 120 km altitude was chosen in accordance with the Air Force Geophysics Laboratory (AFGL) models following Anderson et al. (1986). Profile measurements of meteorological quantities and/or trace gases can be taken into account.
Spectral range and resolution: 200 nm to 100 µm and a wavelength grid according to a constant wave number interval increment of 10 cm −1 leading to a total number of 4990 spectral intervals having widths as given in Table 2.This choice of the wavelength grid has several reasons: On the one hand, a constant spectral wave number increment is useful, since the gas absorption is calculated in the wave number space with the help of the high-resolution transmission molecular absorption database (HITRAN; see http://cfa-www.harvard.edu/HITRAN/),and a higher resolution of wavelength intervals in the LR (longwave, infrared or thermal range, λ>4 µm) would require an enormous computational effort due to the large number of absorption lines.On the other hand, the energy content in the LR and FLR (far LR, λ>20 µm) is much lower than in the SR (shortwave range, λ<4 µm).Additionally, considering the continuum absorption of water vapour (for λ>500 nm), each absorption line is only allowed to contribute to the absorption for wave number distances ν−ν o ≤25 cm −1 from the line centre ν o .The far-wing effect of the lines is described by the continuum.Hence, a wave number increment of 10 cm −1 in conjunction with an interval cut-off of ±15 cm −1 effectively means that the lines outside a particular interval only add to the absorption inside, if their distance to the interval is ≤15 cm −1 .Thus, their wings influence the absorption in this interval only up to the distance ν−ν o =25 cm −1 .
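As a quick cross-check of the interval count stated above, a minimal sketch (assuming nothing more than a closed grid from 100 cm−1, i.e. 100 µm, to 50 000 cm−1, i.e. 200 nm, in steps of 10 cm−1; this is not code from the model itself) reproduces the 4990 spectral intervals and illustrates how the interval width in wavelength units grows towards the infrared:

```python
# Minimal sketch: reconstruct the constant-wavenumber model grid (assumption:
# the grid simply spans 200 nm ... 100 um in steps of 10 cm^-1).
nu_min = 1e4 / 100.0     # 100 um  ->  100 cm^-1
nu_max = 1e7 / 200.0     # 200 nm  ->  50 000 cm^-1
d_nu = 10.0              # constant wavenumber increment in cm^-1

n_intervals = int(round((nu_max - nu_min) / d_nu))
print(n_intervals)       # 4990 spectral intervals, as stated in the text

# The width of one interval in wavelength units grows towards the infrared:
for nu in (nu_max, 2500.0, nu_min + d_nu):
    lam_lo = 1e7 / nu              # nm
    lam_hi = 1e7 / (nu - d_nu)     # nm
    print(f"around {lam_lo:9.1f} nm the interval is {lam_hi - lam_lo:9.3f} nm wide")
```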
Literature data for the spectral complex refractive index of dust are only available for wavelengths up to 39 µm.Therefore, all the fine-band radiative transfer calculations have been performed up to 39 µm (4974 intervals), too.
Two radiation sources are considered on the entire wavelength range of the model: the shortwave solar radiation as well as the longwave Planckian radiation from surface and atmosphere.Both sources overlap in the spectral region close to the wavelength of 4 µm, which was defined as the bound between SR and LR in the paragraph before.Note further that the considered energy transport in the radiative transfer calculations only covers the spectral range up to the wavelength of 39 µm.The relative portion of the solar radiation belonging to the spectral range 39−100 µm is negligibly small with ∼10 −3 % compared to the entire spectrum.In contrast, the Planckian radiation curves in a temperature range of 290−330 K contribute ∼5 % in this spectral range being equivalent to ∼7 Wm −2 .The corresponding ARE (in Wm −2 ) evaluated for this particular spectral range is much smaller so that any radiative effect in the wavelength range 39−100 µm can be neglected.Moreover, all longwave radiation leaving the Earth to space is transported through the atmospheric window (about 8−13 µm).
The spectral resolution of 1 cm −1 of the MOD-TRAN ETR extraterrestrial solar spectrum (http: //rredc.nrel.gov/solar/spectra/am0/modtran.html) is taken as upper atmospheric boundary condition averaged to the spectral grid of the model.
The Rayleigh scattering of air molecules is considered after Nicolet (1984), and the gas absorption is computed using the HITRAN of 2004: The spectral absorption cross sections of all HITRAN gases with available mixing ratio profiles are calculated at the 301 atmospheric levels according to their temperature and pressure under the assumption of the Voigt line shape and for the wavelength intervals corresponding to the constant wave number resolution of 10 cm −1 .An interval cut-off of ±15 cm −1 is taken into account, so that every absorption line covers a maximum spectral range of ±25 cm −1 from line centre.This corresponds to the definition of the water vapour continuum and to the fact that for larger cut-offs the far wings of far lines outside the interval strongly overestimate the absorption inside.This is because the Voigt line shape is only an approximation for the real line shape, which returns for the far wings the incorrect Lorentzian profile in the lower atmosphere.
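The truncated line-shape treatment described above can be illustrated with a minimal sketch; the line centre, intensity and widths below are placeholders rather than HITRAN values, and the calculation is simplified to a single, isolated line:

```python
# Minimal sketch of the truncated Voigt line shape described in the text.
# The line strength, Doppler width and Lorentz width are placeholders,
# not values taken from HITRAN.
import numpy as np
from scipy.special import voigt_profile

nu0 = 1500.0          # line centre in cm^-1 (placeholder)
S = 1e-22             # line intensity in cm^-1 / (molec cm^-2) (placeholder)
sigma_d = 0.02        # Gaussian (Doppler) width in cm^-1 (placeholder)
gamma_l = 0.05        # Lorentzian (pressure) half width in cm^-1 (placeholder)

nu = np.arange(nu0 - 40.0, nu0 + 40.0, 0.01)       # fine wavenumber grid
phi = voigt_profile(nu - nu0, sigma_d, gamma_l)    # normalised line shape

# Wing cut-off: the line only contributes within +/- 25 cm^-1 of its centre;
# the far wings are represented by the water vapour continuum instead.
phi = np.where(np.abs(nu - nu0) <= 25.0, phi, 0.0)

k_abs = S * phi                                    # absorption cross section
print(f"peak cross section: {k_abs.max():.3e} cm^2 per molecule")
```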
Table 3. Spectral bands for averaging the dust optical properties.
Thus, spectral absorption coefficients as well as spectral level transmittances have been computed, which were averaged over each individual wave number interval in order to compute the corresponding mean layer optical depths.
Aerosol size distributions, given size-bin-resolved or by modal parameters, can be considered to obtain the aerosol's optical properties via Mie computations.This simplification of spherical aerosol particles, see e.g.Lafon et al. (2006), is justified focussing on radiative flux density computations.However, particle non-sphericities will be considered in a following work.Further, standard aerosol optical properties can be chosen (Shettle and Fenn, 1979).
The discrete-ordinate method after Stamnes et al. (1988) was used with four streams for the comprehensive calculations. Liou et al. (1988) demonstrated the attractiveness of the delta-four-stream approach in terms of accuracy and speed. The following radiative quantities as a function of altitude z and wavelength λ were computed: the upward (+) and downward (−) radiative flux densities (irradiances) F±(z, λ) as the basis for the shortwave and longwave radiative heating rates,

h_{s,l}(z) = \frac{1}{c_p\,\rho(z)}\,\frac{\partial F_{\mathrm{net}}(z)}{\partial z},  (1)

as well as the total heating rate h_t(z) := h_s(z) + h_l(z), where F_{\mathrm{net}}(z) = F^-(z) − F^+(z) is the net flux density (integrated over the respective spectral range), c_p the specific heat and ρ the air density. Moreover, spectrally integrated quantities were calculated, e.g. the ARE according to Eqs. (2) and (3) in Sect. 3.3.
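A minimal numerical sketch of the heating-rate relation reconstructed in Eq. (1), using synthetic flux and density profiles rather than actual model output, could look as follows:

```python
# Minimal sketch: heating rate from the vertical divergence of the net flux,
# h(z) = 1/(c_p * rho) * dF_net/dz with F_net = F_down - F_up.
# The flux and density profiles below are synthetic placeholders.
import numpy as np

z = np.arange(0.0, 20_000.0, 100.0)          # altitude grid in m
F_down = 1000.0 - 0.01 * z                   # synthetic downward flux, W m^-2
F_up = 300.0 + 0.005 * z                     # synthetic upward flux, W m^-2
rho = 1.2 * np.exp(-z / 8000.0)              # air density, kg m^-3
c_p = 1004.0                                 # specific heat, J kg^-1 K^-1

F_net = F_down - F_up                        # net (downward) flux, W m^-2
dFnet_dz = np.gradient(F_net, z)             # flux divergence, W m^-3

h = dFnet_dz / (c_p * rho)                   # heating rate in K s^-1
h_per_day = h * 86400.0                      # convert to K d^-1
print(h_per_day[:3])
```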
Input parameters for the radiative transfer model
Meteorological quantities: The measurement profiles of temperature and pressure onboard the aircraft were similar to the tropical AFGL standard atmosphere.Extrapolations towards the tropopause had to be performed to adjust the measured and standard profiles.The height of the tropopause on the day of the measurements was 17 km, according to the temperature profile observed by a radiosonde launched at 06:00 UTC on 8 July 1997 from the island of Tenerife.The relative humidity, as observed by this radiosonde, has also been used as input for the model, see Fig. 2. Aerosol data: The measured size distributions of the dust's number concentration at the eight flight levels with the according altitudes (see Fig. 3) were assigned to model height levels.Size-averaged, size dependent as well as band averaged (Table 3) optical properties were calculated at these levels using Mie scattering theory and a certain spectral complex refractive index as basis for these calculations (Figs. 6,7,8).For simulating the radiative transfer the measured aerosol size distributions were normalised with respect to the total number concentration of 1 cm −3 , and the calculated normalised optical properties were linearly interpolated to the intermediate levels of the model atmosphere.Then, they were scaled by the measured total number concentration (Fig. 3) resulting, finally, in the spectral and height dependent scattering and absorption optical depth, the single scattering albedo as well as the asymmetry parameter.
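This size-averaging step can be sketched as follows; the size distribution, bin grid and refractive index are illustrative placeholders (not the ACE-2 measurements), and the miepython package is used here merely as one possible Mie implementation:

```python
# Minimal sketch: bulk optical properties of a size distribution via Mie theory.
# The bins, number concentrations and refractive index are illustrative only.
import numpy as np
import miepython

wavelength = 0.55                     # um
m = 1.53 - 0.0063j                    # refractive index (miepython uses n - ik)

D = np.logspace(-2, 1.5, 60)          # bin centre diameters, um
dN = 1.0 / D * np.exp(-0.5 * (np.log(D / 0.3)) ** 2)   # fake dN/dlnD, cm^-3

x = np.pi * D / wavelength            # size parameters
qext, qsca, qback, g = miepython.mie(m, x)

area = np.pi * (D / 2.0) ** 2 * 1e-8  # geometric cross section, cm^2
dlnD = np.gradient(np.log(D))

ext = np.sum(qext * area * dN * dlnD)               # extinction coefficient, cm^-1
sca = np.sum(qsca * area * dN * dlnD)               # scattering coefficient, cm^-1
ssa = sca / ext                                     # single scattering albedo
g_bulk = np.sum(g * qsca * area * dN * dlnD) / sca  # scattering-weighted asymmetry

print(f"ssa(0.55) = {ssa:.3f}, g(0.55) = {g_bulk:.3f}")
```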
Gas absorption: The tropical AFGL standard volume mixing ratio profile of ozone was replaced by the measured profile data (Fig. 2) in the lower atmosphere, and from the measured relative humidity a water vapour volume mixing ratio profile was computed. At higher altitudes the tropical AFGL standard profiles were used.

Spectral complex refractive index: It has been reported (2006) that the variability of the refractive index affects dust optical properties and that the single scattering albedo of mineral dust is significantly influenced by this quantity. Thus, one has to keep in mind the variability of the refractive index in literature data for mineral dust aerosols (dotted curves in Fig. 4) following Patterson et al. (1977), Carlson and Benjamin (1980), Sokolik et al. (1993) and Sokolik et al. (1998). Since no direct measurements of the refractive index were available from ACE-2, first, a mean real and imaginary part (the two solid lines in Fig. 4) were extracted from a wide range of the literature data by applying a moving average, leading to an imaginary part of 0.006 at 550 nm, similar to the 0.005 used by Formenti et al. (2000). Second, for sensitivity studies, maximum and minimum spectral real and imaginary parts of the refractive indices were calculated from these literature data as upper and lower envelopes of the various curves. Third, the dust-like (black dashed graphs) refractive index following Shettle and Fenn (1979), based on Volz (1972) and Volz (1973), was used, since it is, contrary to the other cited data, defined over the entire wavelength range of the radiative transfer model from 200 nm to 39 µm.
Spectral surface albedo: The albedo of desert (sand) was taken from the MOSART database (http://eic.ipo.noaa.gov/IPOarchive/SCI/atbd/msoCD9FD.pdf) and the albedo of ocean from Schröder (2004).Both albedos are illustrated in Fig. 5 where, for better clarity, the albedo of the ocean surface has been multiplied by a factor of 5.
Calculated quantities, reference cases and sensitivity studies
From the input parameters the following optical properties were calculated for the measured Saharan dust as function of altitude and wavelength: i) the scattering, absorption and extinction coefficients, ii) the single scattering albedo, iii) the asymmetry parameter of the scattering phase function and iv) the layer optical depths.These quantities will be analysed and interpreted in Sect.4.1.
Since the airborne measurements were carried out over the Atlantic Ocean, one part of the simulations refers to the albedo of the ocean.However, as mentioned in the introduction, the underlying surface can have a significant influence on the radiation field in the dusty atmosphere.Therefore, two reference cases have been defined: the ocean and desert case.Both cases were defined for a cloudless tropical atmosphere including gas absorption, Rayleigh scattering and the measured dust particles as aerosols only.
For the wavelength dependence of the real and imaginary parts of the complex index of refraction the moving average data (Fig. 4) have been chosen. Note that these two reference cases differ in the spectral albedo (Fig. 5) as well as in the emission temperature of the surface (22 °C in the case of ocean, 35 °C for desert). If the dust is entirely neglected in both cases, the atmosphere is referred to as "clear sky". Unless otherwise noted, all simulations were performed for a solar zenith angle (SZA) of 0° in order to obtain the maximum radiative effect. Thereby, the so-called atmospheric radiative effect (ARE) was calculated at TOA through

\Delta F(\mathrm{TOA}) = \int_\lambda \left[ F^+(\mathrm{TOA},\lambda) - F^+_r(\mathrm{TOA},\lambda) \right] \mathrm{d}\lambda  (2)

as well as at BOA following

\Delta F(\mathrm{BOA}) = \int_\lambda \left[ F^-(\mathrm{BOA},\lambda) - F^-_r(\mathrm{BOA},\lambda) \right] \mathrm{d}\lambda  (3)

in W m−2, where F+(z, λ) and F−(z, λ), in W m−2 nm−1, are the upward and downward irradiances of a considered scenario, which is compared to a reference case (index r). The above spectral integrations in Eqs. (2) and (3) were performed over the SR (ΔF with index s) as well as the LR (index l) and were summed to obtain the total effect (index t). While ΔF_{s,l,t}(TOA) characterises the radiation loss of the Earth-atmosphere system (EAS), the sum ΔF_{s,l,t}(A) := ΔF_{s,l,t}(BOA) + ΔF_{s,l,t}(TOA) describes this radiation deficit for the atmosphere only. Thus, ΔF_{s,l,t}(A)>0 means a cooling and <0 a warming of the atmosphere.
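A minimal sketch of this bookkeeping, with synthetic spectral fluxes and the sign convention reconstructed above, might read:

```python
# Minimal sketch: ARE at TOA/BOA as wavelength-integrated differences between
# a scenario and a reference case. The spectral fluxes are synthetic placeholders.
import numpy as np

lam = np.linspace(0.2, 39.0, 500)          # wavelength grid, um
F_up_toa = 200.0 * np.exp(-lam / 5.0)      # scenario upward flux at TOA, W m^-2 um^-1
F_up_toa_ref = 220.0 * np.exp(-lam / 5.0)  # reference upward flux at TOA
F_dn_boa = 400.0 * np.exp(-lam / 3.0)      # scenario downward flux at BOA
F_dn_boa_ref = 350.0 * np.exp(-lam / 3.0)  # reference downward flux at BOA

dF_toa = np.trapz(F_up_toa - F_up_toa_ref, lam)   # Eq. (2): ARE at TOA
dF_boa = np.trapz(F_dn_boa - F_dn_boa_ref, lam)   # Eq. (3): ARE at BOA
dF_atm = dF_boa + dF_toa                          # > 0: cooling of the atmosphere

print(dF_toa, dF_boa, dF_atm)
```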
With the help of the vertical shortwave, longwave as well as total radiative heating rates h_{s,l,t}(z) of an individual scenario, obtained from Eq. (1) in Sect. 3.1, the deviation heating rates Δh_{s,l,t}(z) := h_{s,l,t}(z) − h^r_{s,l,t}(z), in units of K d−1, were computed in comparison to the reference cases (r) (Eq. 4).
In order to study the AREs of the in situ measured ACE-2 mineral dust the following scenarios have been considered compared to the two reference cases containing the dust: 1. Neglect the measured dust -in order to explore its pure optical effect.
2. Consider a tropospheric standard maritime aerosol in addition to the dust -in order to learn about the effects of a dusty atmosphere directly over the ocean or over desert but close to the coast.
3. Vary the total spectral optical depth of the dust -in order to simulate a lower or larger load of the measured dust particles.
4. Change the solar zenith angle -to demonstrate the diurnal variation of the impact of the dust.
5. Use different sets of spectral complex refractive indices compared to the spectral mean of Fig. 4 -due to the uncertainties with respect to this highly relevant quantity as indicated in this figure.
6. Neglect dust particles with diameters larger than 4 µm in the measured size distributions -in order to work out especially the portion of the large particles of the dust on its radiative effect.
Results
In the following τ means the optical depth, g the asymmetry parameter, ω o the single scattering albedo and n I the imaginary part of the complex refractive index.In order to indicate their dependence on wavelength, its value in micrometres is added in parentheses, e.g.ω o (0.55).
Optical properties of the measured mineral dust
In the lower part of the atmosphere during the ACE-2 campaign large particles (Dp>4 µm) were observed (see Fig. 3, levels 1-4). In Fig. 6 (top) one can recognise that these larger particles are characterised by a lower solar single scattering albedo: ωo(0.55)∼0.76 for the levels 2-4 in the dust layer as compared to ωo(0.55) up to 0.96 (+26.3%) for the levels 5-8 where no large particles were present. This corresponds to the value of 0.73 measured during ACE-2 within the dust plume (Öström and Noone, 2000). Figure 6 (top) also demonstrates that scattering dominates the shortwave extinction, while there is mainly absorption in the longwave spectral range. This opposite behaviour is less pronounced if large particles are present: they decrease the shortwave and increase the longwave scattering fraction (compare level 2 and level 8, with and without these large particles). Figure 6 (bottom) illustrates the size-averaged asymmetry parameter versus wavelength for all flight levels. One can see primarily forward scattering in the shortwave range, with values up to g(0.55)∼0.81 for the levels 2-4, while backward scattering becomes more dominant in the longwave spectral region. The levels 5-8 show lower values down to g(0.55)∼0.65 (−19.8%) due to the absence of the large particles. Further, the larger particles at the lower levels 2-4 increase g in the entire spectral range, but more dominantly in the longwave.

Fig. 6. Solid lines: Spectral size integrated single scattering albedo (top) and asymmetry parameter (bottom) at the eight flight levels derived from the measured size distributions for the mean complex refractive index (Fig. 4). Dashed lines: Difference of the solid lines for the mean imaginary part and the calculated curves for the minimum imaginary part (lower envelope of the dotted red lines in Fig. 4).
In the marine boundary layer (level 1) there were both large particles and a higher fraction of particles with diameters 0.08<Dp<0.4 µm, resulting in a single scattering albedo ωo(0.55)∼0.9 and an asymmetry parameter g(0.55)∼0.6, which are larger and lower than in the levels 2-4, respectively. That means that these optical properties depend significantly on the ratio of smaller and larger particles, which differed strongly with altitude.

Fig. 7. Solid lines: Single scattering albedo (top) and asymmetry parameter (bottom) at flight level 3 within the dust plume, size integrated cumulatively up to the particle diameter Dp, using the mean complex refractive index shown in Fig. 4. Dashed lines: Difference of the solid lines for the mean imaginary part and the calculated curves for the minimum imaginary part (lower envelope of the dotted red lines in Fig. 4).
Figure 7 (top) and (bottom) illustrate the cumulative sizeaveraged single scattering albedo and asymmetry parameter as function of the particle diameter D p at the level 3 within the dust plume averaged over the spectral bands summarised in Table 3. Cumulative means that the size integration over the measured number size distributions considers all particles up to the diameter D p , so that the contribution of respective particle sizes can be investigated.Both Figs. 7 indicate a significant influence of the large particles.They increase the asymmetry parameter and, thus, the forward scattering in the entire spectral range as well as the single scattering albedo in the longwave bands.In the shortwave bands they decrease the single scattering albedo: In band 2 (0.2−0.7 µm) ω o increases from a value of 0.73, when taking into account the entire size distribution, up to 0.83 (+13.7%), if particles with diameters below 3 µm are considered.Hence, the coarse mode particles can decrease/increase the shortwave single scattering albedo/asymmetry parameter as also reported by Defresne et al. (2002) and Wang et al. (2006).This is highly important for comparisons to values derived from scattering and absorption measurements (e.g. by nephelometer as well as absorption photometer, respectively), if the measurement system has a cut-off diameter of ∼3 µm (Haywood et al., 2003) meaning that a significant fraction of optically efficient particles is lost.
There is also a significant uncertainty with respect to the imaginary part of the complex refractive index (Fig. 4).The previous calculations were performed for the mean imaginary part (red solid curve) with n I (0.55)=0.0063.The same computations were made for the lower envelope of all red dotted literature data leading to n I (0.55)=0.0033.The resulting single scattering albedo as well as asymmetry parameter were subtracted from the calculated values for the mean imaginary part, and these differences were plotted as dashed curves in the Figs.6 and 7.These four figures demonstrate that the asymmetry parameter is less sensitive to the decrease of the imaginary part compared to the single scattering albedo.In the shortwave range the single scattering albedo increased by up to 0.15 (Fig. 6, top), while the asymmetry parameter only decreased by up to 0.05 (Fig. 6, bottom).Thereby, ω o (0.55) rises from 0.76 to 0.84 (+10.5%) and g(0.55) declines from 0.81 to 0.79 (−2.5 %) for the dust aerosol.In the longwave range the changes in the optical properties are larger with | ω o |<0.45 as well as | g|<0.1 and significant differences in the various flight levels.In the Figs.7 (top) and (bottom) the variations of the size cumulative optical properties of the measured dust due to the lower imaginary part of the refractive index are shown for the different spectral bands as dashed curves.For band 2 and a particle diameter of 3 µm the single scattering albedo/asymmetry parameter increases/decreases from 0.83/0.76 to 0.91/0.74(+9.6/-2.6%).Even lower values of the imaginary parts, e.g.suggested by Haywood et al. (2003) with n I (0.55)=0.0015, can produce even larger values for the single scattering albedo and lower values for the asymmetry parameter.
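The sensitivity of the single scattering albedo to the imaginary part can be illustrated with a single-particle Mie calculation; the 3 µm diameter and the real part of 1.53 used below are merely illustrative choices, not values from the measurements:

```python
# Minimal sketch: single scattering albedo of one 3 um particle at 550 nm for
# the mean and the lower-envelope imaginary part quoted in the text.
import numpy as np
import miepython

x = np.pi * 3.0 / 0.55                      # size parameter of a 3 um sphere
for n_imag in (0.0063, 0.0033):
    qext, qsca, qback, g = miepython.mie(1.53 - 1j * n_imag, x)
    print(f"n_I = {n_imag}: ssa = {qsca / qext:.3f}, g = {g:.3f}")
```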
The significant influence of large particles on the extinction can be seen in Fig. 8 (top) showing the cumulatively size integrated extinction coefficient averaged over the spectral bands of Table 3 for flight level 3 as function of the particle diameter D p .In addition, the curves have been normalised to the maximum observed particle size to obtain a function between 0 and 1, from which one can read off the portion of the respective diameter on the extinction coefficient in percent: Particles larger than 3 µm contribute up to 50% in the shortwave and 90% in the longwave bands.Note that the maximum of extinction is observed in the dust layer (levels 2-4), see Fig. 8 (bottom), and the extinction is most dominant in the shortwave bands 2-6.
Figure 9 (top) illustrates the spectral dust extinction optical depth as a function of the altitude.In the longwave the largest contributions are caused by layers below 7 km, e.g.inside the dust plume with the maximum extinction occurring at 4 km altitude (Fig. 8,bottom).For the UV wavelengths the lowermost 1 km thick atmospheric layer contributes as much as 65% to the total optical depth (Fig. 9, bottom).This is due to the high number concentration at level 1 at 900 m altitude (Fig. 3), which has been adopted for all model layers below.The calculated total dust optical depth was computed with values of τ (0.2)=1.18 and τ (0.55)=0.66.These simulated optical depths will be compared in the following with independent measurements performed during ACE-2: -The dust optical depth was measured during ACE-2 on the slope of the volcano "El Teide" at 3570 m (comparable to flight level 3 of the present work) using a sun photometer (Formenti et al., 2000) leading to a mean optical depth of τ (0.5)=0.136±0.01 on 8 July.Model calculations using the airborne measured size distributions yield a value of τ (0.5)=0.179 at 3600 m, which is in agreement with the sun photometer data within the expected uncertainties, e.g., that airborne and groundbased measurements were performed at different geographical locations.
-Considering the entire dust column from the surface to the 11.6 km, Smirnov et al. (1998) derived τ (0.5) of 0.37±0.04for the averaged total optical depth.The ACE-2 airborne size distributions yield τ (0.5)=0.31 for the dust plume extending between 0.9 and 5.5 km.
From the aircraft observations at 900 m it is concluded that (see Fig. 2) a second large particle dust plume existed below this particular altitude.Since no in situ particle measurements were made, the size distribution data have been extrapolated homogenously for altitudes below the 900 m level.In this manner one obtains τ (0.55)=0.66 representing a scenario as it may be observed closer to the Saharan source regions.
Formenti et al. (2000) retrieved monomodal size distributions between the diameter bounds 2 and 20 µm from spectral optical depths measured by sun photometers on 8 July on El Teide. Assuming a complex refractive index of 1.55−0.005i they found ωo(0.5)=0.87±0.29 and g(0.5)=0.73±0.15, confirming the values computed above.

Fig. 10. Profiles of the shortwave, longwave as well as total radiative heating rates for the reference case: measured ACE-2 dust over ocean (solid) and desert (dashed).
The AREs of the measured mineral dust plume
In the following the sensitivity studies 1-6 are investigated as defined in Sect.3.3.The corresponding AREs calculated at TOA as well as BOA over ocean and desert are listed in Table 4 and discussed in the Sects.4.2.1 to 4.2.6.
Figure 10 shows the shortwave, longwave as well as total heating rates for the two reference cases ocean and desert.The heating in the shortwave and cooling in the longwave region lead to a positive total heating inside the dust plume and in the lowest kilometre of the atmosphere.Due to the higher surface albedo over the desert the solar heating increases weakly above the altitude of 1 km and strongly below.The longwave radiative flux divergence is mainly affected in the lower troposphere below the altitude of 1 km with a significant positive heating close to the surface.Here, the longwave cooling over ocean turns into a heating over desert.
From the previous profiles of the reference heating rates the deviation heating rates h s,l,t (z), see the Eqs.( 1) and (4), were calculated over ocean (solid) and desert (dashed) within the scope of the sensitivity studies 1-6 and are presented in the Figs.11 to 15.
Note that the profiles presented in Figs. 10 to 15 exhibit a certain degree of vertical structure. This can be traced back to the variabilities in the input data (Figs. 2 and 3).

Fig. 11. Profiles of the deviation from the total radiative heating rates of the reference cases ocean (solid) as well as desert (dashed), Δh_t(z), if the measured dust is neglected completely (no dust) or a standard tropospheric aerosol of Shettle and Fenn (1979) is added with surface number concentrations 10 000, 20 000 and 30 000 cm−3 (ShFe10000/20000/30000).
Radiative effect of the measured dust
This scenario neglects the measured mineral dust totally. The simulations yield a negative Δh_t due to reduced absorption, as follows from the curve "no dust" in Fig. 11. This effect is stronger in the desert case, since a higher amount of radiation reflected from the surface is not absorbed, and the behaviour corresponds to the calculated AREs at the bottom and top of the atmosphere, see Table 4 and sensitivity study 1: The neglected dust gives rise to an increased ARE at BOA due to reduced absorption, with a larger value of ΔF_t(BOA) over ocean (+202 W m−2) than over the desert (+160 W m−2). However, the behaviour at the TOA is quite different over the two surfaces: A negative total ARE with ΔF_t(TOA)=−53 W m−2 was computed over the ocean, since the reflecting dust is not present, but a positive value of +107 W m−2 over the desert due to the lack of the absorbing dust. This means a total radiative cooling/warming of the EAS in the case of the dusty atmosphere over the ocean/desert surface. These values are similar to Quijano et al. (2000a), Haywood et al. (2001) and Myhre et al. (2003). Considering the atmosphere only, the dust always produces a warming due to ΔF_t(A)=−149/−267 W m−2.
The values of ΔF_l(BOA/TOA) indicate that the dust heats/cools the atmosphere below/above the dust layer in the longwave.
Presence of an additional standard maritime aerosol
Figure 11 presents the influence of an additional standard oceanic aerosol after Shettle and Fenn (1979) on the radiative heating of the dusty atmosphere. Its total number concentration is assumed to decrease exponentially with increasing altitude (see Fig. 3 and compare it to the measured number concentration of the Saharan mineral dust) and is varied through the surface values of 10 000, 20 000 and 30 000 cm−3. These aerosols, mainly containing particles <1 µm, are added to the observed dust, leading to a positive radiative heating due to the increase of the total aerosol optical depth (Fig. 11). The values of ΔF_t(A) calculated from Table 4 (sensitivity study 2) are always negative, affirming this atmospheric warming effect, which is stronger over the reflecting desert and increases with the number concentration of the standard aerosol.

Fig. 12. Profiles of the deviation from the total radiative heating rates of the reference cases ocean (solid) as well as desert (dashed), Δh_t(z), as function of the scaled measured dust optical depth.
The maritime aerosol particles scatter strongly, resulting in enhanced multiple scattering (backscattering), but also leading to increased total absorption by the combined aerosol. Thus, less radiation reaches the Earth's surface, with ΔF_t(BOA)<0, but more reaches the top of the atmosphere: ΔF_t(TOA) exhibits positive values over ocean, indicating a cooling of the EAS that is larger the more the surface number concentration of the oceanic aerosol is increased. Over desert the warming impact (ΔF_t(TOA)=−15.5 W m−2) turns into a cooling (+5.4 W m−2) when raising the load of the backscattering standard aerosol.
Dusty atmosphere by varying the spectral total dust optical depth
The total load of the measured dust is varied by scaling the originally calculated total optical depth τ(λ) to the values τ_new(0.55)=0.25/0.5/1.0/2.0/5.0 and τ_new(λ)=2τ(λ) with τ_new(0.55)=1.32. These cases of weaker and stronger dust loads than observed during ACE-2 result in heating-rate deviations as displayed in Fig. 12. In comparison to the reference cases, a larger optical depth causes enhanced positive total heating due to increased absorption, and the relative increase as well as decrease of h_t is approximately equal for ocean and desert. However, the changes in the shortwave irradiances at TOA are different. Table 4 (sensitivity study 3) shows that the total upward radiation at TOA increases with larger total optical depths over ocean but decreases strongly over desert, since mainly solar radiation, reflected from the surface, is multiply scattered and absorbed within the dust in the latter case. Moreover, in the ocean case this behaviour is only observed up to a certain value of the optical depth. For larger values the reflected radiation at TOA decreases again; consider the value ΔF_t(TOA)=+5 W m−2 in the case of "τ(0.55)=5.0", which is substantially lower than the +27 W m−2 for the lower optical depth "τ(0.55)=2.0". The reason for this behaviour is that a larger total optical depth also gives rise to enhanced scattering and, thus, longer multiple scattering paths along which radiation is absorbed. This effect has also been reported by Quijano et al. (2000a).

Fig. 13. Profiles of the deviation from the total radiative heating rates of the reference cases ocean (solid) as well as desert (dashed), Δh_t(z), for various solar zenith angles between 10° and 60°.
Influence of the solar zenith angle
An increase in the SZA has two opposite radiative effects within the atmosphere.On the one hand, the optical path of the direct solar light through the atmosphere increases, so that relatively increasing absorption is expected.On the other hand, this incident solar radiation decreases absolutely with increasing SZA.The resulting total radiative effect is negative.This cooling is shown in Fig. 13 representing the influence of the SZAs between 10 • and 60 • .
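Both competing effects can be summarised for the direct beam alone in a simplified Beer-Lambert sketch (ignoring scattering into the beam and the longwave source):

F_{\mathrm{dir}}(\mathrm{BOA}) \approx \mu_0 \, S_0 \, \exp\!\left(-\frac{\tau}{\mu_0}\right), \qquad \mu_0 = \cos(\mathrm{SZA}),

so that increasing the SZA reduces the absolute input of solar energy even though the relative absorption along the longer slant path (∝ 1/μ0) increases.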
Since the radiative transfer model deals with two source terms in the entire wavelength range, the solar spectrum also contributes to all longwave integrations, see Eqs. (2) and (3). Therefore, the values of ΔF_l(BOA) and ΔF_l(TOA) in Table 4 (sensitivity study 4) show small changes in the longwave radiative transfer for the various SZA for both ocean and desert; however, the solar radiative transfer plays the decisive role. Table 4 indicates that less upward/downward total radiation is observed at TOA/BOA relative to the reference cases for increasing SZA due to less incident radiation. Further, one can recognise significantly higher negative values of ΔF_t(TOA) over the desert compared to the ocean. This emphasises the warming effect of the dust aerosols over the desert.

Fig. 14. Profiles of the deviation from the total radiative heating rates of the reference cases ocean (solid) as well as desert (dashed), Δh_t(z), as function of the complex refractive index.
Effect of the spectral complex refractive index
The dotted curves in Fig. 4 illustrate the uncertainties in the complex refractive index of mineral dust.Therefore, maximum and minimum spectral real as well as imaginary parts of the literature data (dotted curves) were considered in all their combinations to compute AREs and heating rates.In addition, the "dust-like" complex refractive index (black dashed) following Shettle and Fenn (1979) was used, since this is the only source of data, which covers the entire spectral wavelength range considered in this study.
The results of the heating-rate simulations (Fig. 14) demonstrate that the imaginary part has the more significant effect on the total radiative heating. Smaller values of the imaginary part cause less absorption and, thus, a reduced heating, and vice versa. When a higher fraction of larger particles is present (between 2 and 7 km altitude), an increase of the real part leads to a lower heating. However, for higher fractions of small particles (below 2 km altitude), a higher real part yields a positive contribution to the radiative heating. The "dust-like" refractive index results in a larger heating.
Table 4 (sensitivity study 5) shows the significant influence that the complex refractive index can have on the ARE. A lower imaginary part causes a higher atmospheric radiative cooling of the EAS with positive ΔF_t(TOA), and vice versa. This behaviour is considerably stronger over the desert compared to the ocean simulations.

Fig. 15. Profiles of the deviation from the shortwave, longwave and total radiative heating rates of the reference cases ocean (solid) and desert (dashed), Δh_{s,l,t}(z), if the large dust particles with diameters larger than 4 µm are neglected.
Absence of the large dust particles
The influence of the large particles on the radiative transfer is investigated separately in this subsection.To do this all particles in the measured size distributions having diameters larger than 4 µm were neglected.The re-computed optical properties are expected to show decreased forward scattering as well as absorption and, hence, less extinction.This leads to a significantly reduced total heating as the simulations of the deviation heating rates in Fig. 15 demonstrate.For the reference cases (Fig. 10) the values of the total heating rates at 5 km altitude are +4.0/+5.0K d −1 over ocean/desert.When neglecting the large particles (Fig. 15), the heating rate is reduced to about +2.6/+3.2K d −1 .Hence, these particles contribute with 35/36% to the dust plume's total heating rate.Figure 15 also shows that the radiative heating is mainly determined by absorption properties of the large particles in the shortwave.
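A quick check of the quoted contributions, using the heating rates given above:

\frac{4.0 - 2.6}{4.0} = 0.35 \ \text{(ocean)}, \qquad \frac{5.0 - 3.2}{5.0} = 0.36 \ \text{(desert)},

i.e. the large particles account for 35% and 36% of the total heating rate at 5 km altitude, respectively.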
In order to calculate the contribution of the large particles to the total ARE at the TOA over the ocean/desert, consider first sensitivity study 1, representing the AREs of the entire dust in comparison to the dustless atmosphere, with absolute values |ΔF_t^dust(TOA)| of 53/107 W m−2 (Table 4). Starting from the reference case simulations, including all measured particles, one obtains ΔF_t^{no large}(TOA) of 10/71 W m−2 when neglecting the large particles (sensitivity study 6 in Table 4). Thus, the fraction |ΔF_t^{no large}(TOA)| / |ΔF_t^dust(TOA)| yields the contribution of these particles to the ARE of the entire measured dust over ocean/desert. Although their number concentration is up to three orders of magnitude smaller than that of the sub-micron particles (Fig. 2), they contribute at least ∼20% to the ARE at TOA. The large particles always cause a total warming of the EAS, since they absorb radiation which cannot be transmitted to the TOA. Within the atmosphere only, they always result in a warming due to ΔF_t(A)=−81/−143 W m−2 over ocean/desert (Table 4).

Table 5. Modal parameters (see Eq. 5) of the bimodal column volume size distribution for the large particle dust column from 0.9-5.5 km (see Fig. 3); fine mode (i=1), coarse mode (i=2).
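With the tabulated values quoted above, this fraction presumably evaluates as

\frac{\left|\Delta F_t^{\text{no large}}(\mathrm{TOA})\right|}{\left|\Delta F_t^{\text{dust}}(\mathrm{TOA})\right|} = \frac{10}{53} \approx 0.19 \ \text{(ocean)}, \qquad \frac{71}{107} \approx 0.66 \ \text{(desert)},

i.e. roughly 19% over the ocean and 66% over the desert, consistent with the statement that the large particles contribute at least ∼20% to the ARE at TOA.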
The role of large mineral dust particles
Ground and satellite based remote sensing instruments detect radiation transported through atmospheric columns, so that such measurements represent the vertically integrated state of the atmosphere containing gases, clouds and aerosols. These components differ in their individual volume fractions. From the ACE-2 measured size-bin-resolved number size distributions a column integrated bimodal volume size distribution V_c(ln D_p), having a fine (I_1: D_p<1.2 µm) and a coarse (I_2: D_p>1.2 µm) particle mode, was derived, with modal parameters (modal volume V_i, median diameter D̄_{p,i} and geometric standard deviation σ_i, determined separately over the size ranges I_1 and I_2) used to express V_c analytically as the sum of two log-normal modes,

V_c(\ln D_p) = \sum_{i=1}^{2} \frac{V_i}{\sqrt{2\pi}\,\ln\sigma_i} \exp\!\left[-\frac{(\ln D_p - \ln \bar{D}_{p,i})^2}{2\,\ln^2\sigma_i}\right].  (5)

The modal parameters for the large particle dust column from 0.9-5.5 km (Fig. 3) are listed in Table 5. To explore the role of the large particles for the atmospheric radiative effect (ARE) over ocean and desert, the data were used as follows. Consider homogeneous dust layers in this altitude range with realistically varied coarse mode median diameters D̄_{p,2}, obtained by shifting the value D̄_{p,2}=9.45 µm of the original ACE-2 dust plume (ACED) to diameters of 2.0, 4.0 and 8.0 µm (cases ACEDI to ACEDIII), assuming constant standard deviations σ_i and that the total particle volume concentration of the ACED remains fixed. The corresponding surface size distributions were adopted as constant at all model layers within the dust plumes to compute the optical properties. In addition, the scenario ACEDIV is defined by the ACEDII but with a scaled optical depth so that τ(0.55) equals the value for ACED (see Table 6).

Fig. 16. Surface size distributions for the ACED and ACEDI-III compared to the size dependent optical properties (black) for the wavelengths 0.55, 1.02 and 10.0 µm, which were calculated via Mie scattering theory using the mean complex refractive index shown in Fig. 4. The extinction efficiencies (top) are multiplied by the factor of 10, the single scattering albedos (center) and the asymmetry parameters (bottom) by 100.
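A minimal sketch of the bimodal log-normal construction in Eq. (5) and of the shift of the coarse-mode median diameter at fixed modal volumes (the modal parameters below are illustrative placeholders, not the Table 5 values) is given below; it also reproduces the qualitative statement that the total particle surface grows when the coarse median diameter is reduced at constant volume:

```python
# Minimal sketch: bimodal log-normal column volume size distribution and the
# effect of shifting the coarse-mode median diameter at fixed modal volumes.
# The modal parameters are illustrative placeholders, not the Table 5 values.
import numpy as np

def lognormal_volume(D, V, D_med, sigma):
    """dV/dlnD of one log-normal mode with modal volume V, median D_med, width sigma."""
    return V / (np.sqrt(2.0 * np.pi) * np.log(sigma)) * \
        np.exp(-0.5 * (np.log(D / D_med) / np.log(sigma)) ** 2)

D = np.logspace(-2, 2, 400)                      # diameter grid, um
V_fine, D_fine, s_fine = 5.0, 0.3, 1.8           # fine mode (placeholder values)
V_coarse, s_coarse = 50.0, 2.0                   # coarse mode volume and width

for D_coarse in (9.45, 8.0, 4.0, 2.0):           # ACED-like and shifted cases
    dVdlnD = (lognormal_volume(D, V_fine, D_fine, s_fine) +
              lognormal_volume(D, V_coarse, D_coarse, s_coarse))
    dSdlnD = 6.0 / D * dVdlnD                    # surface distribution: dS/dlnD = (6/D) dV/dlnD
    total_S = np.trapz(dSdlnD, np.log(D))
    print(f"coarse median {D_coarse:5.2f} um -> total surface {total_S:8.1f} (arb. units)")
```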
Figure 16 shows the surface size distributions for the dust cases as defined before, which are needed to compute the optical properties. Note that for a constant total volume concentration the total particle surface increases with decreasing D̄_{p,2}. The plots (top), (center) and (bottom) of Fig. 16 also depict the extinction efficiency, the single scattering albedo and the asymmetry parameter of a single spherical particle for three wavelengths (0.55, 1.02 and 10.0 µm) as a function of its diameter, calculated using Mie scattering theory and the mean complex refractive index of mineral dust (Fig. 4). This helps to interpret the optical impact of differently sized dust particles. From Fig. 16 (top) one can learn that the shortwave optical depth, i.e. the size integrated extinction efficiency, turns out to be larger the more D̄_{p,2} is shifted to smaller values. However, the optical depth in the terrestrial window region is predominantly affected by a larger D̄_{p,2}. Figure 16 (center) shows that the shortwave single scattering albedo increases if D̄_{p,2} is reduced, while the shortwave asymmetry parameter (bottom) decreases with reduced D̄_{p,2}. This situation is reversed in the longwave range for ωo. For example, the column single scattering albedo of the ACED was calculated to be about ωo(0.55)=0.78.
Table 6 presents the results of the ARE calculations at TOA for the dust scenarios as defined above.The cooling/warming over the ocean/desert is the weaker, the larger the D p,2 is.Therewith the role of the large particles is not degraded, since the total optical depth increases from case ACEDIII to ACEDI.If τ (0.55) of the ACED has been fixed as in the case of the ACEDIV, this scenario demonstrates the warming effect of large dust particles and underlines their key role, for both desert and ocean regions: The reflected/transmitted light at TOA/BOA increases for the ACE-DIV with a reduced D p,2 resulting in positive AREs and atmospheric radiation losses F t (A)>0.Moreover, ACEDII and ACEDIV indicate that, relative to the ACED, a fixed D p,2 but different total optical depths can lead to significant opposite AREs.On the other hand, one particular ARE may be caused by various dust aerosols with different contributions of the fine and coarse mode particles to the optical properties.This important fact has to be kept in mind when assessing radiative budget quantities from dust particle size distribution data or remote sensing dust size distribution information vice versa.Note that the complex refractive index of dust particles and their non-sphericity may depend on their size leading to even more complex situations.
Conclusions
Airborne measured number size distributions of mineral dust particles from the Sahara, observed on 8 July 1997 during ACE-2 over the Canary Islands, were presented and used for comprehensive radiative transfer simulations.The optical properties of the dust layer were calculated via Mie scattering theory.It turned out that the asymmetry parameter g and the single scattering albedo ω o vary strongly with altitude between g(0.55 µm)=0.65−0.81 and ω o (0.55)=0.75−0.96.Within the dust layer, containing large particles with diameters D p >4 µm, g(0.55)=0.81and ω o (0.55)=0.76 were found, which is in good agreement with other measurements performed during ACE-2 ( Öström and Noone, 2000).The radiative transfer simulations show a dominant influence of the observed large particles on the extinction.
For the radiative transfer simulations of atmospheric radiative effects (AREs) two typical scenarios were considered: The measured dust over an ocean and desert surface, which differ significantly in the spectral albedo and temperature of the surface.The ACE-2 Saharan dust causes a cooling over ocean but a warming over desert.In particular, the large particles always heat the local atmosphere due to absorption and contribute at least 20% to the ARE.The magnitude of their warming effect is significantly influenced by the particle's spectral complex refractive index.
The measured size distributions were vertically integrated to derive the modal parameters of an assumed bimodal column volume size distribution, leading to a coarse mode median diameter of 9.45 µm and a column single scattering albedo in the visible, ωo(0.55), of 0.78. The monomodal column volume size distribution calculated by Formenti et al. (2000) during ACE-2 shows a similar value of the median diameter with ∼5.5 µm for 8 July 1997 but a somewhat larger column single scattering albedo of ωo(0.5)=0.87.
The column size distribution was used for sensitivity studies varying the coarse mode median diameter while keeping constant either the total dust particle volume concentration or the mid-visible optical depth. The results confirm the warming effect of the large particles and show significantly changed radiative effects as a function of the variability of the coarse mode median diameter and the total dust load.
The main purpose of this work was to explore the radiative effects of a Saharan mineral dust layer and the role of large dust particles. Quantitative radiative transfer simulations demonstrate that the variabilities and uncertainties in the spectral complex refractive index of the particles have a significant influence on the AREs. Therefore, the aim of future studies has to be to measure the large particles when present and to quantify the refractive index of mineral dust samples as a function of wavelength and particle size. Beyond that, for both the interpretation of optical measurements and the radiative impacts of mineral dust, the particles' non-sphericity as a function of size is expected to play an important role. These are crucial factors for more reliable simulations of dust optical properties and the radiative transfer through a dusty atmosphere. Additionally, in situ measurements of optical properties, surface albedo as well as shortwave and longwave radiation, e.g. lidar observations, are required for a realistic closure. In particular, multi-wavelength Raman lidar observations have to be exploited to obtain the vertical extinction profile of the dust at several wavelengths for use in radiative transfer simulations. Such measurements have recently been carried out during the SAMUM experiment (http://samum.tropos.de/) in summer 2006 in Morocco.
Edited by: A. Hofzumahaus
Fig. 1. Flight track (red) of the Cessna Citation on 8 July 1997 during ACE-2 over the Canary Islands.

Table 1. Aerosol instrumentation (and its size ranges) onboard the Cessna Citation aircraft during ACE-2.

Fig. 2. From left to right: Measured profiles of the sub-micrometer and coarse mode number concentration, the temperature and relative humidity as well as the volume mixing ratio of ozone and water vapour.

Fig. 3. Observed number (solid) and volume (dotted) aerosol size distributions at eight isobaric flight levels with respective altitudes. The right profile shows the observed total number concentration (solid blue), extrapolated to the tropopause (>11.6 km) and the ground (<0.9 km). The dashed profiles in the right panel indicate the number concentrations of the standard maritime and free tropospheric aerosol after Shettle and Fenn (1979).

Fig. 4. Real (re) and imaginary (im) parts of the spectral complex refractive indices of mineral dust. The dotted lines show various curves from literature data, and the solid curves were calculated from the dotted lines using a moving average. The "dust-like" refractive index from Shettle and Fenn (1979), covering the entire spectrum of the radiative transfer model, is shown as a black dashed line.

Fig. 5. Spectral surface albedo of desert (sand) and ocean. For clarity, the values for ocean were multiplied by a factor of 5.

Fig. 7. Solid lines: Band averaged single scattering albedo (top) and asymmetry parameter (bottom) at flight level 3 within the dust plume, size integrated cumulatively up to the particle diameter D_p using the mean complex refractive index shown in Fig. 4. Dashed lines: Difference between the solid lines for the mean imaginary part and the calculated curves for the minimum imaginary part (lower envelope of the dotted red lines in Fig. 4).

Fig. 8. (top) Size-cumulative band averaged extinction coefficient (analogous to Fig. 7) normalised to the value of the largest diameter. (bottom) Size integrated extinction coefficient as a function of the altitude.

Fig. 9. Spectral dust extinction optical depth calculated from the measured size distributions for the mean complex refractive index in Fig. 4. (top) As a function of the altitude at the model levels 0−20 km. (bottom) As in the top plot but normalised to the value at surface level (0 km).

Fig. 11. Profiles of the deviation from the total radiative heating rates of the reference cases ocean (solid) as well as desert (dashed), h_t(z), if the measured dust is neglected completely (no dust) or a standard tropospheric aerosol of Shettle and Fenn (1979) is added with surface number concentrations of 10 000, 20 000 and 30 000 cm−3 (ShFe10000/20000/30000).

Fig. 15. Profiles of the deviation from the shortwave, longwave and total radiative heating rates of the reference cases ocean (solid) and desert (dashed), h_s,l,t(z), if the large dust particles with diameters larger than 4 µm are neglected.

Fig. 16. Surface concentration size distributions (coloured) considering the data of Table 5 for the ACED and ACEDI-III compared to the size dependent optical properties (black) for the wavelengths 0.55, 1.02 and 10.0 µm, which were calculated via Mie scattering theory using the mean complex refractive index shown in Fig. 4. The extinction efficiencies (top) are multiplied by a factor of 10, the single scattering albedos (center) and the asymmetry parameters (bottom) by 100.

Table 2. Widths of the spectral intervals of the model package in different spectral regions.

Table 4. Atmospheric radiative effects (AREs) ΔF = F − F_r for the sensitivity studies 1-6, where the F's are the spectrally integrated upward (+)/downward (−) flux densities at TOA/BOA (upper/lower values) in W m−2 after Eqs. (2) and (3). The reference cases (r) are defined as the clear-sky atmospheres over ocean (surface temperature T_s = 22 °C) and desert (35 °C) containing the measured dust.

Table 5. Modal parameters of a bimodal column volume size distribution derived from the ACE-2 measured number size distributions according to the Eqs.

Table 6. Atmospheric radiative effects (AREs) ΔF_t in W m−2 for the dust scenarios ACEDI-IV as defined in the main text compared to the ACED, varying the coarse mode median diameter D_p,2 (cases ACEDI-III) and the total optical depth (case ACEDIV). The ΔF's were calculated at TOA/BOA (upper/lower values) following Eqs. (2) and (3), where the reference case is always the ACED. All computations were performed over the ocean (T_s = 22 °C) and the desert (35 °C) using Mie scattering theory as well as the mean complex refractive index of Fig. 4.
Socket preservation with demineralized freeze-dried bone allograft and platelet-rich fibrin for implant site development: A randomized controlled trial
Aim: This in vivo study compared clinical, histological, and radiological differences in bone formation in human extraction sockets grafted with demineralized freeze-dried bone allograft (DFDBA) and platelet-rich fibrin (PRF), with nongrafted sockets, and bone–implant contact (BIC) at 3 and 6 months after implant placement. Settings and Design: Randomized controlled trial. Materials and Methods: The study comprised thirty posterior teeth sockets in either arch in patients ranging from 25 to 60 years. The patients were divided into two equal groups: Group I, a control group wherein no graft was placed and the extraction socket was left to heal normally, and Group II, a test group in which DFDBA and PRF were placed after extraction. 12–16 weeks after extraction, a trephine biopsy was done just prior to implant placement, followed by implant placement. Cone-beam computed tomography (CBCT) at 3 and 6 months after implant placement was done to assess BIC. Statistical Analysis Used: Descriptive and inferential statistical analysis was done. Parametric tests were used: the independent t-test for intergroup analysis and the dependent t-test for intragroup analysis. Results: Buccal bone levels were lower in the control group than in the test group at all intervals, though the differences were only moderately significant. Lingual bone levels were significantly reduced at all three intervals in the control group as compared to the test group. Ridge width in both groups reduced over a time span of 6–7 months without any significant difference. Better bone conversion was noted in the preserved sockets. The preserved sockets also showed better BIC 3 months after implant placement and loading. Conclusion: Indigenously developed DFDBA material shows promising results as an osteoinductive material.
recession, and papilla fill, improving chewing efficiency and enhancing psychological benefit for the patient. [2,3] Thus, placement of dental implants and ridge preservation through graft materials can successfully rehabilitate an edentulous site. [4][5][6] The present study was undertaken with the objectives of evaluating and comparing clinical and histological differences in bone formation in human sockets grafted with a mixture of demineralized freeze-dried bone allograft (DFDBA) and platelet-rich fibrin (PRF), with nongrafted sockets, and determining the bone-implant contact (BIC) ratio with cone-beam computed tomography (CBCT), at 3 months after implant placement and 3 months after loading.
Source of data
This study enrolled patients aged 25-60 years of both genders reporting to the Department of Prosthodontics, Department of Oral and Maxillofacial Surgery, and Department of Implantology, M S Ramaiah Dental College and Hospital, Bengaluru, for extraction and replacement of missing teeth with dental implants. The timeframe of the study ranged from December 2014 to September 2016.
Sample size and study design
In a study involving histologic analysis, it was observed that sites that received ridge preservation using freeze-dried bone allograft and a collagen membrane showed about 65 ± 10% bone versus 54 ± 12% in sites that received extraction alone. [7] Extrapolating these results to the present study, to obtain 80% power at a 95% confidence level, a sample size of 30 was chosen (an illustrative sketch of such a calculation is given after the group definitions below).
• Group I: Control group - no graft was placed, and the extraction socket was left to heal normally
• Group II: Test group, i.e., the socket was preserved with DFDBA and PRF placed after extraction.
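As a hedged illustration of the kind of two-sample power calculation described above (not the authors' actual computation, whose exact inputs and assumptions may differ), the cited bone percentages can be used as follows:

```python
import math
from statsmodels.stats.power import TTestIndPower

# Published figures used only as illustrative inputs
mean_test, sd_test = 65.0, 10.0      # ridge preservation
mean_ctrl, sd_ctrl = 54.0, 12.0      # extraction alone

pooled_sd = math.sqrt((sd_test**2 + sd_ctrl**2) / 2.0)
effect_size = (mean_test - mean_ctrl) / pooled_sd        # Cohen's d, roughly 1.0

# Solve for the sample size per group at alpha = 0.05 and 80% power
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.80)
print(f"d = {effect_size:.2f}, about {math.ceil(n_per_group)} sockets per group")
```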
The study design was a randomized controlled clinical trial [ Figure 1]. The inclusion, exclusion, and exit criteria were as follows.
Inclusion criteria
• Patients without any systemic disease, with a tooth indicated for extraction (e.g., fractured tooth, nonvital tooth without the possibility of endodontic treatment, endodontic treatment failure)
• Patients' choice of replacement with an implant-supported fixed prosthesis
• Extraction socket with four intact walls immediately after the extraction of the tooth.
Exclusion criteria
• Surgical complications during extraction, including loss of the buccal or lingual/palatal cortical plate
• Development of oral or systemic disease
• Presence of any chronic systemic disease such as osteoporosis or diabetes mellitus
• Heavy smokers (over ten cigarettes/day)
• Chronic treatment with any medication known to affect bone turnover, such as heparin, cyclosporine, bisphosphonates, and chemotherapeutic drugs (e.g., methotrexate).
Exit criteria
Voluntary withdrawal and development of oral or systemic disease.
The objectives of the study were to assess the following:
1. Clinical ridge height and width after extraction with and without socket preservation
2. Bone formation after extraction with and without socket preservation by means of histological analysis
3. Bone–implant contact at 3 months after implant placement and 3 months after implant loading in nongrafted and grafted sockets.
Patients
In this prospective clinical study, patients with thirty sites reported to the Department of Prosthodontics for replacement of missing posterior teeth (maxillary/mandibular). All the patients were evaluated as per standard norms and were selected according to the inclusion and exclusion criteria. The participants had the ability to understand the proposed treatment and its prognosis and provide informed consent, in English, without the aid of ad hoc translation. These patients were randomly allotted to two groups using the concealed envelope allotment method, where they were blinded to the outcome of the envelope picked. The study was presented to the Ethical Committee of M S Ramaiah Dental College and Hospital (Ethical committee number: MSRDC/EC/2014-15/05).
Presurgical procedure
All the patients were subjected to routine blood investigations -hemoglobin%, Bleeding Time/Clotting Time, and glycated hemoglobin. A prophylactic regimen included amoxicillin (2 g) 1 h preoperatively and 500 mg three times daily for 5 days postoperatively. None of the patients reported or demonstrated any allergy to amoxicillin. Prior to extraction, impressions and diagnostic casts were made. On the study cast, modeling wax of 0.5 cm × 0.5 cm was added in two layers on the crown of the tooth indicated for extraction to stabilize a small piece of wire on it to define a standard reference point. An auto-polymerizing resin material was used to fabricate the template/stent on this study model including at least one tooth anterior or posterior to the indicated tooth. From this metal wire, ridge height measurements were made clinically, till the cervical level of the tooth (0 mm). Two more measurements were marked cervical to the first marking, at 2 mm and 6 mm distance, with the help of a Williams probe. After the height measurements were accomplished, the width was measured at 2 mm and 6 mm levels with a bone gauge.
Platelet-rich fibrin preparation (for Group 2) - procedure

5 ml of whole venous blood was collected from each patient at the time of implant placement, in sterile vacutainer tubes of 6-ml capacity without anticoagulant. The vacutainer tubes were then placed in a centrifugal machine REMI R-4C (REMI Laboratory Instruments, Mumbai, India) at 3000 revolutions per minute (rpm) for 10 min at room temperature, after which the blood settled into the following layers: a red lower fraction containing red blood cells, an upper straw-colored acellular plasma, and a middle fraction containing the fibrin clot. The upper straw-colored layer was removed, and the middle fraction, which is the PRF, was collected up to 2 mm below the lower dividing line. [8]

Surgical extraction procedure

All procedures were performed by a single experienced clinician. A preprocedural rinse was performed (0.12% chlorhexidine gluconate for 1 min), and the lower third of the face was scrubbed with povidone iodine. Local anesthetic (Lox 2% Adr, 1:200,000) was administered, and the tooth was extracted with minimal trauma using periotomes. Flap elevation was done to ensure primary closure. [9] The use of extraction forceps was limited to a minimum, to preserve the socket walls. The integrity of the remaining socket walls was assessed. The sockets were thoroughly debrided with a socket curette and irrigated well with saline. In Group 1, the sockets were closed with primary closure with minimal tension (with 3-0 nonresorbable silk sutures). In Group 2, the DFDBA graft material (DFDBA 500-1000 µ particulate, gamma irradiated, lyophilized graft material - 0.5 cc vial for each test group socket, obtained from Tata Memorial Hospital, Mumbai) mixed with PRF was packed into the socket (followed by suturing identical to that of the control group). The patients were recalled after a week for assessment and suture removal.
Preimplant procedure
Following the delayed two-stage implant placement protocol, 12-16 weeks later, socket fill was assessed with an intraoral periapical radiograph. At the same time, ridge height and width measurements were calculated again, using the stent, as done before the implant placement was begun.
Trephine biopsy for histological assessment
All surgical placements of implants were done by a single experienced operator. After administration of local anesthetic, full-thickness mucoperiosteal flaps were raised and initial pilot drill was replaced by a trephine of same diameter to remove the core from the required site. A 2 mm × 6 mm trephine-latch type, with an internal diameter of 2 mm, attached to a contra-angle micro-motor hand piece was positioned at the center of every socket with the use of a surgical guide, with copious chilled saline irrigation followed by a periapical radiograph to ensure correct orientation. The trephine core was gently teased out of the drill directly into 10% neutral buffered formalin, and the containers were appropriately labeled. The samples were processed for histologic evaluation.
Implant placement procedure
After the trephine biopsy, the osteotomy site was prepared with sequential drilling to receive Hi-Tech implant of appropriate dimensions, as determined from the preoperative CBCT. Following placement of implant and cover screw, flaps were closed achieving primary closure. Antibiotics and appropriate analgesics were prescribed. Postoperative instructions were given. Postoperative healing was uneventful. After a week, the sutures were removed.
Histological and histomorphological analysis
This is the gold standard to determine bone cells and their activities. [10] The specimens were labeled and submitted for histopathological evaluation. To avoid bias, the entire procedure was blinded and reported by more than one pathologist. The trephine core biopsies were first decalcified using 5% nitric acid solution. After ascertaining that the cores were soft, they were taken for tissue processing. After processing, the cores were embedded longitudinally in paraffin blocks. Subsequently, 4 µm sections were cut from the paraffin blocks using an automatic Leica tissue microtome and stained with H and E stain. Thirty histologic sections were examined under a binocular Olympus microscope (Olympus Corp., Tokyo, Japan) and photo-micrographed using the Jenoptik ProgRes camera at ×4 magnification. The photomicrographs were then uploaded to Motic image analyzer software version 2.0 (Motic China Group Co., Ltd., Shenzhen, China) to calculate the percentage of bone, trabecular space, and graft. The obtained percentages were tabulated, and statistical analysis was performed using the paired t-test within the groups and the unpaired t-test between the groups.
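The area-fraction arithmetic behind the reported percentages can be illustrated with a short sketch; this is not the Motic workflow itself, and the label codes for bone, trabecular space and residual graft are hypothetical:

```python
import numpy as np

def tissue_percentages(label_mask: np.ndarray) -> dict:
    """Area percentages of bone (1), trabecular space (2) and residual graft (3)
    from a labelled 2-D mask of a histologic section; background pixels (0)
    outside the trephine core are ignored."""
    names = {1: "bone", 2: "trabecular_space", 3: "residual_graft"}
    tissue = label_mask[label_mask > 0]
    total = tissue.size
    return {name: 100.0 * np.count_nonzero(tissue == code) / total
            for code, name in names.items()}

# Toy 4x4 mask only to show the arithmetic
mask = np.array([[1, 1, 2, 0],
                 [1, 3, 2, 2],
                 [1, 1, 2, 0],
                 [3, 1, 1, 0]])
print(tissue_percentages(mask))
```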
Histologically, the grafts appeared as homogenous masses with irregular borders, devoid of osteocytes and no evidence of osteoblastic/osteoclastic activity [ Figure 2].
Radiological assessment
Three-dimensional (3D)-imaging techniques have the advantages of negligible magnification, relatively high contrast images, various views, and reduced dose of radiation to the patient as compared to other imaging modalities. [11] Therefore, CBCT was used to conduct this study.
All implants were clinically stable, without mobility or any signs or symptoms of inflammation, at the time of CBCT. A single experienced oral and maxillofacial radiologist assessed the BIC using the following formula [12] on the buccal, lingual, mesial, and distal surfaces of the implant:
Rate of BIC (%) = (Length of the implant covered by bone / Actual length of the implant) × 100

The average of the mesial and distal BIC measurements was taken, while the palatal and buccal BIC measurements were calculated individually [Figure 3].
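As a small illustration of how the formula above is applied per implant surface (the measurement values here are hypothetical, not study data):

```python
def bic_percent(covered_length_mm: float, implant_length_mm: float) -> float:
    """Rate of BIC (%) = length of implant covered by bone / actual implant length x 100."""
    return 100.0 * covered_length_mm / implant_length_mm

# Hypothetical measurements (mm) on one implant
mesial, distal = bic_percent(8.2, 10.0), bic_percent(7.8, 10.0)
proximal_average = (mesial + distal) / 2     # mesial and distal BIC are averaged
buccal = bic_percent(7.5, 10.0)              # buccal and lingual/palatal kept separate
lingual = bic_percent(7.9, 10.0)
print(proximal_average, buccal, lingual)
```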
Statistical analysis
This was performed using a two-tailed independent t-test to find the significance of study parameters on a continuous scale between the two groups (intergroup analysis) for metric parameters. A two-tailed dependent t-test was used to find the significance of study parameters on a continuous scale within each group.
Statistical methods
Descriptive and inferential statistical analysis was carried out in the present study. Results on continuous measurements are presented as mean ± standard deviation (minimum–maximum), and results on categorical measurements are presented as number (%). Significance was assessed at the 5% level. The following assumptions about the data were made:
1. Dependent variables were normally distributed
2. Samples drawn from the population were random.
The independent t-test (two-tailed) was used to find the significance of the study parameters on a continuous scale between the two groups (intergroup analysis). The dependent t-test (two-tailed) was used to find the significance of study parameters on a continuous scale within each group.
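For readers without SPSS, the same two tests can be reproduced, for example, in Python with SciPy; the numbers below are placeholder values, not study data:

```python
from scipy import stats

control = [9.4, 8.9, 10.1, 9.8, 9.2]   # e.g. a ridge-height parameter, Group I
test    = [7.7, 8.0, 7.4, 8.2, 7.6]    # the same parameter, Group II

# Intergroup comparison: two-tailed independent t-test
res_ind = stats.ttest_ind(control, test)

# Intragroup comparison: two-tailed dependent (paired) t-test, e.g. baseline vs. follow-up
baseline  = [9.4, 8.9, 10.1, 9.8, 9.2]
follow_up = [9.9, 9.3, 10.6, 10.4, 9.5]
res_dep = stats.ttest_rel(baseline, follow_up)

print(f"independent: t={res_ind.statistic:.2f}, p={res_ind.pvalue:.3f}")
print(f"paired:      t={res_dep.statistic:.2f}, p={res_dep.pvalue:.3f}")
```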
Statistical software
The statistical software SPSS version 15.0 (SPSS, Chicago, USA) was used for analysis of the data, and Microsoft Word and Excel were used to generate graphs, tables, etc. [Table 3]

c. The Ref 1 distance before implant placement on the buccal aspect increased with moderate significance (P = 0.023) in the control group (9.40 ± 2.13) as compared to the test group (7.67 ± 1.80), and with moderate significance (P = 0.031) in the control group (9.73 ± 2.58) as compared to the test group (8.0 ± 1.46) on the lingual aspect as well, showing more ridge height loss on the buccal and lingual aspects in the control group [Table 4].
The study comprised thirty sockets
In addition, ridge width at Ref 3 showed no statistically significant difference before extraction (P = 0.535), before implant placement (P = 0.346), and 3 months post implant placement (P = 0.366) [Table 4]. In both groups, ridge width reduced over a time span of 6-7 months but did not show any significant difference between the groups [Table 4].
3. Histological evaluation - As shown in Table 5, the percentage of bone formed and trabecular space showed an insignificant difference between the two groups. Qualitative statistical analysis was done using the paired Student's t-test within groups and the unpaired Student's t-test between groups. The intraobserver reliability was shown to be good, as the paired Student's t-test showed no significant difference.
4. Radiological evaluation - A quantitative analysis was carried out as described below.
a. Mesial + distal BIC percentage showed no significant difference between the control and the test groups at 3 months and postloading, that is, at 6 months. The intraobserver reliability was good [Table 6]
b. As shown in Table 7, buccal BIC percentage showed no significant difference between the control and the test groups at 3 months and at 6 months, or within the groups at the same intervals. The intraobserver reliability was good
c. As shown in Table 8, palatal/lingual BIC percentage showed no significant difference between the control and the test groups at 3 months and at 6 months, or within the groups at the same intervals. The intraobserver reliability was good.
DISCUSSION
The cascading consequences of edentulism can be minimized with high-quality oral health care being given right from the time the first tooth is extracted. Implant-supported fixed prosthesis has greatly altered the prognosis of oral rehabilitation with an artificial prosthesis. Literature points to several studies that have emphasized on the preservation of bone through the use of various grafts, fillers, and scaffolds. [4][5][6] When these socket preservation measures are carried out at the time of extraction, it is hypothesized that primary implant stability would be improved due to increased BIC, with all other factors remaining constant. This is best assessed by means of a 3D imaging technique.
The present study was conducted to determine if any significant difference in terms of clinical, radiological, or histological parameters exists, when an extraction socket is grafted with DFDBA and PRF as compared to the ungrafted socket.
The results of this study with respect to the ridge height showed more ridge height loss before extraction on the lingual aspect in the control group (8.07 ± 2.46) as compared to the test group (6.40 ± 1.18) and more ridge height loss before implant placement and 3 months post implant placement on the buccal and lingual aspects in the control group as compared to the test.
Our findings were similar to those reported by Neiva et al. [13] and at variance with the study conducted by Iasella et al., [7] who stated that preserved sites gained height on the buccal aspect. In our study, both groups lost bone height and width, with the control sites losing more than the test sites. In addition, in both the groups, ridge width reduced over a time span of 6-7 months without any significant difference between the groups. Iasella et al. [7] too found width reduction in both test and control groups, with lower loss of ridge width in the test group as compared to the control group. There are innumerable factors that affect ridge width and height dimensions clinically, such as age, [14] gender, [14] compression of the sockets, elevation of the flap, [15] particle size of the graft material used, and site of extraction.
Another factor that may have an effect on the bone remodeling process after preservation is flap elevation.
However, according to Wood et al., [15] the magnitude of bone resorption caused by flap reflection alone is about 1 mm, because flap reflection results in poor blood supply, more bone loss, delayed wound healing, and a compromised soft-tissue appearance.
Too small a particle size of the graft material will cause faster resorption and too big will not contribute in socket healing. A study on primates indicated that small-particle bone grafts (100-300 µm) present a more favorable osteogenic response than large-particle bone grafts (1000-2000 µm). Our study used a particle size of 500-1000 µm. [16] The location of the tooth is not a consideration if the socket has four surrounding walls. [17] All our patients, whether in test or control group, had sockets with four surrounding walls.
Froum et al. [17] opined that the size and type of bone defects following tooth extraction often present similar healing environments when four socket walls remain; therefore, it does not matter which tooth is used for comparison as long as the socket has four surrounding walls.
Histologically, this study exhibited an increased amount of bone formed in the test group than the control group after a 12-16-week healing period, without any significance. Only 1.5% residual graft particles were seen in the test group, showing significant conversion of this indigenously developed graft material into bone. This was in line with studies by Beck and Mealey [18] and Froum et al. [17] It is our deduction that the DFDBA in conjunction with PRF acted as an osteoconductive material, resulting in greater and faster bone conversion in 12-16 weeks. PRF is a known osteoconductive agent providing growth factors, thereby enhancing the role of graft material. [8] This study on test group and control group showed over 78% BIC, which is higher than 58%-60% of BIC present around successful dental implants according to Lian et al. [19] According to Naitoh et al., [12] the mean rate of BIC on the labial side was 78.3% with grafts and 65.3% without bone grafts. Our study did not substantiate this where BIC was constant irrespective of the use of grafts. The overall results of this study disprove the null hypothesis that grafted and nongrafted sockets will show no difference in bone loss or BIC with equal conversion to bone. It also proves the alternate hypothesis correct that sockets with grafts will show less bone loss, better BIC, and better conversion to bone as compared to nongrafted sockets.
Limitations of the study
• The sockets studied were from different regions of the arches
• The age of the donor, as well as the time lag between the death of the donor and harvesting of the material, may determine the amount of bone morphogenetic protein (BMP) that is activated; this was not considered in this study
• More standardization, in terms of each arch being studied separately with a split-mouth design, would enhance the findings of this study.
CONCLUSION
Grafting an extraction socket with an osteoinductive material appears beneficial for the patient in terms of quality and quantity of bone formed and could improve the prognosis of subsequent implantation. Indigenously developed DFDBA material combined with PRF showed promising results as an osteoinductive material.
Can Lactoferrin, a Natural Mammalian Milk Protein, Assist in the Battle against COVID-19?
Notwithstanding mass vaccination against specific SARS-CoV-2 variants, there is still a demand for complementary nutritional intervention strategies to fight COVID-19. The bovine milk protein lactoferrin (LF) has attracted interest of nutraceutical, food and dairy industries for its numerous properties—ranging from anti-viral and anti-microbial to immunological—making it a potential functional ingredient in a wide variety of food applications to maintain health. Importantly, bovine LF was found to exert anti-viral activities against several types of viruses, including certain SARS-CoV-2 variants. LF’s potential effect on COVID-19 patients has seen a rapid increase of in vitro and in vivo studies published, resulting in a model on how LF might play a role during different phases of SARS-CoV-2 infection. Aim of this narrative review is two-fold: (1) to highlight the most relevant findings concerning LF’s anti-viral, anti-microbial, iron-binding, immunomodulatory, microbiota-modulatory and intestinal barrier properties that support health of the two most affected organs in COVID-19 patients (lungs and gut), and (2) to explore the possible underlying mechanisms governing its mode of action. Thanks to its potential effects on health, bovine LF can be considered a good candidate for nutritional interventions counteracting SARS-CoV-2 infection and related COVID-19 pathogenesis.
Introduction
The human coronavirus SARS-CoV-2, causing coronavirus disease 2019 (COVID- 19), has rapidly spread around the globe since its discovery in December 2019. In March 2020, the World Health Organization (WHO) has declared the novel coronavirus outbreak a global pandemic. Up until today, SARS-CoV-2 has killed more than 6.6 million people and infected at least 642 million individuals worldwide while the numbers are still increasing [1]. In most countries, the COVID-19 pandemic has overloaded healthcare services and has led to huge economic losses. Clinical symptoms of COVID-19 vary from asymptomatic infection to severe lung failure requiring hospitalization and mechanical ventilation [2]. SARS-CoV-2 transmission typically occurs through lung droplets and the incubation period is about 6.6 days [3]. The most common symptoms of COVID-19 include dry cough, fever, shortness of breath, muscle pain and fatigue [4,5]. About 30% of patients have long-term symptoms after a SARS-CoV-2 infection persisting longer than a month after COVID-19 [6]. This is called 'Long COVID'. Around 18% of COVID-19 patients suffer from gut-related complaints like diarrhea, nausea/vomiting, or abdominal pain [7].
Since early 2020, nutrients and bioactive compounds have been studied for potential roles in prevention and/or complementary treatment of disease symptoms [8][9][10]. Two age-related processes are particularly relevant here: immunosenescence and inflammaging. Immunosenescence is an ongoing accumulation of senescent immune cells slowing down the clearance of pathogens, whereas inflammaging is a low-grade inflammation occurring during aging that causes immune dysfunction [41]. Recent data from COVID-19 studies indicate that immunosenescence and inflammaging are key drivers of the high mortality rates in seniors [42].
A range of harmful stimuli over a lifetime can lead to inflammaging, including viruses, pathogenic microorganisms, a poor diet, mental stress, toxins, specific drugs or a sedentary lifestyle [43]. People have to constantly adapt to these harmful stimuli, and with age people tend to have a more sedentary lifestyle. As a result, the immune balance may shift to a more pro-inflammatory state, age-linked disorders may emerge, and the ability to respond effectively to viruses and other pathogens declines. Adding a SARS-CoV-2 infection to the inflammaging process is like pouring fuel on a burning fire, resulting in hyperinflammation, a cytokine storm, and severe disease (Figure 1).
Figure 1. With age, the human body becomes less resilient and more vulnerable to (severe) disease due to inflammaging and gut dysbiosis. In healthy adults the immune system is fit, with the pro- and anti-inflammatory processes in balance, and the beneficial and pathogenic microorganisms in balance. The body is resilient to disease. During aging, the body has to respond to all kinds of harmful stimuli including viruses, pathogenic bacteria, a poor diet, toxins, and drugs, leading to inflammaging and gut dysbiosis; shifting the balance towards higher vulnerability to diseases like COVID-19, heart disease, diabetes, cognitive decline and frailty.
Performance of the immune system is strongly linked to performance of the gut microbiome and the other way around [44], because the gut houses over 70% of the body's immune cells. With age, the number of beneficial microbes in the gut declines, the microbiome is less diverse and disbalanced due to external triggers like viruses, a poor diet, pathogenic microorganisms and drugs (e.g., antibiotics) [45][46][47]. This dysbiosis influences the native immune response and may lead to inflammaging, an inability to protect against pathogens and digest food properly, and to unhealthy aging [45]. Therefore, one can imagine that a gut dysbiosis might increase the possibility of a SARS-CoV-2 infection. In COVID-19 patients, the composition of the gut microbiome was shown to be different [48,49]. The number of beneficial bacteria like Faecalibacterium prausnitzii was correlating negatively with COVID-19. Furthermore, gut dysbiosis alongside gut inflammation raises levels of ACE2, a cell surface receptor targeted by SARS-CoV-2 increasing the risk of infection [35,36]. Figure 1 graphically summarizes the proposed aging processes that lead to vulnerability to (severe) diseases like COVID-19.
In the battle against COVID-19, lifestyle changes can counteract inflammaging and reduce both chronic inflammation and elevated cytokine levels. Physical activity especially in combination with dietary modifications reduces inflammatory markers rapidly [50]. In this context, specific nutrients have gained plenty of attention during this pandemic as well-tolerated and cheap alternatives to drugs in order to prevent or fight disease without inducing any adverse effects. Especially, LF, a mammalian milk protein, could be a powerful tool to support health by counteracting infection, balancing the immune system and stimulating a healthy microbiota and epithelial barrier function.
Lactoferrin's Protective Effects against SARS-CoV-2 Infection
LF has been intensely studied over the last decades and is viewed as a key protein to fight infection, aid the immune system, and circumvent iron deficiencies [20,21,28,[51][52][53][54][55][56][57][58][59][60][61]. A PubMed search (accessed 1 August 2022) revealed over 120 published clinical trial papers of which the majority (>105 trials) focused on anti-viral, immune or iron-related physiological effects of LF. However, LF also has been shown to fulfil anti-microbiological, microbiota-and intestinal barrier-related functions ( Figure 2). In this narrative review the focus is on these six physiological effects of LF that especially support health of the 2 most affected organs in COVID-19 patients, the respiratory and the gastrointestinal tract. Since 2020, LF has been extensively studied in relation to SARS-CoV-2 infections leading to 43 original, mostly preclinical and some clinical study publications. The physiological effects of LF intimately depend on the tertiary structure of LF. LF is mainly extracted from bovine milk and used in numerous commercial products such as infant formula, nutritional supplements, and functional foods. LF is sensitive to denaturation induced by temperature and other physicochemical stresses like high pressure and drying. Recently several reviews have appeared highlighting that extraction and powder formation processes of LF-containing products have to be optimized, like avoiding/minimizing heat treatment and drying, to minimize its undesired denaturation [62,63].
Iron-Binding and Absorption
Iron deficiency anemia affects nearly 20% of the world population [64]. It may also play a major role in multiple organ dysfunction syndrome in COVID-19 as the virus damages hemoglobin thereby releasing iron into the bloodstream causing oxidative stress and cell damage [39,65]. A meta-analysis of 189 studies involving more than 57,000 COVID-19 patients across all ages, showed an increased amount of ferritin, an iron-binding protein, among COVID-19 patients with severe symptoms compared to moderate cases, and in non-survivors versus survivors [66]. Ferritin is known to play a critical role in inflammation by contributing to the development of a cytokine storm [66]. Iron deficiency, and elevations in serum ferritin can persist for around 2 months after the onset of COVID-19 in some patients [67]. Viral replication depends on host cell iron enzymes, some of which are involved in transcription, viral mRNA translation, and viral assembly [68]. SARS-CoV-2 infection induces a pro-inflammatory cytokine storm, including IL-6 [69], which in turn dysregulates iron homeostasis leading to an intra-cellular iron overload [70]. Such an iron overload increases viral replication, thereby enhancing the seriousness of the infection [71]. Collectively, these data highlight the potential involvement of iron and related proteins in COVID-19 pathology. LF is an iron-binding protein belonging to the transferrin family of proteins. One of its main physiological effects is related to iron absorption [72]. Usually, LF is only partially saturated and has an iron saturation of about 15-20%. Apo-LF is iron-depleted LF (<5% iron saturation) whereas saturated LF is known as holo-LF [39]. Thanks to its ability to bind iron, it plays an antioxidant role within the body. In cases of infection and excessive iron in the body, there may be overproduction of reactive oxygen species (ROS), leading to oxidative stress and causing significant cell damage. LF disrupts the production and elimination of these ROS by preventing oxygen and iron from binding [73][74][75].
LF-bound iron present in milk and other dairy products becomes available to the human body through the intestinal uptake of LF via the LF receptor, making LF a nutritional iron source with similar or even better efficiency than inorganic iron salts. Via this route, LF can confer protection against anemia, especially in populations at risk of iron deficiency like COVID-19 patients [76][77][78][79]. In addition to the well-characterized iron-binding activities, LF has been shown to modulate expression of major iron proteins, such as ferritin and ferroportin, in preclinical studies as well as in human intervention trials [70,80]. In COVID-19 patients, early oral administration of LF decreases serum ferritin levels [69]. As iron and iron-related proteins play a role in COVID-19 pathogenesis, LF might be of interest as a nutraceutical agent. Furthermore, apo-LF can avidly bind iron, making it unavailable to SARS-CoV-2, which requires iron for viral replication and for its functions [81]. Therefore, iron chelation therapy using LF in COVID-19 has been suggested as an additional approach to arrest viral replication, with the prerequisite of an adequate understanding of the patient's iron status, including iron, ferritin, and hemoglobin levels (Table 1) [81,82].

Table 1. Physiological effects as part of COVID-19 pathogenesis (left column) and counteracting iron-related effects of LF clinically demonstrated in COVID-19 patients or other populations (right column).
COVID-19 Pathogenesis | LF's Iron-Related Effect
Iron deficiency risk increases | LF increases iron absorption, thereby lowering the risk of iron deficiency (2)
Ferritin and IL-6 levels increase | LF decreases ferritin and IL-6 levels (1)
Intracellular iron overload increases viral replication | LF decreases the intracellular iron level (2), resulting in reduced viral replication (1)
Virus attacks hemoglobin, leading to iron and oxygen release and thereby inducing oxidative stress | LF chelates iron, thereby reducing oxidative stress (2)
(1) Effect detected in COVID-19 patients; (2) effect detected in other populations.
Anti-Viral Activity
Although LF has several biological benefits, the host-protective effects against pathogens including viruses, bacteria, and fungi are regarded as one of its most beneficial [83]. Several reviews have highlighted the in vitro anti-viral effects of LF against pathogens that cause common infections such as influenza, the common cold, summer cold, gastroenteritis, polio, and herpes. In these cases, LF inhibits mainly viral attachment or entry into the target cells [29,53,83,84]. Lately, the number of in vivo studies indicating the protective effects of LF against common viral infections including SARS-CoV-2 has also increased [29,85]. A recent meta-analysis of 6 LF intervention studies in infants (4 studies) and adults (2 studies) reported a significant risk reduction of developing respiratory tract infections when using LF (dosage range in adults from 200-2000 mg/d) [86]. Four independent randomized trials in infants from China, Japan and the US demonstrated that LF in infant formula is a promising intervention to prevent acute respiratory tract illness or infection [87][88][89][90]. In adults, LF was studied in relation to direct measures of viral infectious diseases such as a common cold and a summer cold. In the study of Vitetta et al., a daily combination of 400 mg LF and 200 mg Ig-enriched whey protein was given to individuals that frequently suffer from cold episodes. This 90-day intervention significantly reduced the number of cold episodes [91]. Although results are promising, it is not yet clear if LF alone would have similar effects on the common cold. However, another study investigated the effect of LF alone on infectious disease in the summer season in Japan. To investigate this, doses of 200 mg and 600 mg of LF and a placebo were administered to healthy Japanese adults for 12 weeks in a double-blind study [92]. Although the prevalence of infectious diseases, including summer colds, was not significantly different, the duration of total infectious diseases and in particular that of the summer cold was significantly shorter than in the placebo group. In summary, in adults LF in combination with Ig-enriched whey protein may affect the number of common colds, whereas LF alone (200 mg) may affect the duration of infectious diseases (especially summer colds) in a dose-dependent manner.
In addition to viruses causing respiratory tract infection, LF targets other viruses causing gastroenteritis like norovirus, rotavirus and enterovirus [83,85]. Norovirus is an important pathogen that causes a majority of gastroenteritis outbreaks worldwide across all ages. So far, three independent studies were conducted to analyze the effects of LF against norovirus. Surprisingly, these studies were all done in children. The oral administration of LF at a dosage of 0.5 g/day for 6 months in norovirus-infected young children (12-18 months) has led to a decrease in duration and severity of gastroenteritis-related symptoms compared to placebo, but no reduction in diarrhea incidence is reported [93]. In another study, the daily oral administration of LF to children reduced the incidence of noroviral gastroenteritis [94]. Lastly, a survey in nursery school children consuming 100 mg LF-containing products including yogurt and dairy drinks indicated a lower incidence of norovirus-like gastroenteritis in children who regularly consumed LF products compared to the control group [95]. Because to date there is no adequate treatment for noroviral gastroenteritis, LF seems a promising candidate to help prevent infection, and further studies, especially also in adults, are warranted to establish more reliable evidence.
Rotavirus and enterovirus infections have been analyzed in three studies comparing the effects of LF administration at a daily dosage between 70 and 100 mg versus placebo [83,85]. In a twelve-week study, 100 mg/day LF reduced the severity of rotaviral gastroenteritis although there was no significant benefit in reducing infection incidence. The addition of recombinant human LF and lysozyme (both derived from a recombinant rice [96]) to a rice-based oral rehydration solution reduced the duration of acute diarrhea in children whose rotavirus was identified as a pathogen [97]. In young children receiving 70 mg/day LF over 1 year in a day care setting, no differences in the prevention of enterovirus or rotavirus infection or serum IFN-γ and IL-10 were observed [52]. In children receiving 100 mg/day in a day care setting, absences due to vomiting were reduced [52]. In summary, LF doses of minimally 100 mg per day seem effective in reducing gastroenteritis-related symptoms as a result of viral infections.
Based on the abovementioned extensive antivirus properties of LF against a wide range of common viruses, it can be hypothesized that LF may be used as a potential nutraceutical/food ingredient for the prevention and/or adjunct treatment of COVID-19. Recent evidence indeed suggests that LF may have such potential by inhibiting virus attachment, internalization and replication (Table 2) [29,53]. In addition to ACE2, heparan sulfate proteoglycans (HSPGs) can be recognized by SARS-CoV-2 as a receptor for cell attachment [98]. Binding of the viral spike proteins to HSPGs leads to virus enrichment on the cell surface facilitating the subsequent binding with ACE2. In vitro, LF can prevent SARS-CoV-2 infections by blocking the interaction between the virus and HSPG receptors on the cell surface in an ACE2-independent fashion [98,99]. Via this mechanism LF significantly interferes with viral anchoring, preventing high viral concentration on the cell surface, as well as the contact with the specific entry receptor, namely ACE2, which would result in full infection.
After the initial contact with the host cells via HSPG, SARS-CoV-2 then rolls onto the cell membrane and scans for ACE2, its specific receptor, to bind and lead subsequent cell entry. ACE2 is well expressed in the epithelial cells of the nose, providing an important point of entry for SARS-CoV-2, whereas ACE2 expression in the lower lung is restricted to alveolar epithelial type II cells. This difference in ACE2 expression level in the respiratory tract is mirrored by the SARS-CoV-2 infection gradient, with nasal epithelial cells being primary targets for SARS-CoV-2 replication in the early stage of infection [100]. In addition to the nasal cavity, also the mouth may be an important initial entry point as the spike protein of SARS-CoV-2 binds to ACE2 in salivary glands and SARS-CoV-2 has been consistently detected in the infected patient's saliva [101,102]. ACE2 is also expressed on other cell types, such as in the esophagus, ileum, myocardium, kidney and urothelial cells at least in part explaining why the infection often is not limited to the lungs [30]. Tissue tropism is dependent on the SARS-CoV-2 variant as the SARS-CoV-2 Omicron variant has a tissue tropism towards the upper respiratory tract [103].
LF has been shown to inhibit virus infection of a range of different SARS-CoV-2 variants including the Delta variant, one of the most virulent and potentially more deadly virus strains [104,105]. Bovine LF seems more potent than human LF [104,105]. The effects seem to correlate with LF itself or its proteolytic product lactoferricin B (residues 17-41), but not with other dairy proteins in whey, because Wotring et al. did not observe in vitro efficacy for the latter samples against SARS-CoV-2 [104]. When there was efficacy for a sample, it was correlated with the fraction of LF, suggesting that the anti-viral activity was from LF alone or its proteolytic product lactoferricin B, which is the N-terminal positively charged region in LF [104,105]. It is of note that the iron-binding property of LF had no effect on the early stage of virus binding to the host cells in vitro [104], indicating that the iron-chelating property of LF affects mainly virus replication and the inflammatory steps thereafter.
LF can affect the replication of the virus, because the anti-viral activity of LF is synergistic with the anti-viral drug remdesivir in cell culture [105]. Furthermore, LF combined with diphenhydramine, an antihistamine used for allergy symptoms, can reduce the replication of the SARS-CoV-2 by 99% in vitro suggesting that LF in vivo may shorten the recovery time [106,107]. Individually, LF and diphenhydramine each inhibited SARS-CoV-2 virus replication by about 30% in vitro [106]. This may be due to the fact that in COVID-19 patients, the virus "hijacks" stress-response machinery, including sigma receptors, in order to replicate in the body. Interfering with that signaling appears to be the key to inhibiting the virus's potency. Data from the experiments show that combinations of highly specific sigma receptor binding products, such as diphenhydramine and LF have the potential to prevent virus infection and decrease recovery time from COVID-19 [106]. Thus, anti-viral effect of LF against SARS-CoV-2 variants is at least in part mediated through preventing the virus from binding to the target cell surface and inhibiting virus replication, which would be predominantly effective during the early phase of the virus infection in the salivary glands and throat when given orally, and in the nose and upper respiratory tract when given as a nasal spray [83]. Orally supplemented LF might also have potential in the more distal parts of the gastro-intestinal tract, as SARS-CoV-2 is known to also cause gastro-intestinal complaints in 18% of COVID-19 cases [7].
Anti-Microbial Activity
Xu et al. [108] recently mapped the research hot spots and development trends regarding the antibacterial effect of LF. Based on this analysis, it is clear that the development of LF as a natural antibacterial is a rapidly evolving research area [108]. The COVID-19 outbreak indirectly led to an annual increase in the number of publications.
LF has a strong affinity for iron, an element essential for cell growth and proliferation. Thanks to its capacity to sequester iron, it deprives microbes of this essential element for their growth and development [23]. It therefore has recognized anti-microbial properties (Table 3); this was in fact the first discovered and remains one of the best-known characteristics of LF [109]. LF reduces the growth of a wide variety of microorganisms, including gram-negative and gram-positive bacteria, fungi, and protozoa [57,[110][111][112]. It effectively inhibits the growth of Candida tropicalis, Escherichia coli, Helicobacter pylori, Legionella pneumophila, Staphylococcus aureus, Salmonella typhi, Streptococcus and Trichomonas vaginalis. In addition to the bioactivity of intact LF, studies have shown that some peptides formed from LF also have potent activity against specific pathogenic bacteria, whereas they do not inhibit beneficial bacteria such as bifidobacteria and lactobacilli [113]. The latest research showed an inhibitory effect of LF against the pathogen Streptomyces scabiei due to its short bioactive peptides, with bovine LF possibly being more effective than human LF [114].
Table 3. Physiological effects as part of COVID-19 pathogenesis (left column) and corresponding anti-microbial effects of LF (right column).
COVID-19 Pathogenesis | LF's Anti-Microbial Effect
LF and LF-derived peptides probably kill bacteria in 4 different ways:
1. The high iron affinity limits iron availability to microorganisms [115].
2. LF specifically interacts with lipopolysaccharides in the cell membranes of specific microbes, thereby causing fatal damage [116].
3. The interaction of LF with the bacterial membrane also induces the activities of other antibacterial factors, including lysozyme [84].
4. Finally, LF can exert antibacterial activity by inhibiting enzyme function [117].
Many in vivo preclinical studies showing anti-microbial effects have been reviewed by Teraguchi et al. [118]. The promising results obtained in these preclinical models led to clinical trials. In particular, the role of LF in human milk and infant formulas is supported by substantial clinical evidence [119]. Currently, LF is used as an additive ingredient in various foods, such as infant formula, but also yogurt, skimmed milk, and beverages. In vivo, LF added to formula milk or directly supplemented can prevent sepsis in high-risk preterm infants [22]. Preterm infants are at risk for sepsis, which causes high mortality and morbidity despite treatment with antibiotics. A recent meta-analysis of 12 controlled trials showed that LF supplementation decreased late-onset sepsis and shortened the hospital stay, indicating that LF may act against certain pathogens in these preterm infants [22].
In adults, Okuda et al., confirmed the antibacterial activity of LF in inhibiting colonization by Helicobacter pylori in humans [112]. This is one example of the applications of LF as an anti-microbial agent in humans. Various other studies have demonstrated that oral administration of LF can reduce bacterial and fungal infections in the gut [120,121].
According to a systematic review, the majority of patients with COVID-19 received antibiotics (71.9%) to prevent bacterial co-infections [122]. Bacterial co-pathogens are commonly identified in viral respiratory infections and are important causes of morbidity and mortality. However, in a meta-analysis of 24 studies the overall prevalence of bacterial infection in patients infected with SARS-CoV-2 was 6.9%, whereas in critically ill patients it was more common (8.1%) [122]. Secondary bacterial infection occurred in 14.3% of patients. The authors of the meta-analysis concluded that the majority of these patients may not require empirical antibacterial treatment [122]. To date it is unclear how many of these patients consumed LF-containing foods or supplements. LF's antibacterial effect may still be relevant for these patients to prevent such co-infections and/or secondary infections or to support treatment. However, this needs further clinical substantiation.
It is of note that LF, as a naturally occurring protein in saliva, is thought to help maintain microbial homeostasis in the oral cavity [123]. Aging is known to affect the composition of the oral microbiome, causing dysbiosis, increased infections, and persistent low-grade inflammation, which may ultimately compromise overall health [124]. In saliva, LF expression is affected by an inflamed oral mucosa, which can contribute to the occurrence of an oral dysbiosis [125]. Furthermore, LF is expressed at a lower level in saliva as we age [126], making the elderly more vulnerable to (infectious) diseases like COVID-19. The use of food products containing LF, such as dairy products, increases the level of LF in the oral or nasal cavity, thus strengthening the first line of defense against pathogenic bacteria and viruses [123].
Immune Modulation
Known as a natural antibiotic and anti-viral agent, LF is also an important component that bridges the innate and adaptive immune systems of mammals and plays roles in protecting human cells at all stages of life [21,123,[127][128][129]. With its positively charged N-terminus, LF can bind to negatively charged cell surfaces (e.g., proteoglycans), thereby regulating the innate and adaptive immune responses and influencing the expression of pro- and anti-inflammatory cytokines [21]. Preclinical studies demonstrated a role of LF in affecting the expression of several chemokines (e.g., IL-8), anti-inflammatory (e.g., IL-10) and pro-inflammatory cytokines (e.g., TNF-α, IL-12) [21]. For instance, in colostrum-deprived neonatal piglets receiving a formula enriched with LF, in vivo IL-10 production was increased in the spleen [130]. After LF treatment in an experimental colitis model, pro-inflammatory factors (TNF-α, IL-1β, and IL-6) significantly decreased, while anti-inflammatory factors (IL-10 and TGF-β) were maintained or even enhanced [131].
The effects of LF administration on the cytokine response in humans have been reported in several studies. In a non-blinded study, Ishikado et al. reported an increase of the anti-viral IFNα in healthy volunteers receiving 319 mg/d liposomal LF [132]. In post-menopausal women given 250 mg/d LF, the expression of pro-inflammatory markers (IL-1β, TNF-α, IL-6, IL-12, and C-reactive protein) decreased while the anti-inflammatory IL-10 increased [133] (Table 4). In pregnant women affected by anemia of inflammation, 200 mg/d LF administration decreased IL-6 and increased hematological parameters [70,80].
LF also affects the adaptive immune system by promoting: (1) the maturation of T-cell precursors into competent helper cells, and (2) the differentiation of immature B-cells into antigen presenting cells [21,134] (Table 4). Clinically, Mulder et al. reported effects on cells from the adaptive immune system [73]. With a dosage of 200 mg/d LF a significant increase in total T-cell activation, T-helper cell activation and cytotoxic T-cell activation was reported. Furthermore, LF knock-out mice have a deficient B-cell and intestinal development, and are more susceptible to periodontitis and experimental colitis [135,136]. Table 4. Physiological effects as part of COVID-19 pathogenesis (left column) and counteracting immune effects of LF (right column).
COVID-19 Pathogenesis | LF's Immune Effect
Prominent early features of COVID-19 include a pronounced reduction in B cells, which are important in the defense against SARS-CoV-2. | LF has a profound modulatory action, promoting the differentiation of immature B-cells into efficient antigen-presenting cells.
The immune system may over-react, sending in neutrophils, T-helper (CD4) and cytotoxic T-cells (CD8) that release pro-inflammatory cytokines, especially IL-1 and IL-6. | LF increases total T-cell activation, T-helper cell activation and cytotoxic T-cell activation, and suppresses cytokine levels including IL-6 and TNF-α.
The 'cytokine storm' damages normal lung cells more than the virus it targets, leading to acute respiratory distress syndrome (ARDS). | In a model of pulmonary ARDS in granulomatous inflammation, LF can reduce pulmonary pathological features.
Some cytokines, including IL-6, IL-10, and TNF-α, have been described as biomarkers related to severe SARS-CoV-2 infection. | LF supplementation decreases levels of cytokines including IL-6 and TNF-α, and increases IL-10.
Many genes involved in the innate immune response, including endogenous LF, may participate in SARS-CoV-2 clearance. LF is highly elevated (up to 150-fold) in SARS-CoV-1 patients in comparison with healthy volunteers and influenza virus infected patients [137,138]. Furthermore, some cytokines, including IL-6, IL-10, and TNF-α, have been described as biomarkers related to severe SARS-CoV-2 infection [139][140][141][142]. According to the bibliometric analysis published by Xu et al. [108], the "cytokine storm" is one of the main pathogenic mechanisms of SARS-CoV-2 virus-induced COVID-19. Therefore, inhibiting the cytokine storm may be a good approach toward combating COVID-19 infection [143]. As mentioned earlier, LF exerts immunomodulatory actions by inducing T-cell activation, suppressing the levels of cytokines including IL-6 and TNF-α, upregulating ferroportin and transferrin receptor 1, and down-regulating ferritin, pivotal actors of iron and inflammatory homeostasis [70,132,144] (Table 4). Consequently, LF inhibits intracellular iron overload, an unsafe condition enhancing in vivo susceptibility to infections, as well as anemia of inflammation [70]. In a model of pulmonary acute respiratory distress syndrome (ARDS) in granulomatous inflammation [74], it was found that LF can reduce or eliminate the cytokine excess and pulmonary pathological features caused by Mycobacterium tuberculosis [145,146]. In addition, LF was also able to diminish the hyperacute immunopathology that develops in murine models of Mycobacterium tuberculosis infection [147,148]. LF inhibits SARS-CoV-2 infection in different cell models with multiple modes of action, including enhancing interferon responses [105,149] (Table 4). Overall, these recent research studies may make LF an exciting clinical candidate for the treatment or prevention of SARS-CoV-2 infection in the future.
Microbiota Modulation
The efficacy of LF as a selective modulator of the microbiome has been confirmed in several tests in vitro, in animal models and in human studies [57,150,151]. LF successfully altered the gut microbiota by eliminating pathogenic microorganisms and increasing beneficial bacteria, such as bifidobacteria and lactobacilli, thereby restoring the state of eubiosis and protecting against the serious consequences of dysbiosis. Several studies showed that intact LF as well as proteolytic fragments derived from it selectively stimulate the growth of beneficial bacteria and act as a selective microbiota modulator, thereby preventing the proliferation of pathogens that cause diarrhea, such as Salmonella or rotavirus [59,151,152].
Even though COVID-19 is mainly a respiratory disease, there is accumulating evidence indicating that the gut is involved as well. According to a recent meta-analysis of 60 studies comprising more than 4000 patients, about 18% of COVID-19 patients suffer from gastrointestinal complaints like diarrhea, nausea/vomiting, or abdominal pain [7]. Associations between gut microbiota, ACE2, and inflammatory markers in COVID-19 patients indicate that the gut microbiota is involved in the extent of disease severity, possibly via modulating host immune responses. Furthermore, the gut dysbiosis after disease resolution could contribute to lasting symptoms, emphasizing a need to understand how the gut microbiota are involved in COVID-19 [48,49,153,154]. A recent proof-of-concept trial showed that a well-studied probiotic, Lactobacillus LGG, is correlated with extended time to development of COVID-19 infection, decreased incidence of symptoms, and changes in the gut microbiota when used within a week after exposure [155]. Whether LF induces a similar response in COVID-19 patients needs further research.
Intestinal Barrier Function
LF is able to promote cell proliferation and differentiation in the gastrointestinal tract. High concentrations of LF stimulate intestinal epithelial cell proliferation, whereas low LF concentrations stimulate intestinal differentiation [156]. Healthy gut epithelial cells act as a transcellular barrier, while the tight junctions between cells provide a paracellular barrier against translocation of microbes and substances from the lumen [59]. In a preclinical model, oral administration of LF significantly enhanced tight junction protein expression (Claudin-1, Occludin, and ZO-1) and lowered intestinal permeability, suggesting an improvement in intestinal barrier function [131,157,158]. In humans, oral recombinant human LF supplementation reduced a drug-induced increase in gut permeability and hence may provide a nutritional tool in the treatment of permeability-associated illnesses [159]. An intact gut epithelial barrier is critical for normal physiological functions and decreasing bacterial translocation. These data suggest a role for LF in supporting gastrointestinal health.
Even though SARS-CoV-2 is thought to transmit mainly via lung droplets, it can invade enterocytes, causing symptoms and acting as a reservoir. Gut symptoms can be the first clinical sign in children [160]. It has recently been shown that the epithelial barrier function is compromised in COVID-19 patients, leading to an inflammatory response [161]. Moreover, ACE2, the main SARS-CoV-2 receptor, contributes to the maintenance of epithelial barrier function, and its expression is negatively correlated with SARS-CoV-2 virus load [48,162]. As mentioned earlier, LF stimulates the growth of beneficial gut microbiota and the proliferation/differentiation of enterocytes, with direct anti-inflammatory activities [150,156]. These effects strengthen mucosal immunity and the gut epithelial barrier. They have not been examined in COVID-19 patients, but this could be considered an interesting approach in the battle against SARS-CoV-2 infection, as they have positively affected recovery from other coronaviruses.
Lactoferrin Intervention Studies in COVID-19 Patients
Considering the results obtained from (pre)clinical studies, several human trials have been published and 9 are in progress to investigate the anti-SARS-CoV-2 actions of LF for both prevention and adjunct therapy of COVID-19 [13]. First of all, an ex vivo study reported that LF may inhibit SARS-CoV-2 entry into the nasopharynx and oral mucosa of COVID-19 patients by either directly binding to the viral particles or blocking the virus (co-)receptor present on the host cell [163]. This suggests that LF may already play a role in the early stage of infection. The evidence collected so far from 4 human studies is limited [69,85,164,165] (Table 5). However, some encouraging results are reported, especially in relation to the duration of the infection and the decrease of symptom severity. The interventions included LF and recombinant human LF. In 3 studies LF was encapsulated, because LF is a protein and subject to digestion in the gastrointestinal tract after oral administration. Two studies compared non-protected LF with encapsulated LF [132,166]. These studies indicate that encapsulation of LF improves its absorption.
In the Italian studies, the time required to achieve a negative SARS-CoV-2 PCR result was significantly lower, and the effectiveness on symptom resolution was progressively higher in older adults (Table 5).
A prospective observational study in 75 COVID-19 patients in Spain demonstrated that the combined oral administration of liposomal LF and a zinc solution for 10 days allowed a complete and prompter recovery of all treated patients within the first 5 days of treatment. The same treatment, but at a lower dose, seems to exert a potential preventive effect against COVID-19 in healthy people directly related to the affected patients [69,165]. Furthermore, Rosa et al. showed that a significant correlation existed between age and the effectiveness of LF in reducing the days of symptoms; the effectiveness of this treatment on symptom resolution was progressively higher in parallel with increasing age [165]. This could be associated with the hormonal control of human LF synthesis, which decreases with age [168]. The elderly, and in particular those suffering from Alzheimer's disease, show lower salivary human LF levels [126,169]. A few immunological outcomes improved within the LF-supplemented group (IL-6, D-dimer), but others did not (TNF-α, IL-10, adrenomedullin) [69]. These findings are promising, as monitoring of the course of COVID-19 infection revealed high levels of IL-6 and IL-10, but not always TNF-α, as markers of morbidity and mortality [142]. In contrast, an Egyptian trial showed no significant differences in the reduction of symptoms and immune parameters between 54 COVID-19 patients with mild-to-moderate symptoms receiving the approved Egyptian COVID-19 management protocol and patients receiving the same treatment plus non-encapsulated LF at a dosage of 200 mg/d [167]. This negative outcome might be due to the limitations of the trial: short duration of treatment (7 days) and/or limited sample size (18 patients/group). A recent systematic review covering studies of a diversity of viral respiratory tract infections, including SARS-CoV-2, concluded that LF (dosage range 200-1000 mg/d) may help to decrease symptom duration and severity in SARS-CoV-2 infections, although the results of the included studies are inconsistent according to the authors [85]. Further studies with larger samples as well as longer-term trials are required to understand the role of LF in treating SARS-CoV-2.
It is of note that 9 ongoing trials are registered in the WHO international trial registry platform focusing on LF intervention in COVID-19 patients (Table 6). New data will therefore become available in the future, allowing a more conclusive judgement on LF's potential benefits as a supporting nutritional intervention. Overall, even if larger clinical trials are required, these recent intervention trials indicate that early treatment of COVID-19 patients with LF could be one of the best strategies to avoid disease onset, progression and severity, especially in patients of advanced age. These results, combined with the high tolerance consistently shown in the studies, make LF supplementation an interesting intervention for further investigation.
Information Gaps and Research Opportunities
Key questions that remain to be answered include the dose required, the best form of LF to use (e.g., intact LF or LF peptides, (non-)encapsulated, (minimally) processed, and capsules, liquid or powder) and the best route of administration (oral or intranasal). In the human intervention trials only intact LF has been used so far, in a dose ranging from 200 mg to 1000 mg. The formats vary from capsules with LF encapsulated into liposomes or non-encapsulated to a spray form. Thus far, trials with encapsulated LF have shown promising results, whereas the one trial using non-encapsulated LF did not. Increasing LF's stability and/or its absorption might improve clinical outcomes. Encapsulation might be one way, but other food options are also possible, such as combining it with milk-derived osteopontin in a dairy drink [170]. Two routes of administration were tested, oral and intranasal. However, it is too early to draw any conclusions on the preferred route based on current evidence.
Most of the evidence being cited in support of LF's ability to deal with COVID-19 is based on preclinical research. The human trials that have assessed LF as a COVID-19 treatment have some limitations (short duration of treatment, limited sample size, no placebo group and a low dosage). Further work needs to be done to truly understand how LF can be applied in varying COVID-19 cases. It is important to note that gold-standard clinical studies are required to generate results that have high confidence. Regulators around the world tend to set a high threshold for claims that a product can maintain health, treat or cure a disease. This means that for regulators to approve a health claim or a disease-reduction claim on LF, they would likely require evidence from multiple large randomized placebo-controlled clinical trials relevant to the local population. The population to be studied would have to be clearly defined, because clinical findings in one subset of the population (e.g., elderly) will not necessarily translate to another part of the population (e.g., young adults). Although LF displays the many bioactivities (anti-viral, anti-bacterial, immune-supporting, microbiota- and barrier-function modulation) described in this review, there is currently a lack of robust human clinical research to suggest it can specifically prevent or cure COVID-19. However, 9 trials are ongoing, which will no doubt generate more data in the future, allowing a more conclusive judgement on LF's potential benefits as a supporting nutritional intervention.
Whilst it is understandable that consumers want to proactively seek solutions for themselves, it is important that they do not resort to as yet unproven COVID-19 treatments. Although the recent vaccines provide hope for reducing the risk and severity of infections, the authors strongly reiterate the advice of many governments and health organizations that there is currently no concrete scientific evidence to suggest that any treatment is guaranteed to specifically prevent or cure COVID-19.
In order to further build the science behind bovine LF's effect on the immune system and/or pathogen infection, future well-designed randomized placebo-controlled studies could focus on middle-aged or elderly people to see if LF either increases a vaccine response or if it ameliorates a response to a "controlled stressor" (inactivated pathogen) or a real SARS-CoV-2 infection. Middle-aged or elderly volunteers are selected as target population because LF was shown to be more effective in the older population than the younger one. The vaccine response model may shorten the timeframe of the trial and decrease the number of participants needed. It also decreases the risk that volunteers will not conform to the study specifications or will drop out altogether. By inducing symptoms or disturbing the physiological balance through the introduction of a controlled stressor, it is possible to speed up validation, while requiring fewer participants. With this approach, it might be possible to get a health benefit of LF substantiated more quickly and cost-efficiently.
Conclusions
The existing evidence suggests that daily intake of LF may have protective effects against SARS-CoV-2 infection, as illustrated in Figure 3. In particular, this review highlights the multiple possible physiological effects of LF (iron binding, anti-viral, anti-bacterial, immune-supporting, microbiota- and barrier-function modulation) in the battle against COVID-19. LF intake may protect the host from viral infections by preventing the binding and entry of various SARS-CoV-2 variants and by sequestering iron needed for viral replication. Furthermore, it indirectly counteracts virus attacks through its immune-, microbiome-, and intestinal barrier-modulatory characteristics. These six physiological effects of LF mostly support gut and lung health of people at risk of or suffering from COVID-19. Most of the scientific evidence originates from preclinical and ex vivo studies. However, the results of the human intervention studies support the preclinical findings and demonstrate a potential anti-viral and immune-supportive effect of 200 mg-1000 mg LF involving multiple sites of the human immune system, covering the innate and adaptive immune system. In particular, LF influences: (1) expression of pivotal biomarkers of COVID-19 such as IL-6; (2) B-cell differentiation, total T-cell activation, T-helper cell activation and cytotoxic T-cell activation; (3) intestinal permeability, strengthening the intestinal barrier function; (4) the incidence and length of common and summer colds, as an indication of an improved defense against viral infections in the respiratory tract, which is further supported by (5) the effects of LF on the length and severity of symptoms in COVID-19 patients. However, the available evidence needs to be expanded with well-designed human intervention studies in order to confirm its protective effects against SARS-CoV-2, determine the dose required, identify the best form of LF to use (e.g., intact LF or LF peptides, encapsulated or non-encapsulated), and establish the best route of administration (oral or intranasal). From the results to date of human studies measuring immune responses, it is not possible to indicate which part of the immune system is effective in the protection against viral infections of the respiratory tract. However, 9 new intervention studies are on their way to further study the protective role of LF in the battle against COVID-19. The near future may therefore provide new data on the potential benefits of LF supplementation as a supporting nutritional measure.
|
2022-12-14T16:18:02.639Z
|
2022-12-01T00:00:00.000
|
{
"year": 2022,
"sha1": "341d5baa542e77c62c5d27d14e28bf38d5f31c57",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/14/24/5274/pdf?version=1670665109",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2853edc6fcbab73285accba294a33b1e8b764082",
"s2fieldsofstudy": [
"Medicine",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
14690782
|
pes2o/s2orc
|
v3-fos-license
|
Oral health of visually impaired schoolchildren in Khartoum State, Sudan
Background Although oral health care is a vital component of overall health, it remains one of the greatest unattended needs among the disabled. The aim of this study was to assess the oral health status and oral health-related quality of life (Child-OIDP in 11-13-year-old) of the visually challenged school attendants in Khartoum State, the Sudan. Methods A school-based survey was conducted in Al-Nour institute [boys (66.3%), boarders (35.9%), and children with partial visual impairment (PVI) (44.6%)]. Two calibrated dentists examined the participants (n=79) using DMFT/dmft, Simplified Oral Hygiene Index (OHI-S), dental care index, and traumatic dental injuries (TDI) index. Oral health related quality of life (C-OIDP) was administered to 82 schoolchildren. Results Caries experience was 46.8%. Mean DMFT (age≥12, n=33) was 0.4 ± 0.7 (SiC 1.6), mean dmft (age<12, n=46) was 1.9 ±2.8 (SiC 3.4), mean OHIS 1.3 ± 0.9. Care Index was zero. One fifth of the children suffered TDI (19%). Almost one third (29%) of the 11–13 year old children reported an oral impact on their daily performances. A quarter of the schoolchildren (25.3%) required an urgent treatment need. Analysis showed that children with partial visual impairment (PVI) were 6.3 times (adjusted) more likely to be diagnosed with caries compared to children with complete visual impairment (CVI), and children with caries experience were 1.3 times (unadjusted) more likely to report an oral health related impact on quality of life. Conclusions Visually impaired schoolchildren are burdened with oral health problems, especially caries. Furthermore, the 11-13 year olds' burden with caries showed a significant impact on their quality of life.
Background
The prevalence of blind children globally is estimated to be 1.4 million, three-quarters of whom live in the poorest regions of Africa and Asia [1]. In low-income countries, the prevalence of childhood blindness may be as high as 1.5 per 1000 children [2]. Such a high prevalence, alongside poor management of resources, may result in huge impacts. Childhood blindness impacts negatively on longevity, with up to 60% of blind children dying within one year of losing their eyesight [3]. Early-onset blindness may impact psychomotor, social, and emotional development, thus adversely affecting the visually impaired young child [1].
Childhood blindness in developing countries is a result of acquired factors such as measles, ophthalmia neonatorum, traditional eye medicine, and especially corneal scarring related to malnutrition and vitamin A deficiency [1]. A study conducted in five camps for internally displaced people in Khartoum, Sudan, reported a prevalence of 1.4 per 1000 children suffering from blindness. In this case, the reported leading cause was corneal opacities (40%), from vitamin A deficiency, trauma, or measles. Opacities were followed by amblyopia (32.5%) [4].
Oral health and dental care of the disabled have generally been poorer than in the general population [5]. High DMFT/dmft scores were manifest in groups of visually impaired schoolchildren in India (6-12 years, mean DMFT of 4.87) and Riyadh, Saudi Arabia (6-7 years, mean dmft of 6.58, SD 2.02, and 11-12 years, mean DMFT of 3.89, SD 2.67) [6,7]. Poor oral hygiene, gingivitis and periodontal diseases have been reported among visually impaired children in studies from India [6][7][8], Iran [9], and Turkey [10]. Mann et al. suggested that this can be due to their inability to visualize the plaque on tooth surfaces, resulting in inadequate plaque removal and therefore the progression of dental caries and inflammatory disease of the periodontium [11]. Shetty et al. [8] proposed other factors such as lack of manual-visual coordination and parental supervision, and the child's reduced concern for his/her appearance [8]. There are very few studies addressing the impact of the severity of visual impairment on the oral health of blind children. While all studies agreed that children with partial visual impairment have better oral hygiene than those with complete visual impairment, caries experience was not significantly different between the two groups of blind children [10].
However, visually challenged children show better oral health scores when compared to children with other sensory, physical or intellectual disabilities [12][13][14].
The prevalence of traumatic dental injuries among children with visual impairment varied from 9% in Saudi Arabia, 23.1% in Sao Paulo, Brazil, and 24.6% in Kuwait to 32.5% in India [15][16][17][18]. Yet, these prevalence figures are lower than those reported in young children with physical and mental disabilities [19].
All in all, the impact of visual impairment on oral health is not conclusive in the literature. This study aimed to assess the clinical oral health status in terms of: dmft/DMFT index, Oral Hygiene Index, Traumatic Dental Injuries and dental treatment needs, and secondly to assess the oral health-related quality of life using the Child-OIDP questionnaire in 11-13-year-old attendees of Al-Nour Institute for the visually challenged children in Khartoum State, the Sudan. Moreover, the study aimed to examine the relationships between these clinical variables, sociodemographics and visual impairment.
Methods
A school-based survey was conducted at Al-Nour Institute for the visually challenged in Khartoum-Bahri (Khartoum North), the only school teaching Braille in Khartoum, Sudan. The Sudanese school system is composed of two levels, primary and secondary. The primary schools include classes from grade 1 to grade 8. Al-Nour school follows the primary school model. Being the only school providing Braille, the age range of the children attending is very wide (6-18 years). As of 2010 this mixed public school had 92 pupils: 61 (66.3%) boys and 31 (33.7%) girls. Of the whole school population, 33 pupils were boarders (35.9%).
Field work was conducted between November and December 2010. Data were collected through clinical examination, face to face interviews (personal data and The Child Oral Impacts on Daily Performances (Child-OIDP) questionnaire) and school records.
According to the International Classification of Diseases (Update and Revision 2006) [20], there are four levels of visual function, namely normal vision, moderate visual impairment, severe visual impairment and blindness. Moderate visual impairment combined with severe visual impairment is grouped under the term "low vision": low vision taken together with blindness represents all visual impairments. In this study we have categorized visual impairment into level 2 and level 3 to describe partial visual impairment and level 4 (blindness) to represent complete visual impairment.
Obtained from the school records were the age and visual status, thus categorizing the children into 'complete' and 'partial' visual impairment; CVI and PVI respectively.
Clinical examination
Two calibrated paediatric dentists (AT, AE) performed the clinical examination under adequate natural light using a plane mirror and a blunt explorer. Caries was measured using the DMFT/dmft index according to WHO criteria [21]. Dental caries experience was DMFT>0 or dmft>0. Caries was detected at the cavitation level only (detectable softened floor, undermined enamel or softened wall). The criterion of "catching" or "retention" of the explorer was not used to detect caries. An explorer was used to remove large debris and to aid in assessing the oral hygiene.
Oral hygiene was assessed using the Simplified Oral Hygiene Index (OHI-S) of Greene and Vermillion (1964), including both components, the Debris index and the Calculus index [22]. The score was recorded from six index teeth per child (all four first molars, and the upper right and lower left central incisors). Labial surfaces were examined for all teeth with the exception of the lower molars, where the lingual surfaces were examined. Codes for the Debris index were as follows:
0 = absence of debris or extrinsic stain;
1 = debris covering not more than one third of the tooth surface;
2 = debris covering more than one third but not more than two thirds of the tooth surface, regardless of the presence of extrinsic stain;
3 = soft debris covering more than two thirds of the examined tooth surface.
The Calculus index was scored as follows:
0 = no calculus present;
1 = supragingival calculus covering not more than one third of the exposed tooth surface;
2 = supragingival calculus covering more than one third but not more than two thirds of the exposed tooth surface, or the presence of individual flecks of subgingival calculus around the cervical portion of the tooth, or both;
3 = supragingival calculus covering more than two thirds of the exposed tooth surface, or a continuous heavy band of subgingival calculus around the cervical portion of the tooth, or both.
Accordingly, the oral hygiene of each child was classified as good, fair, or poor. Categories for OHI-S values were as follows: poor (≥ 2), fair (1.0-1.9) and good (≤ 0.9). For the bivariate analysis, the OHI-S poor and fair categories were combined to describe 'poor' oral hygiene.
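As an illustration of how these codes translate into an OHI-S value and category, the sketch below assumes the standard Greene and Vermillion computation, in which the debris and calculus scores are each averaged over the six index teeth and then summed; the example scores are hypothetical, not taken from the study data.

```python
def ohi_s(debris_scores, calculus_scores):
    """Simplified Oral Hygiene Index for one child.

    debris_scores / calculus_scores: per-tooth codes (0-3) for the six
    index teeth. DI-S and CI-S are the means of these codes; OHI-S is
    their sum (standard Greene & Vermillion computation, assumed here).
    """
    di_s = sum(debris_scores) / len(debris_scores)
    ci_s = sum(calculus_scores) / len(calculus_scores)
    return di_s + ci_s

def classify_ohi_s(score):
    """Categories used in this study: good <= 0.9, fair 1.0-1.9, poor >= 2."""
    if score <= 0.9:
        return "good"
    if score < 2.0:
        return "fair"
    return "poor"

# Hypothetical child: six index teeth (four first molars, upper right and
# lower left central incisors).
debris = [1, 2, 1, 0, 1, 1]
calculus = [0, 1, 0, 0, 0, 1]
score = ohi_s(debris, calculus)
print(round(score, 2), classify_ohi_s(score))   # 1.33 fair
```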
Dental trauma was measured using the traumatic dental injuries (TDI) index [23]. Codes of the TDI were as follows: code 0=no TDI, code 1=treated TDI, code 2=enamel fracture only, code 3=enamel/dentine fracture, code 4=pulp injury, code 5=tooth missing due to trauma. A code of 9 was given if for any reason a tooth or tooth space could not be scored, or did not warrant a code of 0 to 5.
Treatment needs were categorized into two groups: Urgent treatment need, defined as pain inside the mouth, possible pulpal involvement, or broken or missing restorations with decay, and non-urgent treatment need, defined as any or all of the following: no pain in the mouth; decay present but most likely not involving the pulp; broken restorations with no decay or marginal discoloration, gingivitis or periodontal involvement [24].
The Child Oral Impacts on Daily Performances (Child-OIDP) Questionnaire
Oral health-related quality of life was measured using the eight-item Child-OIDP questionnaire [25], validated previously in a Sudanese child population [26]. This inventory has the ability to provide information on condition-specific impacts, whereby the respondent attributes the impacts to specific oral conditions or diseases, thus contributing to the needs assessment and the planning of oral health care services [27]. In the classical questionnaire, tested on Sudanese children in 2008, the participating children were first presented with a list of 16 impairments: toothache, sensitive teeth, tooth decay (hole in teeth), exfoliating primary teeth, tooth space (due to a non-erupted permanent tooth), fractured permanent tooth, colour of tooth, shape or size of tooth, position of tooth, bleeding gum, swollen gum, calculus, oral ulcers, bad breath, deformity of mouth or face, erupting permanent tooth and missing permanent tooth. Criterion and concurrent validity for the 8-item Child-OIDP inventory was demonstrated in that the mean Child-OIDP sum score increased as children's self-reported oral health changed from good to bad and from satisfied to dissatisfied. These results were all statistically significant. The questionnaire was re-introduced to 10 of the students to test reproducibility, and the weighted Cohen's Kappa was 1.0 for all variables. In this study, participants were not prompted on all 16 impairments. Those that were dropped were: colour, shape and size, position, and deformity of mouth or face, assuming that the participants could not make a fair judgment on these based upon their visual challenge.
From the presented impairment list, the schoolchildren selected those that they experienced in the past 3 months. Then, they were asked about the frequency and severity of each of the 8 Child-OIDP items, e.g. 'Has your oral health affected your eating habits, speaking, mouth cleaning, relaxing, maintaining your emotional state, smiling, schoolwork and contact with people in the past three months?' If the schoolchild responded positively, he/she was asked about the frequency and severity of each impact, e.g. "How often did this happen? How severe was it?' A single impact frequency scale for individuals affected on a regular basis was used. The frequency and severity of impacts were scored on a 3 point Likert scale (1-3) as follows: Frequency scores (1) being once or twice a month, (2) three or more times a month, or once or twice a week (3) three or more times a week. Severity scores; 1= little effect, 2= moderate effect and 3= severe effect. Lastly, the children were asked to mention the impairments they thought caused the impact on each performance. A maximum of 3 impairments per impact were recorded.
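For reference, the sketch below illustrates how a Child-OIDP impact score is typically derived from these frequency and severity ratings. The scoring rule used (performance score = frequency × severity, overall score expressed as a percentage of the maximum of 72 across the eight performances) follows the commonly used Child-OIDP procedure and is an assumption here, since this paper itself only reports the prevalence of impacts.

```python
def child_oidp_score(impacts):
    """Overall Child-OIDP score as a percentage of the maximum.

    impacts maps each of the 8 daily performances to a
    (frequency, severity) pair on the 1-3 scales described above,
    or to None if the performance was not affected. Scoring rule
    (assumed, not quoted from this paper): performance score =
    frequency x severity (max 9); overall score = sum / 72 * 100.
    """
    total = 0
    for rating in impacts.values():
        if rating is not None:
            frequency, severity = rating
            total += frequency * severity
    return total / (8 * 9) * 100


# Hypothetical child: only eating and cleaning the mouth were affected.
example = {
    "eating": (2, 1), "speaking": None, "cleaning the mouth": (1, 1),
    "relaxing": None, "emotional state": None, "smiling": None,
    "schoolwork": None, "contact with people": None,
}
print(f"Child-OIDP score: {child_oidp_score(example):.1f}%")  # 4.2%
```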
Children were asked about their perception of and satisfaction with their oral health status. "How do you perceive your oral health?" Possible answers were: 0-I do not have an idea, 1-very good, 2-good, 3-bad, 4-very bad. "Are you satisfied with your oral health?" Possible answers were: 0-I do not have an idea, 1-very satisfied, 2-satisfied, 3-not satisfied, 4-not satisfied at all. Children were asked whether they had visited a dentist in the past.
Information on the children's parental education and occupation was collected. Children were asked to choose from one of the following categories for education if the parent was alive: No education, primary education, secondary education, tertiary education/university, preschool Quranic education, do not know. These were further combined into two groups: Not educated (no education, primary), and educated (all others). Children who answered 'don't know' were excluded. For occupation the categories were as follows: Not working, housewife (for mothers), teacher, government employee, employee in private sector, business, student, other. This information was combined into two groups, not working (housewife, and not working) and working (all others).
Ethical clearance was obtained from the University of Khartoum ethical committee and consent was obtained from the school authorities and parents. All students who required dental treatment were referred to University based paediatric dental clinics for dental care.
Statistical methods
Analyses were conducted using SPSS 17.0 (SPSS Inc., 2009). Frequencies, means and crude percentage agreement were computed for descriptive purposes. Cohen's Kappa (n=10) was applied for test-retest reliability. Binary logistic regression was applied to assess the relationship of the oral health variables with sociodemographic characteristics and visual impairment. The model with caries experience as an outcome was the only one that was statistically significant (p < 0.05).
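As an illustration of how odds ratios such as those reported later (e.g., adjusted OR 6.3, 95% CI 1.7-22.7 for caries in children with PVI) are obtained from such a model, the sketch below fits a binary logistic regression in Python with statsmodels rather than SPSS; the data frame and its values are hypothetical placeholders for the study variables, not the actual data set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical toy data, one row per child; column names mirror the study
# variables but the values are illustrative only.
df = pd.DataFrame({
    "caries":  [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1],  # caries experience (yes = 1)
    "pvi":     [1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1],  # partial (1) vs complete (0) impairment
    "male":    [1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    "boarder": [0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0],
})

# Unadjusted odds ratio from the 2 x 2 table (visual impairment x caries).
table = pd.crosstab(df["pvi"], df["caries"])
a, b = table.loc[1, 1], table.loc[1, 0]   # exposed (PVI): caries yes / no
c, d = table.loc[0, 1], table.loc[0, 0]   # unexposed (CVI): caries yes / no
print("Unadjusted OR:", (a * d) / (b * c))

# Adjusted odds ratios: fit a binary logistic regression, then exponentiate
# the coefficients and their 95% confidence limits.
X = sm.add_constant(df[["pvi", "male", "boarder"]])
fit = sm.Logit(df["caries"], X).fit(disp=False)
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.round(2))
```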
Sample profile
The response rate was 85% (n=79) for the clinical examination, and 89% (n=82) for the Child-OIDP questionnaire. The children who dropped out were absent from school on the visit day. The clinically examined sample (n=79) consisted of 55 (69.6%) boys and 24 (30.3%) girls. Half of them (n=51, 55.4%) suffered complete visual impairment. One third lived on campus (n=33, 35.9%). The age range of the study participants was 6-18 years with a mean age of 11.8 ± SD 3.1. Table 1 describes the socio-demographics of the sampled population. Kappa results: during data collection, a group of 10 children (more than 10% of the study sample) was re-examined by both investigators (AE, AT) to assess inter-examiner reliability for dental caries diagnosis. The mean inter-examiner agreement (Kappa value) was 0.91.
Caries experience
Caries experience was 46.8%. Mean DMFT (age≥12, n=33) was 0.4 ± 0.7 and significant caries index (SiC) for permanent teeth was 1.6. Mean dmft (age<12, n=46) was 1.9 ± 2.8 and significant caries index for primary teeth was 3.4. Caries experience for deciduous teeth (dmft) was 23.9% and for permanent teeth was 19.6%. The decayed (D,d) component formed the largest contribution to dmft and DMFT. No significant differences were found when comparing DMFT of boys to girls (mean DMFT was 0.34 ± 0.75 and 0.39 ± 0.92 respectively). Similarly, when comparing boarding and non boarding students, mean DMFT was 0.36 ± 0.82 versus 0.36 ± 0.80, respectively, with no significant difference. Care Index (FT/DMFT and ft/dmft) was zero for all.
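The Significant Caries Index (SiC) values quoted here follow the usual definition of SiC, namely the mean DMFT (or dmft) of the one-third of the group with the highest scores. The short sketch below illustrates that calculation on hypothetical values, since the individual scores of the study sample are not reported.

```python
def significant_caries_index(dmft_scores):
    """Mean DMFT/dmft of the one-third of individuals with the highest
    caries scores (conventional SiC definition)."""
    ordered = sorted(dmft_scores, reverse=True)
    top_third = ordered[: max(1, len(ordered) // 3)]
    return sum(top_third) / len(top_third)

# Hypothetical dmft values for nine children (not the study data):
dmft = [0, 0, 0, 1, 1, 2, 3, 4, 7]
print("Mean dmft:", sum(dmft) / len(dmft))        # 2.0
print("SiC:", significant_caries_index(dmft))     # mean of [7, 4, 3] = 4.67
```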
A quarter of the schoolchildren (25.3%) required an urgent need for treatment. Deep dental caries with possible involvement of the pulp or related dental abscesses constituted more than 90% of these urgent treatment needs.
In bivariate and multivariate analysis, visual impairment was significantly associated with caries experience (Table 2). After adjusting for all variables (Table 2), it was found that children with partial visual impairment (PVI) were about 6 times (OR 6.3, 95% CI 1.7-22.7) more likely to be diagnosed with caries than their counterparts with complete visual impairment (CVI).
Oral hygiene
Mean OHI-S was 1.3 ± 0.9 for the whole sample. OHI-S values were grouped into poor (≥ 2), fair (1.0-1.9) and good (≤ 0.9). Of the whole sample, only 21.5% had poor oral hygiene (OH), 43% had fair OH and 35.4% had good OH. The prevalence percentages of the OHI-S categories (good, fair, and poor) were compared according to gender, visual impairment, caries experience, residence and dental trauma. This revealed that a considerable proportion of boarders (41.9%) had significantly poorer oral hygiene (OHI-S) compared to non-boarders (p = 0.001), and more boys had poor oral hygiene than girls (p = 0.03). No significant determinants of OHI-S were found in this sample (Table 2).
[Table fragment: "Not satisfied at all" 3.7; Past dental visit: "Never visited" 92.4 (p = 0.000), "Yes" 7.6. P-values report the significance of the Chi-square test comparing the prevalence between groups; differences were considered significant at P < 0.05 (Chi-square test).]
The Child Oral Impacts on Daily Performances (Child-OIDP)
Only 15.9% of the whole examined population (n=13) reported an oral impact on their daily performance. The most frequently reported impairment associated with an oral health-related impact on daily performance was toothache, followed by sensitivity. In the age group 11-13, 8 out of 28 children (29%) reported oral health-related impacts on quality of life. The most commonly reported impairments were toothache followed by exfoliating teeth. Bivariate analysis did not show any significant associations between the impact on quality of life and the clinical and non-clinical parameters.
Bivariate analysis was run for all examined outcomes: caries, OHI-S, TDI and C-OIDP impact. However, the logistic regression model was used only for caries. The examined variables could not be fitted into a statistically significant regression model with the other outcomes, implying that the measured variables did not explain those outcomes.
Discussion
Al-Nour Institute, being the only comprehensive school for the visually impaired in Khartoum, the capital of the Sudan, is formed of a diverse group of students in terms of ethnicity and socio-economic status. Although the authors are aware that the sample population of this report may not be representative of all blind children in Sudan, it should be emphasized that the subjects of the study belonged to the only teaching institution for blind children in the capital Khartoum. In this population, the proportion of children with caries experience was found to be higher (twice as high) than the reported proportion among non-disabled 12-year-old Sudanese schoolchildren (24%) [28]. On a global level, the proportion of caries-free children (53.2%) in this study was higher than those reported from comparable populations in Turkey (26.4%) [10], India (1.5%) [8] and Kuwait (35.5%) [13]. Differences in the proportion of caries-free children could be attributed to differences in dietary patterns and accessibility to sweet snacks in these populations. Shetty and co-authors, in the latter study, stated that the higher consumption of sweets and in-between snacking, in addition to the daily serving of a sweet dish at school, could be the reason for the very high proportion of blind children with decayed teeth [8]. On the other hand, the caries severity (DMFT) of the study participants was found to be similar to the reports from the most recent study of Sudanese schoolchildren (DMFT 0.4, SD 0.92) [28]. Other studies on visually impaired children reported higher caries severity (DMFT) [6,12,13]. The caries severity reported in this study might have been diluted by the wide age range of this sample. Variations in examination procedures may also be a contributing factor. For instance, Reddy and Sharma [6] used a sharp probe for examination, while no probe was used for caries detection in the present study. Shyama et al. [13] examined subjects facing a window, not under direct sunlight as in the current study. While the Significant Caries Index (SiC) for children 12 years and older in this study was below 3, in keeping with the global oral health goal for 12-year-olds for the year 2015 [29], the SiC for children below 12 years old was above 3 (3.4). However, there is no WHO target for SiC levels in the primary teeth of children less than 12 years old. Since there is no such target, no comparison of the study finding with a standard value was possible. An interesting finding in this study was that children with PVI were more likely to be diagnosed with caries than their counterparts (CVI). Although there is scarce evidence on the relationship between the degree of blindness and caries experience, Desai et al. in 2001 reported a significant inverse association between the level of independence in self-care activities in children with disabilities and the number of decayed teeth and DMFT/dmft index [30]. This finding, which is challenged by the findings of the current study, supports the assumption that children with CVI, being less independent than children with PVI, have more carious teeth. Moreover, a recent study in 2012 by Bekiroglu et al. revealed no significant association between the degree of blindness of 7-16-year-old visually impaired students and their caries experience [10].
In this study, most of the children were found to have a fair standard of oral hygiene. However, a considerable percentage of boarders in this study had significantly poorer oral hygiene when compared to non-boarders. In contrast to our findings, most studies of visually impaired children report fair to poor levels of oral hygiene [8,9,12]. In addition to common factors such as the blind child's lack of manual-visual coordination and reduced concern for his/her appearance [6], the suboptimal levels of oral hygiene of those living on campus in this study population could be attributed to a lack of assistance or supervision by caregivers during the performance of oral hygiene practices.
Traumatic dental injuries are common among schoolchildren, with a prevalence ranging from 6.19% to 58.6% [31]. Baghdady et al. [32] found a 5.1% prevalence of TDI among non-disabled schoolchildren in Khartoum province. However, no published data are available regarding TDI among attendees of Sudanese special needs schools. Children with disabilities are considered to be at a higher risk for TDI, where prevalence rates as high as 32.5% have been previously reported among visually impaired schoolchildren [18]. In the present study, the significantly higher occurrence of traumatic dental injury among completely blind children (CVI) was in accordance with Bhat et al. [18] and O'Donnell [33]. The relatively high prevalence of TDI in this group (19%) could be attributed to the lack of social inclusion policies both inside and outside the school environment. Although the school floor was levelled all around, several pillars supporting the buildings lined the playground. These clearly formed a risk for accidents. Even though more than 70% of TDI were categorized as mild dental trauma, it is worth mentioning that none of the traumatic dental injuries was treated or even seen by an oral health professional.
Dental care is the most frequently unmet health care need for children with special health care needs [34]. In this study population, almost a quarter of the children needed urgent dental treatment. These findings reflect a serious lack of access to dental treatment. Barriers to receiving dental care such as the cost of the service, transportation, and lack of trained and experienced dentists are commonly cited in the literature [35]. Other barriers to equal access to dental treatment for individuals with disabilities include inadequate facilities due to restricted financial resources and complex treatment needs requiring special care or general anaesthesia [36]. All these reasons similarly apply to the Sudanese context.
The findings of the present study demonstrated an extensive unmet dental treatment needs (dental caries and dental trauma). Dental care was not a priority in the school. The investigators in this study met with the school teachers and educated them on the importance of oral health care and provided practical guidelines to daily oral hygiene practices and dental treatment options. This report is expected to draw the attention of the authorities to this deprived group, to establish oral preventive and curative programs.
In the literature, a number of oral health-related quality of life (OHRQoL) measures have been developed to assess and describe the oral impacts on people's quality of life. Five of these instruments were designed to assess OHRQoL in children specifically. These include the following questionnaires: the Child Perception Questionnaire (CPQ11-14), the Michigan OHRQoL scale, the Child Oral Health Impact Profile (Child-OHIP), the Early Childhood Oral Health Impact Scale (ECOHIS) and the Child Oral Impact on Daily Performance (Child-OIDP). None of these tools has considered children with disability. For this reason, and because the Child-OIDP was the only tool validated in Arabic and in the Sudan, it was used in this study. Findings from this study emphasize the necessity of constructing a questionnaire for oral health-related quality of life in children with special needs.
The Child-OIDP questionnaire was adapted at the level of impairment selection. Psychometric properties and further scoring of outcomes were not studied because only 15.9% (n=13) reported an oral impact on their daily performance. The low response could also have been a result of the questionnaire not being designed for challenged children.
The strength of this study lies in the representativeness of this survey for schoolchildren with visual impairment in Sudan. This is the first report examining the oral health of the visually impaired in Sudan. A limitation of the study was the use of DMFT to measure caries experience; this index usually underestimates caries because it measures only frank cavitations [37]. Summing up DMFT and dmft was another study limitation. However, the main interest was to report on caries experience and its examined determinants, and in that respect the summation was appropriate.
Conclusion
The findings of this study showed the caries experience of the visually challenged schoolchildren to be high, with those with partial visual impairment more likely to be diagnosed with caries. This population has extensive dental treatment needs and an extremely deficient dental care index. In the age group 11-13, a significant oral health-related impact on quality of life was evident.
Recommendations
Relevant oral health promotion and treatment programs need to be established urgently. More attention has to be directed by the oral health authorities to establish school-based dental care programs comparable to those in elementary schools of the non-disabled children.
|
2017-06-22T14:48:02.935Z
|
2013-07-17T00:00:00.000
|
{
"year": 2013,
"sha1": "3550ca72feeaae47081986bcb3b20fde0446344b",
"oa_license": "CCBY",
"oa_url": "https://bmcoralhealth.biomedcentral.com/track/pdf/10.1186/1472-6831-13-33",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a82e50154eec4a284f1676118986115b8d829de8",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
249163884
|
pes2o/s2orc
|
v3-fos-license
|
Jurnal Presipitasi, Regional Case Study: Application of Water Conservation Concept in X Apartment, Bogor Regency
The increase in population in Bogor Regency is directly proportional to the rapid development of residential building construction, including apartment buildings. As a result, the need for clean water has also increased. Water conservation is an effort to maintain the availability of clean water in sufficient quality and quantity to serve current and future clean water needs. The concept of water conservation in Apartment X in Bogor Regency can be applied using WAC 2 (Water Fixtures) and WAC 3 (Water Recycle). The Apartment X building has a building population of 998 people with a daily clean water demand of 91,545 liter/day. The wastewater generated by the apartment building is 73,236 liter/day. It is estimated that the implementation of water conservation could save clean water use by 28.08%, or around 25,701.42 liter/day.
Introduction
Development in various fields in Indonesia has increased rapidly. One area that is experiencing rapid growth is the construction of residential buildings. One of the factors that influence the development of an area is its population. According to data from the Bogor Regency Central Statistics Agency (2020), Bogor Regency in 2019 recorded a population of 4,699,282 people. The development carried out in Bogor Regency includes various types of buildings, including hotel and apartment buildings. The Apartment X building is a residential building consisting of 21 floors, located in Pabuaran Village, Cibinong District, Bogor Regency. The increase in the number of hotel and apartment buildings has increased the population, accompanied by an increase in the need for clean water. At present, Bogor Regency is experiencing a lack of groundwater availability, so that it is almost impossible to depend on only one source of water. One way to meet the increasing water demand is to apply the concept of water conservation. Therefore, this research was conducted to design a system in which the new building is able to maintain water availability and quality efficiently.
Water conservation (WAC) is an effort made to maintain the continuity of water availability in terms of quantity and quality, both now and in the future. According to the Green Building Consultant (2021), the WAC concept consists of 2 prerequisite WAC categories and 6 WAC categories. The prerequisite WAC categories are WAC P1 (Water Metering) and WAC P2 (Water Calculation/Calculation of Water Use); these two prerequisite categories must be applied before implementing the other WAC categories. The six WAC categories are WAC 1 (Water Reduction), WAC 2 (Water Fixtures/Water Features), WAC 3 (Water Recycling), WAC 4 (Alternative Water Resources), WAC 5 (Rainwater Harvesting), and WAC 6 (Water Efficiency Landscaping) (Green Building Consultant, 2021). In addition to the WAC P1 and WAC P2 concepts that must be applied at the beginning of building construction, the water conservation concepts to be applied to increase the percentage of water savings in Apartment X are WAC 2 and WAC 3. The WAC 2 concept, the use of water fixtures, can save clean water because of the high efficiency of the plumbing equipment used, while the application of WAC 3, namely water recycling, can save clean water by utilizing treated wastewater; this water becomes an alternative water source so that flushing does not need to use clean water from the local water company.
Several studies have examined the application of water conservation in buildings of the same type: the application of WAC 2 and WAC 3 at the Menara Cibinong Tower E Apartments, which can save up to 33% of water use or 305.88 m³/day (Wahyudi et al., 2019); the application of rainwater harvesting at the Cibinong Tower, which can save 3.48% of clean water (Anantika et al., 2019); water savings in the Panghegar Resort Dago Golf-Hotel & Spa building, which can save 34% of water use, equivalent to 141,690 liter/day (Rinka et al., 2014); and the application of rainwater utilization and water recycling in Apartment X, which can save 54.22% of water use or 51,089.72 liter/day in the dry season and 31.75% or 29,748.46 liter/day (David et al., 2019).
The result of this study is expected to show the amount of water use that can be saved after implementing water conservation in the form of installing water-saving plumbing equipment and reusing gray water that has been treated at the sewage treatment plant.
Methodology
This research method consists of several stages: calculating the clean water demand, the wastewater generation, the water-saving efficiency from applying WAC 2, and the water-saving efficiency from applying WAC 3. The calculation of water demand begins with the building population, which is obtained by dividing the effective room area by the standard area per person (Neufert, 2002). The clean water demand is then calculated by multiplying the building population by the standard clean water requirement (SNI 03-7065-2005). The resulting clean water demand is used to estimate the amount of wastewater generated: the wastewater produced is 80% of the clean water demand, of which 75% is gray water and 25% is black water (Hardjosuprapto, 2000). The equations used to calculate the population, clean water demand, wastewater generation, and the gray water and black water fractions are given in equations 1-5.

Two water conservation concepts are applied in this plan, namely the WAC 2 (Water Fixture) and WAC 3 (Water Recycle) points. These two concepts were chosen because the availability of water from clean water sources in the planning area is threatened by a clean water crisis, which requires buildings to implement water conservation efforts to reduce clean water use from the LOCAL WATER COMPANY. Installing water fixtures reduces the clean water drawn from the LOCAL WATER COMPANY, while water recycling reduces the use of tap water by substituting treated wastewater for flushing purposes. The water fixtures in question are plumbing devices that save clean water, such as a water closet flush valve or a high-efficiency faucet. The water-saving plumbing equipment used in this planning, such as water closet tanks, urinals, lavatories, faucets, and showers, is brand X plumbing equipment, because brand X equipment uses water-saving technology compared to conventional plumbing equipment. The factors considered in selecting brand X plumbing equipment are price and availability in the market. The comparison of the amount of water used by water-saving and conventional plumbing equipment, together with the water consumption standards, can be seen in Table 1 and Table 2. The amount of water that can be saved by using water-saving plumbing equipment is calculated with equation 6:

The amount of water that can be saved = Conventional water usage - Water-saving equipment usage (6)

Occupancy is the percentage of plumbing equipment usage, while the usage factor indicates the number of plumbing fixtures used. The units differ between fixtures: for water closets and urinals the average use is expressed in uses/person/day, whereas for showers and faucets it is expressed in minutes/use. This is because water closet and urinal use is counted from the number of flushes, while faucet and shower use is counted from the length of time the water runs. The gray water produced will be treated in the Sewage Treatment Plant (STP) and then reused for flushing in the water closet and urinal plumbing fixtures. The reuse of treated water is part of the application of WAC point 3, water recycling.
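As a minimal sketch of equations 1-6 under the assumptions stated above, the calculations can be written as follows; the function and variable names are descriptive placeholders rather than the paper's notation, and the headline figures in the comments are taken from the text.

def building_population(effective_area_m2, area_per_person_m2):
    # Eq. 1: population = effective room area / standard area per person (Neufert, 2002).
    return effective_area_m2 / area_per_person_m2

def clean_water_demand(population, liters_per_person_day):
    # Eq. 2: clean water demand = population x per-capita standard (SNI 03-7065-2005).
    return population * liters_per_person_day

def wastewater_split(clean_water_lpd):
    # Eqs. 3-5: wastewater = 80% of clean water, of which 75% is gray and 25% black.
    total = 0.80 * clean_water_lpd
    return total, 0.75 * total, 0.25 * total

def wac2_saving(conventional_lpd, efficient_lpd):
    # Eq. 6: saving = conventional usage - water-saving fixture usage.
    return conventional_lpd - efficient_lpd

# Reproducing the paper's headline numbers:
demand = 91_545.0                        # liter/day for 998 occupants (from the text)
print(wastewater_split(demand))          # (73236.0, 54927.0, 18309.0) liter/day
print(wac2_saving(14_243.2, 9_831.32))   # ~4411.88 liter/day saved by WAC 2 fixtures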
According to GBCI (2013), the sources of gray water that can be recycled are wastewater from the use of lavatories, ablution faucets, and showers, as well as pool water and other water. The recycled water is used for flushing. Before calculating the amount of recycled water produced, the flushing requirement is calculated by multiplying the water needs of the plumbing equipment by the daily frequency of plumbing use. The flushing requirement obtained can then be used to calculate the amount of recycled wastewater by multiplying the flushing requirement by the number of uses and the population. The equations used to calculate the amount of recycled wastewater are presented in equations 7-8.
Flushing Needs = Plumbing equipment water needs x Plumbing equipment daily usage (7)

Recycled Water = Flushing needs x Total Usage x Total Population (8)

The calculation results of the WAC 2 and WAC 3 points are used to calculate the overall water conservation effort. These results show the amount of water that can be saved by using the WAC 2 and WAC 3 concepts as a water conservation effort in the Apartment X building. The equations for calculating the amount of water conservation are given in equations 9-10.
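Similarly, a minimal sketch of equations 7 and 8, with placeholder names and made-up example inputs (not the paper's tabulated values), is:

def flushing_need(fixture_water_l, daily_uses):
    # Eq. 7: flushing need = fixture water demand x daily usage frequency.
    return fixture_water_l * daily_uses

def recycled_water(flushing_need_l, total_usage, population):
    # Eq. 8: recycled water = flushing need x total usage x total population.
    return flushing_need_l * total_usage * population

# Example with assumed values: a 6-liter flush used 5 times/day/person by 998 occupants.
need = flushing_need(6.0, 5.0)
print(recycled_water(need, 1.0, 998))  # liter/day of treated gray water required (illustrative only)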
Result and Discussion
The Apartment X building is a residential building consisting of 20 floors plus 1 top floor: the 1st floor is intended for shops, the 2nd to 20th floors serve as residential areas, and the top floor contains a prayer room and a hall that can be accessed by general guests (non-residents). Apartment X is 65 meters high (Apartment X, 2021). The location of Apartment X is presented in Figure 1. The building has a total of 291 residential units and 14 shophouses, and a building area of 647.80 m². The shophouse units consist of 3 types, namely types C22, C33, and C44, while the residential units consist of 8 types, namely types T22, T33, T44, T45, T55, T66, T77, and T88. The shophouse and residential unit types are distinguished by unit area. The floor designations in the Apartment X building are detailed in Table 3.

The estimated population of the Apartment X building is calculated using equation 1, and the result is used to calculate the clean water demand. Referring to data from the Central Bureau of Statistics of Bogor Regency (2020), the population is 50.4% male and 49.6% female. This ratio is used as a reference in calculating the non-room plumbing equipment needed for the public bathrooms on the top floor: it gives the male and female populations separately, and the number of plumbing fixtures required is then determined by comparing the population with the minimum plumbing fixture standards listed in SNI 8153:2015. The calculation of the total population of the Apartment X building is carried out using equation 2, giving 998 people. The results of the non-room plumbing fixture calculation and the total population are presented in Table 4 and Table 5.

The clean water used by the Apartment X building comes from the Regional Drinking Water Company (LOCAL WATER COMPANY) of Bogor Regency. The planned clean water system is divided into two types: primary clean water and secondary clean water. Primary clean water is sourced from Tirta Kahuripan and is used for plumbing fixtures such as showers, faucets, lavatories, and jet sprays, while secondary clean water comes from the Sewage Treatment Plant (STP) and is used for flushing in fixtures such as water closet tanks and urinals. Calculating the daily clean water demand of the building gives an estimate of the amount of clean water needed to meet its needs; equation 2 is used to determine this amount, and the results are given in Table 6.

The clean water system planned for the Apartment X building is equipped with 4 tanks, consisting of 2 ground water tanks (GWT) and 2 roof tanks (RT). The GWT functions as a storage tank for clean water from the water treatment plant. The clean water is not pumped directly to the RT because the supply from the water treatment plant is not always stable, so a storage tank is needed to maintain the availability of water to meet the building's clean water needs.
The RT is a water storage tank located at the top of the building that covers the building's peak water demand. The capacity and dimensions of the RT are determined by fluctuations in water usage at certain hours, while the GWT capacity is determined by the building's daily water requirement. The water flow starts from the primary clean water source, Tirta Kahuripan; the primary clean water is stored in GWT1, which has a capacity of 109.85 m³ with dimensions of 9 m (length) x 4.5 m (width) x 3 m (height), before being pumped to RT1. The pump used for the clean water flow from GWT1 to RT1 is an SP 17-10 type pump with a capacity of 305.17 L/min, an efficiency of 69%, and a pump power of 4.92 kW. The primary clean water stored in RT1, which has a capacity of 32.04 m³ with dimensions of 6 m (length) x 3 m (width) x 2 m (height), is then distributed to plumbing fixtures such as faucets, jet sprays, kitchen sinks, lavatories, and showers.

The use of these plumbing fixtures produces wastewater in the form of gray water and black water. Both types of wastewater require treatment: gray water will be reused for flushing, and black water must meet the domestic wastewater quality standards before being discharged into water bodies. The domestic wastewater quality standards are regulated in the Minister of Environment and Forestry Regulation Number 68 of 2016, shown in Table 7. Domestic wastewater from household activities contributes 78.9% of surface water pollution; the domestic wastewater in question is non-toilet wastewater originating from bathing, washing, and kitchen use (Busyairi et al., 2020). This non-toilet domestic wastewater, also known as gray water, is conveyed to the sewage treatment plant (STP) by gravity. After treatment in the STP, the gray water becomes secondary clean water that is used for flushing. This secondary clean water is stored in the GWT2 tank, which has a capacity of 49.43 m³ with dimensions of 6 m (length) x 3 m (width) x 3 m (height), and is then pumped to RT2, which has a capacity of 14.42 m³ with dimensions of 4 m (length) x 2 m (width) x 2 m (height), before flowing to fixtures that use a flush, such as water closet tanks and urinals. The pump used for the secondary water flow from GWT2 to RT2 is an SP 9-13 type pump with a capacity of 137.3 L/min, a pump efficiency of 69.8%, and a pump power of 1.94 kW.

The water closet tanks and urinals also produce black water wastewater, which flows into a septic tank. The septic tank used for black water treatment is a Biofive septic tank with an anaerobic system, with dimensions of 9 m (length) x 1.1 m (width) x 2.15 m (height). The septic tank has two biofilter media, PVC honeycomb and bio balls, and can convert fecal sludge into a liquid that can be discharged into waterways because the tank is also equipped with a disinfectant tube (Biofive, 2021). The schematic of the water flow in the Apartment X building is shown in Figure 2.
Wastewater generation resulting from activities using plumbing equipment can be estimated using equations 3-5. The recapitulation of the calculation of wastewater generation in Apartment Building X is presented in Table 8.
Application of Water Conservation 2 (Water Fixture)
The installation of water-saving plumbing equipment is the effort used to implement the WAC 2 point and reduce water use. The installed water-saving plumbing equipment is expected to use less water than conventional plumbing equipment. The calculated water usage of conventional and water-saving plumbing equipment is presented in Table 9. Based on this table, the urinal has the largest percentage of water savings among the plumbing fixtures, reaching 88% or 162.38 liter/day.
From the calculation of the WAC 2 application, the water use with conventional plumbing is 14,243.2 liter/day, while with brand X water-saving plumbing it is 9,831.32 liter/day. Based on equation 6, the amount of water that can be saved by using water-saving plumbing fixtures is 4,411.88 liter/day, a water saving of 30.98%.
Application of Water Conservation 3 (Water Recycle)
In addition to WAC 2, the water conservation effort applied in the Apartment X building is water recycling, which falls under the WAC 3 point. The recycled water is gray water, which comes from plumbing fixtures such as lavatories and kitchen sinks as well as wastewater from the floor drains; the gray water generation can be seen in Table 8. The application of water recycling supports water conservation because the availability and stability of the clean water supply can be maintained (Lahji, 2015). The STP is a system that can process domestic wastewater into class 2 clean water, which can be used for flushing or discharged directly into receiving water bodies (Topare et al., 2019). The type of STP used is the Biotechno STP, made of fiberglass, with a maximum efficiency of 90%. The Biotechno STP is suitable for apartments and is safe to use for treating gray water into clean water suitable for flushing or for direct discharge into waterways (Biotechno, 2021). The Biotechno STP combines two systems, aerobic and anaerobic, which together effectively reduce organic loads such as BOD, COD, TSS, ammonia, TDS, and total coliform at affordable operational cost (Busyairi et al., 2019). The treatment in the Biotechno STP involves a rotating biological contactor (RBC), a biological water treatment that uses attached culture. At this stage, Biochemical Oxygen Demand (BOD) and Chemical Oxygen Demand (COD) are removed, with a treatment efficiency of 80-85% (Rahmawati et al., 2019). The advantage of using an RBC in wastewater treatment is that it does not require a large area and the oxygen supply is obtained naturally from the rotation of the discs (Rahmawati et al., 2019).
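As a purely illustrative example of applying the RBC removal efficiency quoted above (the influent concentration below is an invented figure, not a measurement from the building):

def effluent_concentration(influent_mg_per_l, removal_efficiency):
    # Remaining concentration after treatment at the given removal efficiency.
    return influent_mg_per_l * (1.0 - removal_efficiency)

print(effluent_concentration(250.0, 0.85))  # 37.5 mg/L remaining after 85% removal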
The amount of recycled wastewater is influenced by the flushing frequency and the flushing requirement in the building. The flushing frequency indicates how often the toilets and urinals are used in one day; water closets and urinals are used for urination and defecation. The frequency of urination varies from person to person but ranges from 6-8 times per person per day, while defecation ranges from 1-3 times per person per day (Choerunnisa, 2020). The flushing frequency can be distinguished by the type of building population: for building occupants the WC flushing frequency is 5 times/day/person, while for building visitors/guests it is 2 times/day/person for the WC and the same for the urinal (David et al., 2019). Occupants are people who live in or own residential units and shophouses in the apartment building, while guests are people who do not live in or own units but visit the shophouses or have access to common areas of the building, such as the hall or prayer room on the top floor. The recycled water requirement is calculated using equations 7 and 8; the detailed results are presented in Table 10.
Application of Water Conservation Efficiency
The water conservation efforts implemented in the Apartment X building consist of two WAC categories: the installation of water-saving plumbing equipment (WAC 2) and water recycling (WAC 3). Clean water is sourced from the LOCAL WATER COMPANY Tirta Kahuripan, and the clean water demand of the Apartment X building reaches 91,545 liter/day. The use of water-saving plumbing equipment and water recycling is expected to reduce the use of clean water sourced from the local water company. The balance of clean water and wastewater is presented in Figure 3. The application of the WAC 2 and WAC 3 concepts is estimated to save 25,701.42 liter/day, or 28.08% of the total clean water demand of 91,545 liter/day. The application of WAC 2 saves 4,411.88 liter/day of clean water, while the application of WAC 3 is estimated to save 21,290 liter/day. The amount of water conserved is calculated using equations 9 and 10, and the results are shown in Table 11. Compared with studies of other similar buildings, this 28.08% saving is relatively low; for example, a study of the Cibinong Tower E apartment building applying the same WAC points, WAC 2 and WAC 3, reported water savings of up to 33%. The difference can be influenced by differences in demand and daily use between the buildings. Greater savings could be achieved by implementing additional WAC points, such as rainwater harvesting or water efficiency landscaping.
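A quick numerical check of these totals, using only the figures quoted above, can be written in a few lines of Python:

# Consistency check of the quoted savings; all numbers are taken from the text above.
wac2 = 4_411.88        # liter/day saved by water-saving fixtures (WAC 2)
wac3 = 21_290.0        # liter/day of treated gray water reused for flushing (WAC 3)
demand = 91_545.0      # liter/day total clean-water demand of the building
print(wac2 + wac3)             # ~25,702 liter/day, close to the 25,701.42 quoted (rounding of the WAC 3 figure)
print((wac2 + wac3) / demand)  # ~0.281, i.e. the ~28.08% saving reported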
Conclusion
Apartment X building has a building population of 998 people with daily clean water needs of 91,545 liter/day. The wastewater generated by the apartment building is 73,236 liter/day, with details of black water being 18,309 liter/day and 54,927 liter/day for gray water. The implementation of water conservation efforts in Apartment X is estimated to be able to save 28.08% of clean water use or 25,701.42 liter/day.
|
2022-05-30T15:06:50.584Z
|
2022-01-27T00:00:00.000
|
{
"year": 2022,
"sha1": "99aae10b5af70f92972cb39c43b3a041a83f0d9d",
"oa_license": "CCBY",
"oa_url": "https://ejournal.undip.ac.id/index.php/presipitasi/article/download/40478/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b3f035dfa67062951efff96c993bf97c1be8e105",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
}
|
45465703
|
pes2o/s2orc
|
v3-fos-license
|
Greens Function for Anti de Sitter Gravity
We solve for the retarded Greens function for linearized gravity in a background with a negative cosmological constant, anti de Sitter space. In this background, it is possible for a signal to reach spatial infinity in a finite time. Therefore the form of the Greens function depends on a choice of boundary condition at spatial infinity. We take as our condition that a signal which reaches infinity should be lost, not reflected back. We calculate the Greens function associated with this condition, and show that it reproduces the correct classical solution for a point mass at the origin, the anti de Sitter-Schwarzchild solution.
Introduction
The physics of gravitation at energies small compared to the Planck scale is well described by Einstein's theory of general relativity. Unfortunately, the nonrenormalizability of this theory means that we cannot use perturbation theory to extract predictions at higher energies. At such energies, then, either perturbation theory or general relativity is inapplicable. It is often believed that to learn anything interesting about quantum gravity, we must first find a formulation of it which allows us to make predictions at all orders.
Unfortunately, it seems that we are a long way away from accomplishing this goal.
An alternative approach is to consider quantum gravity at low energies, and see what might be learned. At low energies there is no doubt that physics is correctly described by the action functional of Einstein's gravity, plus an infinite series of counterterms which diverge in the ultraviolet, but go to zero in the infrared limit [1]. This approach tells us, for example, that the gravitational anomaly of the standard model must cancel, which gives an observationally verified restriction on the particle content of the model [2]. This low-energy gravity approach has recently been used by Tsamis and Woodard to suggest a mechanism for suppression of the cosmological constant [3,4]. They consider quantum gravity in a de Sitter space background, the maximally symmetric background for gravity with a positive cosmological constant. They develop a formalism for doing low-energy perturbation theory for gravity in this background [5,6,7], and argue that there is a natural relaxation of the effective cosmological constant over time [1].
In this paper we take the first step toward making a similar analysis for the case of a negative cosmological constant. The maximally symmetric background for gravity with a negative cosmological constant is anti de Sitter space. In this paper, we determine the appropriate retarded Greens function for the linearized graviton kinetic operator on a background which is anti de Sitter everywhere and at all times. The retarded Greens function G(x, x′) allows us to determine the linearized response of the metric to a distribution of energy-momentum T(x), as in equation (1). Furthermore, the average of the retarded and advanced Greens functions gives the imaginary part of the free propagator for the theory; from this, the free Feynman propagator can be deduced, up to terms which are dependent on the vacuum. This of course is the necessary ingredient in formulating a theory of Feynman diagrams for anti de Sitter space gravity.
In the first section of this paper (after this introduction), we discuss the geometry of the anti de Sitter space background, and define the coordinate systems which we will need to use. We do this for an arbitrary number D of spacetime dimensions. We show that this background has the peculiar feature that it is possible for an observer to travel from finite to infinite coordinate values (or vice-versa) in a finite time. Therefore, in solving for (1) one needs to specify what happens to a signal which reaches infinity [8]. In the next section we write and solve the gauge fixed equations of motion for gravity in the anti de Sitter background.
In the section after that, we solve explicitly for the retarded Greens function of anti de Sitter gravity in four spacetime dimensions, using the boundary condition that any signal which reaches infinity should be lost. In the penultimate section we show that our Greens function gives the right classical limit for a certain source; specifically, that it gives the anti de Sitter-Schwarzchild solution, up to a coordinate transformation, when the source T(x) is a single point mass. Finally, we conclude by discussing where to proceed from here, in particular how anti de Sitter space might fit into realistic models of the history of the universe, and how the analysis in this paper might be modified to accommodate this.
Anti de Sitter Space
Anti de Sitter space (AdS) in D dimensions can be described in terms of D + 1 coordinates* X^0, X^i, X^D, in terms of which the metric is flat. We obtain anti de Sitter space by restricting these coordinates to obey a quadratic constraint (4) involving some constant h. In this paper we will set h = 1 for convenience.
The space AdS contains closed timelike curves (e.g. the circle (X^0)^2 + (X^D)^2 = 1), along which an observer could travel and return to his own past. This undesirable feature can be removed by extending anti de Sitter space to its universal covering space (CAdS). This just means that we introduce an integer N (winding number) which we increment by 1 every time we go around the loop, and we consider different values of N for the same X to be different spacetime locations.
It is of course possible to solve (4) in terms of a set of D coordinates x^µ. These coordinates will then have a non-flat metric, given by (5). One such solution (given here for D = 4, but easily generalized to any D) is the static coordinate system used below.

* Notation: Latin indices i, j, etc. are spatial indices which go from 1 to D − 1. Greek indices will run from 0 to D − 1.
The metric in these coordinates can be calculated using (5). It is the static and isotropic metric with g_ττ = −1 − r^2 (7a). To obtain anti de Sitter space, we should take the range of τ to be 0 to 2π, and identify all observables at these two boundaries. The covering space CAdS can be obtained simply by letting τ take any real value; it is not necessary to introduce winding numbers when these coordinates are used.
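For reference, a standard static form of the four-dimensional anti de Sitter metric with h = 1, consistent with the component (7a) quoted here, is

\[ ds^2 = -(1+r^2)\,d\tau^2 + \frac{dr^2}{1+r^2} + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right), \]

though the labeling of the remaining components (7b), (7c) in the original may differ.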
From the metric (7) we can show the form of the Riemann tensor for anti de Sitter space. From this, we can see that the anti de Sitter metric obeys the gravitational equations of motion R_αβ = Λ g_αβ (10). The equation of motion of a massless signal is obtained by setting ds^2 = g_µν dx^µ dx^ν to zero. For a radially moving signal, this gives r = tan(τ − τ_0). This shows that a massless signal can get from zero to infinity or vice-versa in a finite time, namely π/2. This means that in CAdS a local event will causally influence all spatial locations after a certain finite time.
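As a consistency check on the travel-time statement, assuming the standard static form recalled above: for a radial null ray, setting ds^2 = 0 gives

\[ 0 = -(1+r^2)\,d\tau^2 + \frac{dr^2}{1+r^2} \quad\Longrightarrow\quad \frac{dr}{d\tau} = 1+r^2 \quad\Longrightarrow\quad \arctan r = \tau - \tau_0, \]

so that r = tan(τ − τ_0) and r → ∞ as τ − τ_0 → π/2, reproducing the finite crossing time quoted in the text.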
The consequence of this is that the CAdS Greens function depends on the choice of an extra boundary condition, at r = ∞ [8]. We will argue that the condition which works is that any signal which reaches r = ∞ should be lost.
The static coordinate system is not the most convenient for the calculations which are to follow. From (10) we can show that the conformal or Weyl tensor is identically zero for anti de Sitter space. This fact implies that there exist conformal coordinate systems in which the anti de Sitter metric is proportional to the Minkowski metric. One such coordinate system is* t, y, x^ī; the conformally flat form of the metric in these coordinates is recalled below. These conformal coordinates do not naturally extend to CAdS; winding numbers N need to be introduced. The two regions y > 0 and y < 0 actually correspond to different coordinate patches. It is not possible to go from positive y to negative y by passing through y = 0, but it is possible to do so by going through y = ∞. y = 0 is not really part of the space, since no finite values of the X coordinates make y zero. To emphasize the separation between positive and negative y regions, we will take the convention that regions with y > 0 are given integer winding numbers, while regions with y < 0 are given half-integer winding numbers.

* Barred Latin indices such as ī take values from 1 to D − 2. This convention and others will be fully introduced in the next section.
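For reference, the standard conformally flat form of the anti de Sitter metric in these coordinates (with h = 1 and the usual conventions) is

\[ ds^2 = \frac{1}{y^2}\,\eta_{\mu\nu}\,dx^\mu dx^\nu = \frac{1}{y^2}\left(-dt^2 + dy^2 + \delta_{\bar i \bar j}\,dx^{\bar i} dx^{\bar j}\right), \]

which makes manifest that the metric is proportional to the Minkowski metric, as stated above.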
Now consider a massless signal originating at x ′µ on the patch with winding number N.
By setting ds^2 = 0, we see that the signal will follow the usual path of a lightlike beam, the path x such that (x − x′)^2 = 0. If the signal is initially headed toward y = 0, then it will at some time hit y = 0. If it is initially headed away from y = 0, then it will go into the far future of sheet N, and reappear in the far past on the sheet N + 1/2. On this sheet it will be heading toward y = 0. Our boundary conditions will be that any signal which hits y = 0 is lost. Therefore a massless signal from sheet N can at most get to sheet N + 1/2 before it is lost. Massive (timelike) signals, on the other hand, can remain present for all N. These facts will be important in the construction of the Greens function.
We see that an observer at x^µ on sheet N can causally observe any event x′^µ on sheets N and N − 1/2 that lies within the usual light cone, and any event on sheets N′ < N − 1/2 no matter where or when. This is the same effect we saw using static coordinates. Although in conformal coordinates it takes an infinite amount of time t to cross to another N level, it is still possible for a massless signal to cross the entire spatial extent of the manifold without taking the entire temporal lifetime of the manifold to do it.
The condition (4), and hence the anti de Sitter metric, is invariant under the anti de Sitter group, the D(D + 1)/2 parameter group of Lorentz rotations of the X coordinates.
In static coordinates, τ translation and spatial rotations which shift θ, φ are some of the anti de Sitter transformations; the others are nonlinearly realized in these coordinates. In conformal coordinates, the anti de Sitter symmetries are realized as translational invariance in the D − 1 dimensional flat subspace, Lorentz invariance in this subspace, invariance under simultaneous dilatation of all coordinates, and D − 1 nonlinear symmetries. This is exactly analogous to the de Sitter case [9].
An important invariant under the anti de Sitter group is the distance function In the static coordinates this is In conformal coordinates it is Setting 1−z = 0 in either coordinate system is an alternate way to find the path of a massless signal. Since our Greens function should be invariant under the anti de Sitter group (at least up to gauge transformations), it will naturally be a function of 1 − z.
Gauge Fixing and Classical Solutions
We now expand the gravitational Lagrangian about the anti de Sitter background. The resulting Lagrangian for ψ is invariant under the gauge transformation, provided that the functions e_µ(x) are well-behaved enough at infinity to allow integration by parts. We borrow the following notation from [5] (some of which is standard): indices are raised and lowered by the (spacelike) Minkowski metric η_µν (since we have given up general covariance by working exclusively in the conformal coordinate system). Greek indices go from 0 to D − 1, while Latin indices take only spacelike values 1 to D − 1. Indices in parentheses are symmetrized (a_(µν) ≡ (1/2)(a_µν + a_νµ)). A bar over a δ or η tensor indicates the suppression of its non-flat (in this case y) components; e.g. the barred η_µν equals η_µν − δ^y_µ δ^y_ν. We also introduce the new notation that barred indices only run over flat coordinates. Thus a tensor carrying a barred index, such as the barred A_µ, must have zero y component.
Following [5], we expand out the quadratic part of the Lagrangian, and add the gauge Then the gauge fixed quadratic Lagrangian is We will now investigate solutions of the homogeneous equations of motion. The gaugeinvariant equations are If we demand F = 0, then solutions of the gauge-fixed equations (Dψ) µν = 0 are also solutions of the invariant equations. Let us look for such solutions. Following the analogous treatment of the Λ > 0 case [6], it is convenient to reexpress ψ: where ψ A is symmetric. We then calculate Thus the equations of motion for the A, B, C components are The general solutions to these equations are where h l are spherical Hankel functions* These functions satisfy recursion relations** * Specifically, they are spherical Hankel functions of the first kind. Their complex conjugates h * l are Hankel functions of the second kind. ** Note that for D = 4, (29c ) involves h −1 . In this case we define h −1 ≡ ih 0 , so that these relations work for l = −1 and l = 1 respectively.
We then need to enforce the gauge condition F µ = 0. In terms of the polarization coefficients A, B, C this condition becomes The space of solutions still has a residual gauge invariance under transformations which preserve F µ = 0. We could use this residual invariance to solve (32) and eliminate some parameters. There are several ways to do this. One would be to set all the B's and C to zero. Then we also need A to be traceless, and to satisfy Alternatively we could set all timelike components zero: A tµ = B t = 0. Then we need to This last solution gives a Fock space of manifestly nonnegative norm.
These are the solutions for F = 0. We can also ask if there are any solutions with F nonzero. In order that such solutions satisfy both the invariant and fixed equations of motion, F must satisfy The only solutions to this are where c is antisymmetric. There are not any ψ solutions of the free equations which are bounded for |x i | → ∞ and give such results for F .
Retarded Green's Function
The retarded Green's function for our theory must satisfy It must also obey retarded boundary conditions. As we have discussed previously, we need to extend AdS to the covering space CAdS. The time coordinates t, t ′ determine which is the earlier point only in the case that the winding numbers N, N ′ are equal. If N < N ′ then x corresponds to an earlier event than x ′ , and vice versa, regardless of t and t ′ . So the correct retarded boundary condition is We will proceed to solve this explicitly shortly, but first we discuss in general whether the solution to the gauge-fixed equations of motion generated by must also solve the gauge-invariant equations of motion when the source T satisfies the appropriate conservation law. This law is the requirement for any gauge parameters e µ (x). If T does not satisfy (38) then no solutions exist to (37).
Now consider the variation of the gauge-fixed action under a gauge transformation: Now evaluate this with ψ set equal to the solution (36). Then δS F ixed δψ αβ (x) is T αβ (x) by the gauge-fixed equations of motion. Then the left side of (40) must be zero because T is a conserved current. Then so too must the right side, and integrating this by parts we see that we must have It is easy to see that (33) implies (41), but not the other way around. If our solution also satisfies (33), then it will obey the invariant as well as gauge fixed equations of motion.
We now move on to the explicit determination of G. After some analysis which exactly parallels that of the de Sitter case [5], we determine the tensor structure of G: and a, b, c obey the equations Because of the delta functions, we can write these equations in the symmetric form To solve these equations explicitly, we expand the delta functions: The expansion (46c ) works for any l, but we choose l = 1 4 (D − 1) 2 − n to diagonalize D n .
We then similarly expand G n as We will first solve this for N = N ′ and later generalize. This solution is The evaluation of this integral depends strongly on the value of D. We will now evaluate it for the physical case of D = 4. To do so, we notice that our theory is Lorentz invariant in then we can, without changing the background metric, go to a frame where t − t ′ = 0, in which case we see that G n = 0.
Otherwise, we can go to a frame where x i − x ′i = 0. Then We write the k integral in polar coordinates and substitute u = √ k 2 + α 2 to get Now we look at this for the n values of interest. First n = 0 gives l = 1. We get The first term in (53) just gives delta functions. The others can be evaluated by contour integration, and we get Now we need to generalize this result to x i − x ′i = 0 and N = N ′ . The straightforward invariant and causal generalization is where we recall that 1 − z(x, x ′ ) is given by (16). (55) is constructed following the discussion at the end of the second section. The delta function term is a massless signal so it only propagates to the next half level. The theta function term is a timelike signal so its influence is felt on the next half level N = N ′ + 1 2 at all times such that and thereafter at all locations for N ≥ N ′ + 1 (hence the final term with no theta function).
If we had used some sort of reflective boundary conditions then the delta function would need to be continued to all N > N ′ (possibly with an alternating sign, depending on which type of boundary conditions were used).
So much for the case of n = 0. The other two cases are the same (n = 2) in D = 4. For N = N′ and x^i = x′^i they evaluate to an analogous expression, which generalizes in the same way. Putting it all together with the tensor factors, the final answer for our Greens function is given in (58). In the next section we show that this Greens function gives the correct anti de Sitter-Schwarzchild solution as its response to a single point mass.
Response to a Point Mass
Consider a point mass which is at rest at r = 0 in the static coordinates (7). These coordinates are related to our conformal coordinates as follows: x 1 = r sin θ cos φ r cos θ + 1 + r 2 cos τ −1 (59c) We see that a point mass at r = 0 has x 1 = x 2 = 0, and y 2 = t 2 + 1. So the path is y = ± √ t 2 + 1, the sign depending on whether we are on an N = integer level (y > 0) or N = half-integer level (y < 0). The action for a point particle of mass M which follows a spacetime path q µ (τ ) is The linearized energy-momentum tensor for the particle is then Calculating this for our particle, we find the nonzero components of T are This T must satisfy the conservation law which we get from (38): It is easily verified that (63) is in fact obeyed by (62).
We now wish to explicitly compute the response to the source T (x): We can tell by the forms of (58) and (62) that this response will have the form Now we need to evaluate the functions A through D. For definiteness we will take both y and t to be positive. We need to look at each term in (58) seperately: First, G 1 annihilates all components of T except T tt . Thus its contribution will be of the form , and D 1 = 0. Then we calculate where l ≡ −t 2 + x i x i + y 2 + 1 (68) (67) evaluates to Note that (70) gives a real s for any t and y. Now for the second part of G, which is: When we contract this with (62), we get The delta function for the minus sign is not zero for any t ′ values for which θ(−t + t ′ ) = 0.
For the plus sign, there is one such value, which is Then we use the identity to find the results Finally, there is the third term in the Greens function This gives a contribution of the form The constant T is formally divergent, but this divergence does not affect any physical observables. This is because it can be removed by a simple rescaling of the flat spatial coordinates: This divergence is a result of the unphysical assumption that our space was anti de Sitter since infinitely long ago. If we had used reflective boundary conditions, there would have been additional divergent terms whose forms would be similar to (75).
These do not seem to be removable by coordinate transformations.
For the form (65), we calculate which evaluates to We can see that this F , while not zero, satisfies the equation (33). Thus our solution will obey the gauge invariant equations of motion. One can check this directly by computing the first-order Riemann tensor for the solution (65): where, as one might expect, ǫ 12 = −ǫ 21 = 1, ǫ 11 = ǫ 22 = 0. From (80), we obtain the Ricci Substituting the explicit values for the coefficient functions, we can show to linearized order.
This shows that the point mass response solves Einstein's equations. Now we need to determine whether the solution generated is equivalent to the anti de Sitter-Schwarzchild solution. In static spherical coordinates τ, r, θ, φ, the anti de Sitter-Schwarzchild metric is with all off-diagonal components of g zero. Does a coordinate transformation exists from τ, r, θ, φ to t, x 1 , x 2 , y such that (83) It is easier to show indirectly that our solution is equivalent to (83). We can do this by comparing the Weyl tensors for both solutions. This is given in general by (11); for a metric such as ours which obeys Einstein's equations with Λ = −3, this is Evaluating this for our solution, we find These are the independent components of C; the other components can be obtained from these by switching x 1 ↔ x 2 , using the standard permutation symmetries of C, or using the fact that contracting any two indices of C with the metric gives zero. Compare this with the first-order Weyl tensor obtained from the anti de Sitter-Schwarzchild solution: We have written C with all indicies up, to facilitate transforming it to conformal coordinates.
Conclusions
We have determined the retarded Greens function for quantum gravity in the idealized case of a background which is anti de Sitter at all locations and all times. To do so, we needed to establish a boundary condition at spatial infinity (which corresponds to y = 0 in our conformal coordinate system). We mandated the boundary condition that any signal which reaches spatial infinity should be lost, not reflected back. We showed that this boundary condition gives the correct classical solution for the case of a point mass at the origin, up to of course a change of coordinates.
It is easy to go from the Greens function (58) to the Feynman propagator for gravity in anti de Sitter space for the case N = N ′ , up to real analytic terms which are annihilated by the kinetic operator. This will be As in the de Sitter case [5,9], the propagator cannot be made invariant under the anti de Sitter symmetry.
For N = N ′ it is not clear how to obtain the propagator, because of the other terms in the Greens function. Loss of quantum coherence may be a problem here, since information which reaches spatial infinity (y = 0 in conformal coordinates) will be lost. We still advocate the boundary conditions we have used here rather than reflective ones, because as we have shown, our boundary conditions give the correct classical solution (at least for one specific case). This loss of coherence will not be present when we go to physically realistic situations.
If it occurs in the case of infinite eternal anti de Sitter space, then it just serves to underscore the unphysicality of this mathematical model.
To proceed further, we would need to understand what modifications of this work would be necessary so that it might apply to a physically realistic situation. A negative cosmological constant might have existed in the physical universe as the result of a field theoretic phase transition associated with a symmetry breaking. Such a phase transition alters the effective vacuum energy by redefining the vacuum. There are two differences between this realistic situation and our mathematical model. First, by causality the phase transition cannot occur throughout the whole universe at one time. Therefore a region of anti de Sitter space would be surrounded by normal space. Secondly, the universe would not have been anti de Sitter for all time, as our model space in this paper was. These features will likely make dealing with a realistic anti de Sitter phase more difficult than the simple mathematical model which we have solved in this paper.
|
2018-04-03T06:06:34.021Z
|
1994-06-03T00:00:00.000
|
{
"year": 1994,
"sha1": "d53f7be6a3830ec3682c39946502fe00bea5b592",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/gr-qc/9406005",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d7e915beb369591757c429d602bb1b2ed2a1fa32",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
}
|
120861341
|
pes2o/s2orc
|
v3-fos-license
|
J-PARC decay muon channel construction status
The new Muon Science Facility (MUSE) that is now under construction at J-PARC in the Materials and Life Science Facility (MLF) building will comprise four types of muon channels. In the first stage, a conventional superconducting decay muon channel (D-Line) was constructed, which can extract surface (positive) muons with an expected muon yield of 10^7/s and decay positive/negative muons up to 120 MeV/c, with an expected muon yield of a few 10^6/s at 60 MeV/c for both positive and negative muons. This channel will be used for various kinds of muon experiments such as μSR, muon-catalyzed fusion and nondestructive elemental analysis.
Introduction
The new Muon Science Facility (MUSE) [1] is now under construction at J-PARC (Japan Proton Accelerator Research Complex) at the Tokai campus of JAEA (Japan Atomic Energy Agency) in the Materials and Life Science Facility (MLF) building under the collaboration between KEK and JAEA. In the J-PARC project, the 3-GeV 333-µA-proton beam from the rapid cycling synchrotron (RCS) will be transported from the RCS ring to the neutron source situated in the MLF building. There, in the M2 primary proton beamline (M2 tunnel), just 33 m upstream of the neutron source, the muon target made of a 2-cm-thick graphite disc for the production of intense pulsed pion and muon beams is located. To make use of the full potential of this new facility, four dedicated muon channels will be constructed. In the MLF building, the experimental area is divided into two parts across the primary proton beamline, namely the experimental hall No. 1 (East) and the experimental hall No. 2 (West). The four secondary muon channels that are originating from the muon target in the M2 tunnel are the surface muon channel (S-line) and the high-momentum muon channel (H-line) in the experimental hall No. 1, and the superconducting decay muon channel (D-line) and the ultra-slow muon channel (U-line) in the experimental hall No. 2. At present, only the D-Line is under construction. The other three secondary muon channels were temporarily closed with iron shields and concrete blocks.
This report gives a brief overview of the D-line design and construction, including the superconducting solenoid, the vacuum system, the interlock system and the experimental areas. A previous report was published in [2]. The new muon profile monitor used in the beam commissioning and tuning is also described.
Superconducting Decay Muon Channel
The layout of the D-Line in the experimental hall No. 2 of the MLF building is shown in Fig. 1. It consists of three parts: (1) a pion injection section, (2) a superconducting decay solenoid, and (3) a muon extraction section. Two experimental areas are planned for simultaneous use. This conventional superconducting decay muon channel can extract surface (positive) muons with an expected muon yield of a few 10^7/s and decay positive/negative muons up to 120 MeV/c, with an expected muon yield of a few 10^6/s at 60 MeV/c for both positive and negative muons. The overall transport efficiency is estimated with TURTLE. This channel will be used for various kinds of muon experiments such as µSR, muon-catalyzed fusion and nondestructive elemental analysis.
(1) Pion injection: A quadrupole triplet (DQ1-2-3) is placed 65 cm from the graphite target and can accept pions in a solid angle of 65 msr. The following bending magnet (DB1) transports pions of up to 250 MeV/c to the solenoid. Therefore, kaon-decay muons from the target can also be extracted by upgrading the muon extraction section to 250 MeV/c. The coils of the quadrupole triplet and the bending magnet are made of MIC for the hard radiation environment. The magnets are installed directly in the M2 primary proton beamline.
(2) Superconducting decay solenoid: To obtain a high-intensity decay muon beam, a superconducting solenoid magnet (DSOL) is employed for the pion-to-muon decay section. The basic design is similar to those used in the muon beamlines at KEK-MSL, TRIUMF and RIKEN-RAL. The decay solenoid consists of twelve units of superconducting coils with a bore radius of 6 cm and a length of 50 cm. The applied magnetic field is 5 T. The pions and muons are confined within a radius of 5 cm and are therefore transported without any significant loss. The old solenoid that was used at KEK-MSL has been modified for this purpose. The magnet coil is forced-indirectly cooled by supercritical helium gas supplied from an on-line helium refrigeration system. To achieve muon extraction at low momentum, this cold-bore magnet is directly connected to the muon beamline using only thin thermally insulating aluminum foils, 12.5 µm thick, at the entrance and exit of the superconducting solenoid magnet, respectively. An insertion device has been developed for the installation of the superconducting solenoid, because it is set between the M2 tunnel and the experimental hall, as shown in Fig. 1. Two sets of linear guides were adopted for the horizontal transport motion to keep good reproducibility. Six iron blocks, with a total weight of about 50 tons, are also set on the insertion device for radiation protection from the M2 primary beamline.
(3) Muon extraction: The extraction section can transport muons up to 120 MeV/c. A magnetic kicker will be installed in the near future for single-pulse experiments and simultaneous use of the two experimental areas. The major components of the old decay muon channel at KEK-MSL, such as the bending magnets, the quadrupole magnets and the DC separator, are being re-used. Three Q-triplets (DQ4-5-6, DQ7-8-9 and DQ13-14-15) and two bending magnets (DB2 and DB3) are used to transport the muon beam to the experimental area D2. The D1 leg downstream of DB3, which leads to the experimental area D1 and is equipped with two Q-triplets (DQ10-11-12 and DQ16-17-18) and a beam-slicing kicker, is constructed by JAEA [3]. All of the magnets in the experimental hall, except those prepared by JAEA, were formerly used at KEK-MSL in Tsukuba. After refurbishment and modification, the magnets have been successfully installed on the D-line. The magnetic poles of both DB2 and DB3, which are made of SS400, were newly fabricated. In particular, the magnetic pole of DB3 was designed so that replacement with a septum magnet in the near future can be easily carried out. The field mapping of both bending magnets was examined after reassembly, showing that the uniformity of the BL integral is within 2% for DB2 and 5% for DB3, respectively. The assembled Q-triplet, which shares a common star-shaped beam duct, was designed with crane handling in mind. The magnets are settled on a common support and placed on a pre-aligned base plate, so that the assemblies can be remotely installed and uninstalled. When placing the triplet magnets, they are guided and precisely positioned using pivots, which are secured on the base plate, in a similar manner to that of the M2 primary beamline magnets.
Superconducting Solenoid Refrigeration System
The solenoid magnet coil is forced-indirectly cooled by supercritical He gas (4.8 K at 1.0 MPa) supplied from an on-line helium refrigeration system (TCF 50) for long-term stable operation. An 80 K copper thermal shield is positioned between the 6 K shield tube and the warm iron cryostat vessel. The 6 K and 80 K thermal shields are supported by insulation rods extending from the cryostat wall. The 80 K shield is cooled by He gas taken from the intermediate heat exchanger in the cold box. The helium refrigeration system consists principally of the following: (1) a He buffer tank (20 m³, 0.95 MPaG), (2) a cooling tower and a water cooling pump, (3) a helium gas screw compressor (Kaeser), (4) an after cooler, (5) an oil separator, (6) load, unload and bypass valves, (7) a cold box, (8) a VME-based digital control device, (9) a PC system for control and data logging, and (10) a cryo panel for safety interlock control. The screw compressor supplies high-pressure He gas (0.85 MPa) to the cold box. The cold box is designed to supply the various types of He requested by the superconducting solenoid. Before the cool-down from room temperature is started, the oxygen concentration of the circulating He gas is reduced to less than 50 ppm by operating the He gas purifier.
The cooling power is 35 W at 4.5 K and 200 W at 80 K, and the system can produce 8 l/h of liquid He. The whole system is monitored and controlled by a VME controller combined with a personal computer running dedicated software based on LabVIEW, and cools down automatically. The typical cooling period from room temperature to the operating temperature (∼4 K) is about 4 days. Long-term (3-month) operation has been established under quite stable conditions.
To protect against any serious damage to the superconducting coils and the He refrigeration system, a VME based interlock system is installed. The system can detect any anomaly in the voltage of the superconducting solenoid, the power lead temperature and the trip signal from the He refrigeration system. Once any emergency status is detected, the electric power supply immediately stops the current supply and the refrigeration system changes to self-operation mode. This system can also record the temperatures and pressures of the superconducting coils, the cryostat and the power leads.
Vacuum System
The block diagram of the vacuum system of the decay muon channel, including a bypass, is shown in Fig. 2. The beam channel is divided into four sections, separated by three gate valves (DGV2, DGV3-1, and DGV3-2; VAT 12150). Each of the sections is pumped separately using turbo molecular pumps with a pumping speed of 0.5 m³/s. It is noted that the section including the superconducting solenoid should have a separate vacuum system from the viewpoint of operating the cryogenic system. In particular, in the event of a quench of superconductivity, tritium gas absorbed through the bypass during the beam time is released due to the temperature rise. Therefore, the exhaust must be led to a vent stack without passing through the other pumping stations. The achieved pressure of the beam channel has been typically ∼1×10^-4 Pa. At the moment, the vacuum chamber of the DC separator (DSEP) is directly connected to the beamline. However, in order to prevent the high-voltage gap from vacuum discharging, it is designed so that the pressure can be controlled independently by inserting beam foils at both sides of the chamber. A fast closing valve (DFCV; VAT 75046) is installed downstream of DGV2 to protect the vacuum of the primary beamline in the event of a sudden vacuum loss. Two FV-type pressure sensors are attached, one on each of the two beam legs, and respond within 1 ms to close the DFCV. If closing of the DFCV is detected, DGV2 is also automatically closed. In the case that either of the two experimental areas is in a maintenance condition, the corresponding pressure sensor is
Interlock System
To ensure safe operation of the facility, a number of safety systems have been incorporated into the interlock system. For example, to prevent users from being exposed to excess doses of radiation, entry to the experimental areas is strictly controlled. In addition, proper and safe operation of the machines is also necessary. To ensure this, all the muon experimental instruments are controlled by means of the Muon Control System, whose purpose is to control and monitor the safe operation of all the muon instruments. In particular, the personal protection system (PPS) is in place to prevent personnel from being exposed to high levels of radiation, and is the highest-priority interlock system at J-PARC. The Muon Control System is related to the operation of the muon target, the D-line, and the D1 and D2 experimental area entry systems. In particular, the experimental area entry system is categorized as part of the PPS. All these components worked successfully on Day 1. The experimental area entry system diagram is shown in Fig. 3 (top). In this system, the status of several PPS-related components is monitored. For example, the status of the beam blockers is monitored, which is important to ensure the safety of the personnel. When the muon blocker is opened or the DB3 magnet, which acts as a switching magnet, is turned on, the PPS controller does not allow the area entry door to be opened. This logic guarantees safe operation for the experimenter. Figure 3 (bottom) shows a view of the area entry door with the muon blocker controller (upper) and the door controller (lower). The blocker controller has five keys, four of which are used as personal keys. Unless all the keys are returned to the muon blocker controller, the experimental area door cannot be locked. The operation of the muon blocker can be performed from this muon blocker controller. This system is shared with the neutron shutter control system. The following procedure is necessary to enter an experimental area: an experimenter first turns off the switching magnet and closes the muon blocker, then moves a key from the blocker controller to the door controller and unlocks the door. The area entry door can now be opened.
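The door-release logic described above can be summarized in a small sketch; the function and variable names below are illustrative placeholders, not the actual PPS controller implementation.

# Hypothetical sketch of the PPS entry-door logic described in the text.
def door_may_be_opened(muon_blocker_closed: bool, db3_magnet_off: bool) -> bool:
    # The PPS releases the area entry door only when the muon blocker is closed
    # and the DB3 switching magnet is turned off.
    return muon_blocker_closed and db3_magnet_off

def area_may_be_locked(keys_returned: int, personal_keys: int = 4) -> bool:
    # The experimental area door cannot be locked unless all personal keys
    # are back in the muon blocker controller.
    return keys_returned == personal_keys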
Experimental Areas and Cabin
The decay muon channel is divided into two legs after the bending magnet DB3 (see Fig. 1), each leg leading to an experimental area, D1 and D2, respectively. Between the two experimental areas, a cabin was installed to house the electronics room and the counting room. Pictures of the two experimental areas and the cabin, including detailed views of the experimental area D1, the experimental area D2, and the interior of the cabin, are shown in Fig. 4. The experimental area D1 is occupied by a beam slicer with slits, a quadrupole magnet triplet, and a µSR spectrometer for µSR experiments. A more detailed description of this µSR experimental apparatus is reported in these proceedings [3]. The experimental area D2 is presently used for commissioning and beam optics tuning. Each experimental area is equipped with electrical power outlets (AC 100 V, 200 V and 420 V) and a cooling water supply. In the future, pressurized air, a helium recovery line and an exhaust line will also be installed. The cabin is divided into two parts, the electronics room and the counting room. The electronics room is air-conditioned and equipped with five 19-inch racks for the electronics modules of the data-taking system. The cabin will be used by users performing experiments in either of the experimental areas.
Beam Tuning
A new muon beam profile monitor (MBPM) was constructed for the commissioning of the decay muon channel and for beam optics tuning. The design objective for this new beam profile monitor was similar to that of the monitor developed at RIKEN-RAL with Dr. K. Ishida [4], namely to obtain both horizontal and vertical beam profiles simultaneously using surface muons (µ+), which have a momentum of only 27 MeV/c and therefore a very short range in matter. The schematic view of the beam profile monitor is shown in Fig. 5. Each axis comprises fifteen (15) scintillators (4 mm wide and 0.5 mm thick, in 10-mm steps) mounted on a light guide and connected, each with seven φ1-mm optical fibers, to a multianode (4×4) photomultiplier (Hamamatsu type H6568-200-10MOD). Each scintillator is wrapped with two layers of 6-µm-thick aluminized Mylar; in addition, the front and the back of the central area are covered with an additional layer. This proved sufficient to reduce light leaks to an acceptable level without significantly slowing down the incoming muons. Each photomultiplier output was sent to a charge ADC (LeCroy model 2249W) with a time gate of 200 ns to be digitized. The data acquisition program used at RIKEN-RAL, which is based on EXP2K and was modified by Dr. K. Ishida for the MBPM, was used to process the data and fill the histograms. A picture of the new muon beam profile monitor is shown in Fig. 6.
As an example of the performance of this beam profile monitor, the horizontal and vertical muon beam profiles obtained with surface muons at the experimental area D1 for different DB3 bending magnet settings are shown in Fig. 7. Each profile can be obtained with just 1000 spills, or about 40 seconds at 25 Hz beam operation. Figure 8 shows the different beam profile parameters that are generated automatically after each profile is completed; a sketch of such a calculation is given below. This graphical interface was found to be very useful for quickly visualizing the response of the beam to each magnet and for optimizing the beamline parameters.
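As an illustration of how such profile parameters can be derived from the fifteen scintillator counts per axis, the following sketch computes a centroid and an RMS width. The 10-mm pitch comes from the monitor description, while the position of the first channel, the RMS width estimator, and the example counts are assumptions made only for this illustration.

```python
import numpy as np

def profile_parameters(counts, pitch_mm=10.0, first_pos_mm=-70.0):
    """Centroid and RMS width of a 15-channel beam profile."""
    counts = np.asarray(counts, dtype=float)
    x = first_pos_mm + pitch_mm * np.arange(counts.size)   # scintillator positions
    total = counts.sum()
    mean = (x * counts).sum() / total
    rms = np.sqrt(((x - mean) ** 2 * counts).sum() / total)
    return mean, rms

# Hypothetical horizontal counts for one DB3 setting:
# mean_x, width_x = profile_parameters(
#     [3, 8, 21, 55, 120, 210, 290, 310, 280, 200, 115, 50, 19, 7, 2])
```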
In the future, the three slit boxes used along the D-line to select the muon beam momentum and control the muon beam size will each be followed by a compact muon beam profile monitor, placed just behind the slits, for beam optics tuning. Until now, muon beam tuning has only been performed at the end of a muon channel by looking at the total muon intensity or by measuring the muon beam profile. A vacuum version of this new MBPM using multianode PMTs and optical fibers is now under consideration.
Concluding Remarks
At the end of the commissioning beam time of the experimental area D1 in December 2008, the muon beam profile shown in Fig. 9 was obtained. The figure on the left shows the beam profile measured with the new muon beam profile monitor, and the one on the right the profile measured with an imaging plate. The beam tuning is not yet fully optimized, and the intensity is somewhat lower than expected. Further beam tuning is scheduled to optimize the beam transport efficiency as the power of the primary proton beam is gradually increased.
Clinical features of Parkinson’s disease with and without rapid eye movement sleep behavior disorder
Background Rapid eye movement sleep behavior disorder (RBD) and Parkinson's disease (PD) are two distinct clinical diseases, but they share some common pathological and anatomical characteristics. This study aims to confirm the clinical features of RBD in Chinese PD patients. Methods One hundred fifty PD patients were enrolled from the Parkinson's Disease and Movement Disorders Center in the Department of Neurology, Shanghai General Hospital, from January 2013 to August 2014. This study examined PD patients with or without RBD as determined by the REM Sleep Behavior Disorder Screening Questionnaire (RBDSQ), assessed motor subtype by Unified PD Rating Scale (UPDRS) III at the "on" state, and compared the sub-scale scores representing tremor, rigidity, and appendicular and axial involvement. Investigators also assessed the Hamilton Anxiety Scale (HAMA), Hamilton Depression Scale (HAMD), Mini-Mental State Examination (MMSE), Clinical Dementia Rating (CDR), and Parkinson's Disease Sleep Scale (PDSS). Results One hundred forty-one PD patients entered the final study; 30 (21.28%) had probable RBD (pRBD), diagnosed with an RBDSQ score of 6 or above. There were no significant differences in age (including age of PD onset and PD duration), gender, smoking status, alcohol or coffee use, presence of anosmia or freezing, UPDRS III, or H-Y stage between the pRBD+ and pRBD− groups. The pRBD+ group had lower MMSE scores and higher PDSS scores, and pRBD+ PD patients had a higher proportion of anxiety, depression, constipation and hallucination, and a greater prevalence of orthostatic hypotension. Conclusion pRBD+ PD patients exhibited greater changes in non-motor symptoms. However, there was no increase in motor deficits.
Background
Rapid eye movement sleep (REM) is characterized by decreased or absent muscle tone (atonia), desynchronization of the electroencephalogram, with the presence of saw tooth waves, and autonomic instability. Rapid eye movement sleep behavior disorder (RBD) is a form of parasomnia during which patients develop limb or body movements, which correlate with dream enactment behavior. The abnormal physiology of RBD is loss of muscle atonia (paralysis) during otherwise intact REM sleep [1].
The standard RBD diagnostic criteria are based on the 2nd edition of the International Classification of Sleep Disorders (ICSD) [2], and polysomnography (PSG) is necessary for a definitive diagnosis. Nomura et al. determined that the RBD rapid screening questionnaire (RBDSQ), which is completed by the patient, had a sensitivity of 84.2% and a specificity of 96.2% for diagnosing RBD in PD, compared with the standard PSG-based criteria, at a cut-off of 6 points (the total RBDSQ score is 13, and a score of 5 is the cut-off point for healthy individuals) [3]. Chahine et al. investigated the use of the RBDSQ plus the Mayo Sleep Questionnaire 1 (MSQ1) compared with PSG in PD patients. They found that sensitivity was highest when the questionnaires were used in combination, while specificity was highest for the RBDSQ used alone at a cut-off point of 7 [4]. Shen et al. used the RBDSQ to diagnose RBD in Chinese patients, compared it with PSG, and found that a cutoff point of 6 had the best specificity and sensitivity [5].
RBD has a close relationship with neurodegenerative diseases, especially those with α-synucleinopathy pathology, such as PD, dementia with Lewy bodies (DLB) and multiple system atrophy (MSA) [1,6]. Recently, studies have indicated that PD patients with RBD may show certain clinical features more commonly than those without RBD. Although the data are mixed, PD patients with RBD have been reported to have worse decision-making [7], cognitive impairment [6], freezing, falls and rigidity [8,9]. They also have a higher prevalence of orthostatic hypotension (OH) [8] and visual hallucinations (VH) [10]. However, there are no detailed data on PD patients with RBD in China. In this paper, we investigated the clinical features of PD patients with RBD in a tertiary referral center in China. The present study focused on the characteristics of motor and non-motor symptoms of PD with RBD compared to PD without RBD.
Patient selection
One hundred fifty PD patients were enrolled from the Parkinson's disease and Movement Disorder Center in the Department of Neurology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, China from January 2013 to August 2014. The diagnosis of PD was made by two movement disorder specialists according to the UK Parkinson's Disease Society (UKPDS) Brain Bank Criteria. Patients with severe dementia (CDR ≥ 2), or other central nervous system disorders were excluded from this study. The study was approved by the Institution's Ethics Committee and all recruited patients consented to participate in the study.
Patient evaluation
Of the 150 patients, 5 were excluded because of probable DLB, 3 because of probable MSA, and 1 because of vascular parkinsonism. A total of 141 patients were enrolled in the study. Patient evaluation was performed by movement disorder specialists. Parkinsonism staging was evaluated according to the Hoehn & Yahr staging scale. Part III of the Unified PD Rating Scale (UPDRS III) was performed during the "on" state. Motor subtype was analyzed by predominance of tremor or rigidity (UPDRS sub-scores) and predominance of limb or axial features (UPDRS sub-scores) [9]. The Hamilton Anxiety Scale (HAMA; cut-off point ≥8), Hamilton Depression Scale (HAMD; cut-off point ≥8) and Mini-Mental State Examination (MMSE) were used to evaluate the patients' mood and cognitive state. Patients whose MMSE score was lower than 17 (illiteracy), lower than 20 (primary school level) or lower than 24 (middle school level or higher) [11] were considered to have dementia and were evaluated using the Clinical Dementia Rating (CDR). Patients whose CDR was higher than 2 were excluded from the study. Investigators used the REM Sleep Behavior Disorder Screening Questionnaire (RBDSQ) to detect clinically probable RBD (pRBD). We set the RBDSQ cut-off point at a score of 6, according to the highest sensitivity and specificity determined by a previous study [5]. The Parkinson's Disease Sleep Scale (PDSS) was used to evaluate patients' sleep quality. Orthostatic hypotension (OH) was screened using a simple question: "Do you feel dizziness or weakness when you stand up?" If the answer was "yes", blood pressure was measured from the supine to the standing position; a fall in systolic blood pressure of ≥20 mmHg, or in diastolic blood pressure of ≥10 mmHg, was diagnosed as OH. Other items, including smoking, alcohol and coffee consumption, hyposmia or anosmia, and constipation, were also documented by questioning. Patients continued with their prescribed treatment regimen and used anti-Parkinson drugs, antihypnotics or antidepressants as necessary.
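The screening rules described above can be summarised by the following sketch; the function and argument names are illustrative, and the education categories are a simplified encoding of the MMSE thresholds quoted in the text.

```python
def probable_rbd(rbdsq_score, cutoff=6):
    """RBDSQ-based screening for probable RBD (cut-off of 6, as in the text)."""
    return rbdsq_score >= cutoff

def needs_cdr_evaluation(mmse_score, education):
    """Education-adjusted MMSE thresholds below which dementia is suspected."""
    cutoff = {"illiteracy": 17, "primary_school": 20, "middle_school_or_higher": 24}
    return mmse_score < cutoff[education]

def orthostatic_hypotension(supine_sbp, standing_sbp, supine_dbp, standing_dbp):
    """OH: fall of >= 20 mmHg systolic or >= 10 mmHg diastolic on standing."""
    return (supine_sbp - standing_sbp) >= 20 or (supine_dbp - standing_dbp) >= 10
```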
Statistical analysis
The data were analyzed using the Statistical Package for the Social Sciences (SPSS) 19 (IBM Co., USA). Data are presented as means, counts and percentages, and adjusted differences in means. Analysis of descriptive variables was performed using two-tailed t-tests; Mann-Whitney U tests and χ² tests were used where appropriate. A P-P plot was used to test for normal distribution. A p value <0.05 was considered significant.
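A minimal sketch of the group comparisons described here, assuming SciPy is available; the variable names and the example 2 × 2 table are hypothetical.

```python
import numpy as np
from scipy import stats

def compare_continuous(values_pos, values_neg, normally_distributed=True):
    """Two-tailed t-test for normally distributed variables, otherwise
    Mann-Whitney U, as described in the statistical analysis."""
    if normally_distributed:
        _, p = stats.ttest_ind(values_pos, values_neg)
    else:
        _, p = stats.mannwhitneyu(values_pos, values_neg, alternative="two-sided")
    return p

def compare_categorical(table_2x2):
    """Chi-square test for categorical features (e.g. constipation yes/no)."""
    _, p, _, _ = stats.chi2_contingency(np.asarray(table_2x2))
    return p

# Example with hypothetical counts of OH in the pRBD+ and pRBD- groups:
# p_oh = compare_categorical([[12, 18], [20, 91]])
```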
Demographics
Among the 141 patients, 74 were male (52.48%): 18 (60%) in the pRBD+ group and 56 (50.45%) in the pRBD− group (p = 0.655). Thirty patients (21.28%) were diagnosed with probable RBD (pRBD) based on an RBD screening questionnaire score ≥ 6. If the cut-off score was set at 7 or 5, the incidence of RBD was 17.02% and 26.24%, respectively, with little difference in the clinical features (Table 3). The mean age was 68.33 ± 8.76 years in the pRBD+ group versus 69.32 ± 9.75 years in the pRBD− group (p = 0.618). Mean PD duration was 4.13 ± 4.216 years in the pRBD+ group and 4.65 ± 3.570 years in the pRBD− group (p = 0.5). Smoking, alcohol and coffee consumption were infrequent in both groups (NS). There was no difference in the numbers of patients who took levodopa or dopamine agonists, and the levodopa equivalent dosage was similar in both groups. However, the pRBD+ group had greater antidepressant and anti-hypnotic use (Table 1).
Motor symptoms
The UPDRS III score was similar in the pRBD+ and pRBD− groups (26.93 ± 14.62 vs 23.68 ± 15.93, p = 0.174). Mean H-Y stage was also similar between the two groups (2.40 ± 0.90 vs 2.26 ± 0.90, p = 0.299). When limb scores in UPDRS III were compared with axial scores, the ratio showed no difference between the pRBD+ and pRBD− groups (5.67 ± 4.22 vs 6.17 ± 4.61, p = 0.734). When rigidity scores in UPDRS III were compared with resting tremor scores, the ratio was likewise not significantly different (3.01 ± 4.06 vs 1.79 ± 2.08, p = 0.32). Overall, there was no difference between the pRBD+ and pRBD− groups in motor severity or motor subtype (Table 2). Freezing was also not different between the two groups (26.7% vs 22.52%, p = 0.75). Regarding non-motor features, the pRBD+ PD patients had significantly higher rates of anxiety, depression and visual hallucinations, a higher mean PDSS score, and more prominent constipation and OH, while dementia did not differ significantly between the two groups.
Discussion
Probable RBD is common in early PD and predicts future cognitive decline, particularly in the attention and memory domains [12]. The pedunculopontine nucleus (PPN) and the locus coeruleus (LC)/dorsal subcoeruleus (subCD) are compromised in both PD and RBD [13][14][15]. Autopsy studies show that the loss of cholinergic neurons of the PPN in PD has a significant negative correlation with the modified Hoehn and Yahr stage [16] and contributes to freezing and falls [16,17]. Dysfunction of the PPN relates to visual hallucinations (VH) [15]. A resting-state functional connectivity MRI (rs-fcMRI) study in RBD patients showed reduced connection between the lateral geniculate nuclei (LGN) and the visual association cortex [18]. PD patients with probable RBD showed smaller volumes than patients without RBD and than healthy controls in the pontomesencephalic tegmentum, where cholinergic, GABAergic and glutamatergic neurons are located; this is additionally associated with more widespread atrophy in other subcortical and cortical regions [19]. Basal ganglia activity is altered across the sleep-wake cycle in RBD [20]. The appearance of RBD in PD may be related to regional gray matter changes in the left posterior cingulate and hippocampus and not localized to the brain stem [21]. Our study compared motor and non-motor symptoms in PD patients with and without RBD. The results were similar whether we used an RBDSQ cut-off point of 6 or 7 (Table 3); with a cut-off of 5, only the hallucination rates no longer differed between the two groups, while the other features were unchanged. Overall motor symptoms and signs were similar, but we found significant differences between the two groups in many aspects of non-motor symptoms, including MMSE performance, visual hallucinations, depression, anxiety, orthostatic hypotension and constipation. Except for constipation, these results are consistent with most previous studies (detailed in Table 4) [7,[22][23][24]. Our findings agree with some studies that found no difference in motor symptoms [13,23]; however, some previous studies indicated that pRBD+ patients showed worse gait and balance or increased dyskinesia (Table 4) [8,9,22,25].
In summary, this study systematically investigated the clinical features of PD patients with RBD. There are several potential weaknesses. We used the RBDSQ to detect RBD in PD patients, which is easier and more readily available than PSG, but the sample size is relatively small and diagnosing RBD without PSG may have produced false negatives and false positives. We diagnosed anosmia and constipation based only on self-report, without objective examination. Neurological imaging and electrophysiology will be valuable for further study.
Conclusions
The present study demonstrated that there were no significant differences in motor deficits in pRBD+ PD patients, whereas non-motor symptoms, such as mood disturbance, sleep problems, constipation, cognitive impairment and orthostatic hypotension, were prominent. However, further studies and laboratory tests are needed to improve the understanding of RBD in PD.
An Endoscope Image Enhancement Algorithm Based on Image Decomposition
The visual quality of endoscopic images is a significant factor in early lesion inspection and surgical procedures. However, due to the interference of light sources, hardware, and other configurations, endoscopic images collected clinically suffer from uneven illumination, blurred details, and low contrast. This paper proposes a new endoscopic image enhancement algorithm. The image is decomposed into a detail layer and a base layer on the basis of noise suppression. Blood vessel information is stretched channel by channel in the detail layer, and adaptive brightness correction is performed in the base layer. Finally, the layers are fused to obtain a new endoscopic image. This paper compares the algorithm with six other algorithms on the laboratory dataset. The algorithm leads in all five objective evaluation metrics, indicating that it is ahead of the other algorithms in contrast, structural similarity, and peak signal-to-noise ratio. It can effectively highlight the blood vessel information in endoscopic images while avoiding the influence of noise and highlight points. The proposed algorithm addresses the existing problems of endoscopic images well.
Introduction
Medical endoscopy is of great significance for early lesion screening and for improving the success rate of surgical operations. Whether for the tracking and detection of wireless capsule endoscopy [1] or the high-precision surgical navigation of AR (Augmented Reality) [2], both are closely tied to the endoscopic image. The visual quality of endoscopic imaging is often affected by the intricacies of the internal structure of the human body, plus factors such as light-source interference [3] and hardware limitations during image acquisition, and the cost of improving the underlying imaging hardware is considerable [4], so we instead improve the results of conventional endoscopic imaging.
Under normal circumstances, uneven illumination and low contrast are the most critical factors affecting the clinical diagnosis of endoscopy [5]. At the same time, further lesion inspection and polyp diagnosis are inseparable from high-quality endoscopic images.
To improve image quality, early researchers made a series of improvements based on gamma correction [6] and the single-scale retinex [7] algorithm. Huang et al. [8] proposed adaptive gamma correction with weighting distribution (AGCWD), which adaptively modifies the mapping curve by normalizing the gamma function to achieve adaptive luminance correction. Jobson et al. [9] proposed Multi-Scale Retinex with a color recovery function (MSRCR) to solve the color distortion and saturation loss that arise in enhanced images. However, these methods alone have difficulty maintaining the brightness and color fidelity of endoscopic images.
Image enhancement algorithms based on multi-exposure fusion are also widely used in image processing [10]. Hayat et al. [11] proposed a multi-exposure fusion technology based on multi-resolution fusion, which effectively solved the problem of image artifacts. Ying et al. [12] proposed an accurate image contrast enhancement algorithm to solve the problem of insufficient contrast in some areas of the image and excessive contrast in some areas, to use illumination estimation technology to design a fusion weight matrix, and to synthesize multi-exposure images through the camera response model.
Histogram equalization (HE) [13] is commonly used for image contrast enhancement due to the ease and straightforwardness of implementation. Still, its application to endoscopic images can lead to noise amplification and over-enhancement problems. To solve the defects of histogram equalization, researchers proposed some algorithms based on histogram equalization improvement. Zuiderveld et al. [14] proposed restricted contrast adaptive histogram equalization (CLAHE) using threshold clipping histogram to prevent over enhancement. Chang et al. [15] proposed quadruple histogram equalization by dividing the image into four sub-images through the mean and variance of the image histogram. However, these methods cannot handle the luminance error and have certain limitations.
In recent years, more and more algorithms for medical image enhancement have been developed. Al-Ameen et al. [16] proposed a new algorithm to improve the low contrast of CT images by adjusting single-scale Retinex and adding a normalized sigmoid function. Palanisamy et al. [17] proposed a framework for enhancing color fundus images that improves luminance using gamma correction and singular value decomposition and improves local contrast using contrast-limited adaptive histogram equalization (CLAHE), adequately preserving detail while improving visual perception. Wang et al. [4] proposed an endoscopic image brightness enhancement technique based on the inverse square law of illumination and retinex to solve problems such as overexposure and color errors arising in endoscopic brightness enhancement: an initial luminance weighting based on the inverse square law of illuminance is designed, and a saturation-based model finalizes the luminance weighting, effectively reducing image degradation caused by bright spots. However, none of these algorithms addresses the defects of endoscopic images in a multifaceted way.
To ensure that the underlying blood vessel details stand out without color distortion, this paper proposes a new endoscopic image enhancement framework: global enhancement with noise suppression; brightness correction of the base layer, separated by a weighted-least-squares decomposition, using an adaptive bilateral gamma function; and selective per-channel adaptive stretching of the separated detail layer with highlight suppression to achieve detail enhancement. The main contributions of this paper are as follows: 1. This paper proposes a novel framework for endoscopic image enhancement that avoids the interference of high brightness and noise in endoscopic imaging by separating a noise layer and a high-brightness mask.
2. According to the characteristics of endoscopic images, an adaptive brightness correction function based on bilateral gamma is proposed, which enhances the brightness of light and dark areas while preventing excessive enhancement of high-light areas.
3. According to the color characteristics of endoscopic images, this paper proposes a detail layer sub-channel processing method, which uses different image weights for each channel for scaling. Then, the detail layer enhancement factor was designed according to the connection before and after the base layer enhancement. It achieves the effect of detail enhancement while highlighting the color features of endoscopic images.
Dataset Introduction
We collected 200 endoscopic images jointly with Hefei Deming Electronics Co., Ltd.
The LEI_D dataset is collected from different departments such as urology, thoracic surgery, oncology, cardiovascular surgery, otolaryngology, nephrology, obstetrics and gynecology, and hepatobiliary and pancreatic surgery. The collected endoscopic images cover different organ tissues and lesions in the human body, including the oral cavity, nostril, bladder, liver, intestine, gallbladder, uterine fibroids, urinary tumors, pituitary adenomas, recurrent maxillary sinus tumors, gastric cancer, rectal cancer, etc. In addition, the dataset also covers some animal tissue images, such as chicken guts, intestines, bronchial tubes, etc.
Proposed Algorithm
The framework flow of the algorithm is shown in Figure 2; it is inspired by the different absorption characteristics of human tissues for different spectra and by a multipath processing mechanism. First, to prevent the effect of noise on microvascular imaging, the input image is decomposed into a structural layer and a noise layer according to a modified total-variation method designed for endoscopic images, and the structural layer is enhanced with noise suppression. Second, to improve image brightness and highlight local vascular information, the structural layer is divided into a base layer and a detail layer by the weighted least squares method: the main brightness information exists in the base layer, and detail information such as blood vessels exists in the detail layer. For brightness, the proposed adaptive bilateral gamma function is used to correct the brightness of the base layer; for detail, since the gains of the green and blue channels are more beneficial for highlighting blood vessel details and the highlighted areas would affect the channel stretching, the three channels are selectively stretched after removing the highlight information. Finally, the improved base, detail, and noise layers are fused to obtain a result with a better visual effect. Figure 2. Framework flowchart; k_TV stands for the total-variation method based on the noise suppression factor, WLS for the weighted least squares method, and canny_guided filter for the weighted guided filter improved with the Canny operator.
Modified Total-Variation to Extract the Noise Layer
In endoscopic images there is usually some noise [18] that affects the visual quality; enhancing the image would amplify this noise, while removing it could destroy information in particularly small blood vessels, so this paper adopts a noise-suppression approach. Before the endoscopic image is enhanced, the corresponding noise layer is extracted by applying the improved global noise estimation for endoscopic images to the total-variation method.
The image edge structure has strong second-order differential properties [19], so the image is sensitive to the noise statistics of the Laplace mask. We use a kernel consisting of two Laplacian masks to participate in the convolution operation (Equation (2)). Conventional noise estimates may contain vascular details since endoscopic images differ from conventional images. We add a noise suppression factor k to the arithmetic mean-based image noise estimation [20] algorithm to control the level of extracted noise. The set global noise parameter θ c is obtained from Equation (1).
where * represents the convolution operator, W and H represent the width and height of the image, I_c is the input image, and k is set to 40 in this paper (the choice of k is analyzed in the experimental section). In the total-variation structure–texture method [21], the input image is modeled as a superposition of a structural layer and a noise layer; the model is shown in Equation (3). Combined with TV (total variation) regularization, the structural layer is obtained by minimizing the objective function. This objective function (4) consists of two components: the first is a data-fidelity term adapted to the texture components, and the second is a regularization term based on the total variation, used to limit the details of the image.
where ∇ denotes the gradient operator and λ_c is the parameter θ_c obtained from the endoscopic global noise estimation described above.
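Since Equations (1) and (2) are not reproduced in the extracted text, the following sketch only illustrates the idea: a fast Laplacian-mask noise estimate over the image, scaled down by the suppression factor k. The exact normalisation and the way k enters Equation (1) are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def global_noise_level(gray, k=40):
    """Suppressed global noise estimate for one channel of an endoscopic image."""
    # Kernel built from two Laplacian masks (fast noise-variance estimator).
    N = np.array([[ 1, -2,  1],
                  [-2,  4, -2],
                  [ 1, -2,  1]], dtype=np.float64)
    h, w = gray.shape
    resp = np.abs(convolve(gray.astype(np.float64), N, mode="reflect"))
    sigma = np.sqrt(np.pi / 2.0) / (6.0 * (w - 2) * (h - 2)) * resp.sum()
    # The suppression factor k lowers the estimate so that fine vessels are
    # not absorbed into the noise layer (how k enters Eq. (1) is an assumption).
    return sigma / k
```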
Weighted Least Squares Decomposition of Images
The weighted least squares (WLS) [22] method is applied to the structural layer to obtain the base layer, and the base layer is then subtracted from the structural layer to obtain the detail layer. Extracting the detail layer with weighted least squares recovers fine detail information while preserving the architecture of the original image. Compared with the artifacts that tend to appear with bilateral filtering and the complexity of choosing a guidance image for guided filtering, the weighted least squares method applied to endoscopic enhancement smooths as much as possible in areas with small gradients and preserves as much as possible the edge regions with strong gradients. After processing, we obtain a base layer carrying the background brightness information of the subject and a detail layer carrying minute detail information such as blood vessels.
Decomposition model and WLS model: where p denotes the position of a pixel, and a_x and a_y control the degree of smoothing at different positions. The first term requires the input and output images to be as similar as possible; the second term is a smoothness term that smooths the output image by penalizing its partial derivatives. λ is a regularization parameter that balances the two terms.
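The WLS model itself is not reproduced above, so the following sketch follows a standard Farbman-style weighted least squares smoother: gradient-dependent weights from the log-luminance, a sparse linear system (I + L)u = g, and the detail layer as the residual. The parameter values and the log-domain weights are assumptions; for large images an iterative solver would be preferable to the direct solve shown here.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def wls_base_detail(lum, lam=1.0, alpha=1.2, eps=1e-4):
    """Edge-preserving WLS smoothing of a luminance image; returns (base, detail)."""
    h, w = lum.shape
    n = h * w
    log_l = np.log(lum + eps)
    idx = np.arange(n).reshape(h, w)

    rows, cols, vals = [], [], []
    diag = np.zeros(n)

    # Horizontal and vertical neighbour pairs with gradient-dependent weights.
    pairs = [
        (idx[:, :-1].ravel(), idx[:, 1:].ravel(),
         np.abs(np.diff(log_l, axis=1)).ravel()),
        (idx[:-1, :].ravel(), idx[1:, :].ravel(),
         np.abs(np.diff(log_l, axis=0)).ravel()),
    ]
    for p, q, grad in pairs:
        wgt = lam / (grad ** alpha + eps)   # large weight where the gradient is small
        rows += [p, q]
        cols += [q, p]
        vals += [-wgt, -wgt]
        np.add.at(diag, p, wgt)
        np.add.at(diag, q, wgt)

    rows.append(idx.ravel())
    cols.append(idx.ravel())
    vals.append(1.0 + diag)                 # data term + graph-Laplacian diagonal

    A = sparse.csc_matrix(
        (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
        shape=(n, n))
    base = spsolve(A, lum.ravel()).reshape(h, w)
    detail = lum - base
    return base, detail
```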
Adaptive Bilateral Gamma Correction Brightness
Observation of a series of endoscopic images shows a problem of uneven illumination; in particular, information in dark areas is not well highlighted. To improve the brightness of dark areas without over-enhancing bright areas or changing the color, this paper uses a bilateral gamma function to correct the luminance channel of the base layer and proposes adaptive weights for the image pixels. Figure 3 shows the flow chart of the luminance correction.
Bilateral gamma functions: where V is the base-layer luminance channel in HSV color space and V′ is the corrected luminance channel. The internationally recommended γ value is 2.5.
As shown in Figure 4, the curve above y = x is the graph of V1 as a function of γ, and the curve below y = x is the graph of V2. As γ increases, the curves become steeper, indicating an increasingly drastic change. The V channel is corrected separately by the two bilateral gamma functions to obtain two images, V1 and V2: V1 is a brighter image produced by simple gamma correction, and V2 is a darker image produced by negative gamma correction. When the two images are fused, brighter regions should therefore receive lower weights and darker regions higher weights. Combining this with the shape of the sigmoid curve, this paper first proposes the adaptive weights α1.
Figure 4. The function curves for γ values of 2.2, 2.5, and 2.8. The curve above y = x is V1 transformed with γ, and the curve below y = x is V2 transformed with γ. The red curves correspond to γ = 2.2, the green curves to γ = 2.5, and the blue curves to γ = 2.8.
The resulting functions, weights, and stretching effects are plotted in Figure 5. It can be seen that the image brightness deviates considerably: the brightness is enhanced, but by much more than expected, and some areas of the image become too bright or too dark, which blurs the framework of the base-layer subject. Sigmoid-type functions are therefore not suitable for this stretching, and α1 is improved. Using the transformation trend of the arctan curve, this paper proposes the adaptive weights α2.
By comparing the base layer obtained by the α2 function with the original base layer, it can be seen that this rendering has achieved a satisfactory result, and the brightness of the light and dark areas has been improved, while the brightness of the bright areas has not undergone excessive enhancement.
To reduce the algorithm's complexity while optimizing its accuracy, the value of γ is obtained more precisely by the following steps.
Step 2: Define a new comparison metric F, which is a combination of two performance metrics, MSE (Mean Squared Error) [23] and SSIM (structural similarity index) [24].
Step 3: The images are subjected to adaptive bilateral gamma correction as described above; the indicator value F is recorded as γ is varied in steps of 0.1, and the average indicator F_ave corresponding to each γ value is calculated (i corresponds to the image serial number).
Step 4: The optimal γ is selected by comparison. For the endoscopic image dataset, the optimal parameter obtained in this way is γ = 2.2.
From the design process of this module, it can be seen that the adaptive bilateral gamma correction function performs a reasonable scaling transformation of each pixel value of the base-layer image by assigning a corresponding weight to each pixel for bilateral fusion. This method avoids over-enhancement and over-compression in local image areas during brightness correction, so the problem of uneven brightness in endoscopic images is effectively addressed.
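The exact forms of the bilateral gamma functions and of the arctan-based weights α2 are not given in the extracted text, so the following sketch only illustrates the mechanism under stated assumptions: V1 = V^(1/γ) as the brightening branch, V2 = V^γ as the darkening branch, and an arctan-shaped weight that decreases with pixel brightness; γ = 2.2 and ϕ = 0.6 follow the values selected in the text.

```python
import numpy as np
import cv2

def adaptive_bilateral_gamma(base_bgr, gamma=2.2, phi=0.6):
    """Sketch of the adaptive bilateral gamma brightness correction (assumptions noted above)."""
    hsv = cv2.cvtColor(base_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[..., 2] / 255.0

    v_bright = v ** (1.0 / gamma)          # brightening branch (curve above y = x)
    v_dark = v ** gamma                    # darkening branch (curve below y = x)

    # Arctan-shaped adaptive weight: close to 1 in dark regions, smaller in bright ones.
    w = 1.0 - np.arctan(phi * np.pi * v) / (np.pi / 2.0)
    v_corr = w * v_bright + (1.0 - w) * v_dark

    hsv[..., 2] = np.clip(v_corr * 255.0, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```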
Highlight Detail Layer Information
To prevent the highlight areas of the endoscopic image from being contrast-enhanced in the detail layer, we extract the highlights before the detail-layer processing and restore the highlight areas after the processing is completed. Before making the highlight mask, the original image is pre-processed for enhancement (16). The purpose of this enhancement is to make the reflective areas more visible and to reduce the associated interference. This paper adopts the theory [25] that a reflective pixel's brightness Y (luminance) is greater than its color brightness y (chromatic luminance). The image is converted from RGB to CIE-XYZ space to obtain Y, and y is then computed according to the formula; the area where Y is greater than y is extracted as the highlight area.
Pre-processing enhancement model.
As shown in Figure 6, the extracted mask corresponds to the high-brightness areas of the image. If these areas are not separated by the highlight mask, they are overly enlarged in the detail layer, blurring edges and degrading the visual quality of the image. For endoscopic images, separating the three channels shows that the R channel has few distinct vascular features, while the G and B channel planes show visible margins and lesion borders. Blue light is most suitable for enhancing superficial mucosal structures and detecting minor mucosal changes, and green light is relatively more suitable for enhancing thick blood vessels in the middle layer of the mucosa [26]. Therefore, the blue and green components are more advantageous for extracting endoscopic image information. To ensure consistency of the image structure while highlighting vascular details, a stretch factor w_c is set based on the relationship between the base layer before and after enhancement. The R channel remains unchanged, and the following stretching model enhances the G and B channels.
where norm denotes normalization, std is the standard deviation of the image, and Z(x, y) is a weighted guided filter based on the improved Canny operator. For endoscopic images, conventional filtering [27] over-smooths edge information, i.e., the texture and vascular detail of the image are lost. This paper therefore proposes a weighted guided filter improved with the Canny operator (CWGIF). The main change is to use an edge weight computed from the Canny operator to replace the window-based local variance used in the original weighted guided filter (WGIF).
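Because the stretching model and the exact definition of w_c are not reproduced above, the following sketch is only a stand-in that respects the stated behaviour: the R channel is untouched, highlighted pixels are excluded, and the G and B detail channels are scaled by a factor derived from the base layer before and after enhancement (here, a ratio of standard deviations, which is an assumption).

```python
import numpy as np

def stretch_detail_channels(detail_bgr, base_before, base_after, highlight_mask):
    """Per-channel detail stretching sketch (B and G scaled, R unchanged)."""
    out = detail_bgr.astype(np.float32).copy()
    keep = ~highlight_mask.astype(bool)                 # pixels outside the highlight mask
    w_c = np.std(base_after[keep]) / max(np.std(base_before[keep]), 1e-6)
    for ch in (0, 1):                                   # OpenCV channel order: 0 = B, 1 = G
        out[..., ch][keep] *= w_c
    return out
```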
The original edge weighting factor is computed as follows: where σ²_{G,1}(p) is the variance of a 3 × 3 window of radius 1 centered at p, and ε is a constant taking the value (0.001L)².
The edge weighting factor improved with the Canny operator is computed as follows: where C(p) is the Canny edge response at pixel p.
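Equation (22) and its Canny-based replacement are not shown explicitly, so the sketch below only illustrates the substitution: a local Canny response (box-filtered over a 3 × 3 window) replaces the local variance, and the weight of a pixel is its regularised response relative to the image mean. The Canny thresholds and the averaging radius are assumptions.

```python
import numpy as np
import cv2

def cwgif_edge_weight(guide_gray_u8):
    """Canny-based edge weighting factor for the improved WGIF (sketch)."""
    L = 255.0                                   # dynamic range of an 8-bit guidance image
    eps = (0.001 * L) ** 2
    edges = cv2.Canny(guide_gray_u8, 50, 150).astype(np.float32)
    local_c = cv2.boxFilter(edges, -1, (3, 3))  # local (3 x 3) Canny response C(p)
    gamma = (local_c + eps) / np.mean(local_c + eps)
    return gamma                                # > 1 near edges, roughly constant in flat areas
```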
The weighted guided filtering model based on the Canny operator is as follows.
where a_k and b_k are the parameters to be solved, and G(i) is the guidance image (corresponding to the G and B single-channel maps). Using the edge weights (Equation (22)), the cost function is as follows.
The parameters a_k and b_k can be calculated by applying the least-squares method to Formula (24).
where μ_k and σ²_k are the mean and variance of the single-channel guidance image G within the window W_k, |w| is the total number of pixels within W_k, and p̄_k is the grayscale mean of the input single-channel image p within W_k. As shown in Figure 7, both WGIF and CWGIF smooth the gradient in normal areas (standard tissue regions of the human body in the endoscope), but the blue curve stays closer to the red curve in areas with large gradients (vascular and arterial texture regions), indicating that CWGIF preserves these regions better. The figure is obtained by plotting one row of the image in a two-dimensional coordinate system, where red is the input image, green the result of the WGIF method, and blue the result of the CWGIF method.
Experimental Environment
The experiment's environment is as follows: the CPU is an 11th Gen Intel(R) Core(TM) i5-11400F @ 2.60 GHz 2.59 GHz, Windows 10 is the operating system, and Matlab R2019b is the test algorithm platform.
Noise Rejection Factor k
When a global noise estimation technique is applied directly to endoscopic images, vascular detail information can be wrongly treated as noise, so a noise rejection factor k is added to maximize noise separation while protecting vascular details. Initially, we set k to ten, and its value was then examined based on the Canny operator [28] together with the edge preservation index (EPI) [29] and the peak signal-to-noise ratio (PSNR) [23]. Figure 8a shows the Canny edge maps of the structural layer for image1 and image3 at k = 40 and k = 50, and Figure 8b shows the PSNR of the structural layer and its EPI relative to the original image (for image1). As shown in Figure 8, the EPI and PSNR between the resulting structural layer and the original image gradually increase as k increases, but at k = 50 some noise and light patches are identified as pseudo-edges. Combining the analysis of multiple images, the k value for endoscopic images was set to 40.
Setting of Parameter ϕ in Brightness Correction
As shown in Figure 9, as the value of ϕ gradually increases, the average brightness of the image increases and the histogram mass shifts toward high pixel values. Setting ϕ to 0.6 gives a brightness enhancement that is easy for the human eye to perceive and judge.
Subjective Analysis
Endoscopic images are mainly used by physicians to analyze and judge the images of blood vessels and organ tissues collected from patients to identify abnormal areas. As a result, the enhanced image should keep the image's brightness, color, and naturalness while emphasizing the details of lesions and blood vessels to meet the typical observation range of the human eye. Consider the following photographs of different human tissues as examples.
AGCWD over-enhanced localized areas when enhancing endoscopic images. The high bright spot area in Figure 10a is over-enhanced, and the vascular features around the bright spot are nearly unnoticeable. These phenomena are better illustrated in Figure 11a.
MSRCR captures information at several scales before completing color recovery, which frequently results in color distortion in the recovered endoscopic images, as shown in Figure 10a,c. It is possible that the extraction scale is not exact enough, or that the color recovery technique produces bias, and that when used to endoscopic pictures, the algorithm produces judgment errors.
Al-Ameen et al.'s algorithm would have greatly increased contrast in some regions, but the image as a whole is excessively dark, impairing human eye visual observation.
Palanisamy et al.'s algorithm delivers good contrast results, which are considerably more reasonable than the prior algorithms, and accomplishes the basic goal of endoscopic image enhancement. On the negative side, magnifying Figure 10c reveals a tendency for the blood vessels in the endoscopic image to darken, affecting the physician's assessment of the lesion.
Ying et al.'s algorithm provides a significant increase in brightness, and the enhanced image is clearly visible; however, there appears to be a decrease in contrast as the brightness is increased, and part of the subject architecture is blurred, resulting in the loss of some information in the enhanced endoscopic image. This defect is highlighted more clearly in Figure 11b.
Wang et al.'s algorithm improves brightness and contrast more effectively; nevertheless, the brightness of the light and dark areas is overly boosted in some photos, obscuring the information in the dark areas. As seen in the lower-left corner of Figure 10e.
We zoomed in on a localized section of each method's result in the preceding figure for closer observation. The high-brightness area of the AGCWD result is unduly magnified, as seen in the zoomed-in view of the local area in image 1, which hampers normal observation. The contrast of the result of Palanisamy et al. is excessively high, making the blood vessel color shift from red towards black more evident. The result of Wang et al. is brightened, but the corresponding detail information is not well synchronized, and some tiny blood vessel details are blurred.
Our algorithm improves the brightness, contrast, and naturalness of the blood vessels overall. The contrast is improved in the normally lit areas, while the intricacies of the blood vessels in dark areas are highlighted, avoiding the artifacts, over-enhancement, and color distortion that can occur with traditional image enhancement.
Objective Analysis
This research uses five indexes to conduct an objective examination of the good and bad images.
PCQI index [30] (patch-based contrast quality index) was developed as an adaptable representation based on local block structure to forecast contrast variation. Each block calculates the average intensity, signal strength, and signal structure. Images with higher PCQI values have better contrast.
SSIM index [24] (structural similarity index) compares three sample and outcome variables (luminance, contrast, and structure) to determine how similar the improved image and the original image are. In its standard form it is written as the weighted product of the three comparison terms:
SSIM(x, y) = [l(x, y)]^α × [c(x, y)]^β × [s(x, y)]^γ
where l(x, y), c(x, y), and s(x, y) represent the image's brightness, contrast, and structure, respectively, and α, β, and γ weight the three terms (commonly α = β = γ = 1).
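As a concrete illustration, the following minimal sketch computes SSIM between an original and an enhanced endoscopic frame using the reference implementation in scikit-image; the library call and the toy input are our own choices for illustration and are not taken from the paper.

import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(original, enhanced):
    # Both inputs are H x W x 3 uint8 RGB frames of the same size;
    # channel_axis=-1 treats the last axis as color channels (scikit-image >= 0.19).
    return structural_similarity(original, enhanced, channel_axis=-1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    orig = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
    enhanced = np.clip(orig.astype(np.int16) + 10, 0, 255).astype(np.uint8)  # toy "enhancement"
    print("SSIM:", ssim_score(orig, enhanced))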
The peak signal-to-noise ratio (PSNR [23]) indicator is used to assess noise performance. When reconstructing enhanced images, a greater PSNR suggests that the rebuilt enhanced images are of higher quality. It is defined in the usual way as
PSNR(I, K) = 10 × log10(MAX_I^2 / MSE(I, K))
where I and K represent the improved and original pictures, MAX_I represents the image's maximum pixel value, and MSE represents the mean square error between the two images. C_II [31] is a contrast evaluation index for medical pictures, calculated as the ratio of the average local contrast of the image after processing to that before processing, with a larger ratio indicating more contrast:
c = (max - min) / (max + min), C_II = C_processed / C_original
where max and min are the maximum and minimum pixel intensity values in a window, and C_original and C_processed are the averages of the local contrast of the image before and after processing, respectively. The window size for conventional images is set to 3 × 3 pixels.
Here, considering the larger proportion of details such as blood vessels in endoscopic images, and referring to the window size set by other medical image calculations, the window size was set to 50 × 50 pixels for a more accurate evaluation of the endoscopic pictures [33]. The Tenengrad [26] gradient is utilized as an image sharpness evaluation index, and the Sobel operator is used to extract the gradient values of the picture's horizontal and vertical directions, respectively. The image becomes sharper and more suited for human eye observation as the Tenengrad gradient value increases.
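A compact sketch of the three remaining indexes (PSNR, C_II with a 50 × 50 window, and the Tenengrad gradient) is given below; the grayscale inputs, the non-overlapping window scheme, and the Sobel call are assumptions made for illustration and are not the paper's own code.

import numpy as np
from scipy.ndimage import sobel

def psnr(original, enhanced, max_val=255.0):
    # Peak signal-to-noise ratio in dB (assumes the two images are not identical).
    mse = np.mean((original.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def mean_local_contrast(gray, win=50):
    # Average of (max - min) / (max + min) over non-overlapping win x win windows.
    vals = []
    for r in range(0, gray.shape[0] - win + 1, win):
        for c in range(0, gray.shape[1] - win + 1, win):
            patch = gray[r:r + win, c:c + win].astype(np.float64)
            lo, hi = patch.min(), patch.max()
            if hi + lo > 0:
                vals.append((hi - lo) / (hi + lo))
    return float(np.mean(vals))

def cii(original_gray, processed_gray, win=50):
    # Contrast evaluation index: mean local contrast after vs. before processing.
    return mean_local_contrast(processed_gray, win) / mean_local_contrast(original_gray, win)

def tenengrad(gray):
    # Mean squared Sobel gradient magnitude; larger values indicate a sharper image.
    gx = sobel(gray.astype(np.float64), axis=1)
    gy = sobel(gray.astype(np.float64), axis=0)
    return float(np.mean(gx ** 2 + gy ** 2))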
As shown in Table 1, we present both the metric results for the three sets of a, b, and c images in Figure 10 and the mean values of the metrics over the 50 images in the dataset. In the table, Im1, Im2, and Im3 correspond to the three groups of a, b, and c images in Figure 10, and "Ave" represents the average over the 50 images. In terms of contrast, the algorithm in this paper is ahead of the mean PCQI value of the algorithm proposed by Palanisamy et al. by 0.01 (0.98 and 0.99), and it is second only to the algorithm proposed by Al-Ameen in terms of the C_II index. Taken together, the algorithm in this paper improves contrast significantly.
The SSIM value of this algorithm is the best in terms of similarity, ahead of the other six evaluated algorithms, demonstrating that the image enhancement of this algorithm does not lose the image's important subject information, which is critical for medical image processing.
In terms of picture quality, this algorithm's PSNR values are higher than those of the other algorithms, implying that the reconstructed images are of higher quality. In terms of the Tenengrad gradient index, the approach given in this study is very close to both the algorithm proposed by Palanisamy et al. and the adaptive gamma correction algorithm, showing that sharpness is also ensured.
When image contrast, image quality, and information preservation are all considered, as can be seen from Figure 12, our algorithm is excellent for processing endoscopic images.
Discussion
This work demonstrates a study on enhancing the visibility of endoscopic images to achieve accuracy in machine-assisted physician diagnosis [34]. During endoscopic imaging, organ substructures or surroundings are hidden in the obtained endoscopic images due to the complexity of the internal structures of the human body and the limitations of hardware devices for imaging. Based on the imaging characteristics of endoscopy, this work introduces the image decomposition architecture to address these defects.
Effectiveness
The image decomposition architecture yields multiple image layers with different features. Processing each layer separately can effectively highlight the contrast of blood vessels and tissue in the endoscopic image while maintaining the structure of the image subject. The experimental comparison of the enhancement results of the various algorithms shows that most enhancement algorithms cannot be applied well to endoscopic images. The reason is that they consider neither the imaging characteristics of endoscopy nor the importance of the detailed information of tiny blood vessels, and they all enhance the image as a whole. These methods [4-14] change the three-channel pixel values directly, resulting in blurring or loss of the underlying detail information of endoscopic images. The image decomposition model instead uses different forms of filtering to decompose the image into structure, noise, and detail layers for separate processing, which effectively addresses the problems of biased noise estimation, uneven brightness, and inconspicuous detail in endoscopic images. The algorithm in this paper is not affected by external factors during the experiments; it can be applied to images of any tissue inside a human or animal body, and the framework idea behind the algorithm can be reused in other image enhancement tasks. A minimal sketch of the decomposition idea is given below.
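To make the decomposition idea concrete, the toy sketch below splits each color channel into a smooth base layer and a detail layer, brightens the base layer with a gamma curve, stretches the detail layer, and recombines them. The Gaussian low-pass filter, the gamma value, and the detail gain are stand-ins chosen for illustration; the paper's own filters and parameters are not restated here.

import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(channel, sigma=5.0):
    # Base layer = low-pass version of the channel; detail layer = residual.
    base = gaussian_filter(channel.astype(np.float64), sigma=sigma)
    return base, channel.astype(np.float64) - base

def enhance_channel(channel, gamma=0.8, detail_gain=1.5):
    base, detail = decompose(channel)
    base = 255.0 * (np.clip(base, 0.0, 255.0) / 255.0) ** gamma  # brighten the base layer
    detail = detail_gain * detail                                # stretch fine structures (vessels)
    return np.clip(base + detail, 0, 255).astype(np.uint8)

def enhance_rgb(image):
    # Apply the base/detail processing to each of the three channels and restack them.
    return np.stack([enhance_channel(image[..., k]) for k in range(3)], axis=-1)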
Limitations
Although the algorithm outperforms other existing algorithms in solving the existing problems of endoscopic images, it still has some potential limitations and open issues. First, the analysis was performed on an existing set of images; the images that physicians encounter in some extreme cases may be more complex, and such highly specific images could be collected for experiments in subsequent studies to maximize the generalizability of the algorithm. Second, the comparison depends on parameters that have to be set in the different algorithms and in the evaluation indexes; these parameters need further study to obtain more accurate results. Finally, the algorithm currently runs well in MATLAB R2019b. As a next step, we will consider multi-threaded programming and hardware acceleration so that the method can run synchronously within the physician's surgical system and its practical significance can be demonstrated further.
Conclusions
This paper provides an image decomposition-based endoscopic image enhancement algorithm that effectively avoids the interference of bright spots and noise in endoscopic pictures. Adaptive enhancement was applied to the brightness of the base-layer images, while vascular lesions and other information in the detail layer were stretched in sub-channels suited to the characteristics of endoscopic images. Traditional image enhancement difficulties such as color distortion, blurring and disappearance of vascular features, and excessive local brightness amplification are effectively solved by this algorithm. Subjective and objective examination shows that the algorithm produces good visual results and evaluation metrics, and it was demonstrated to outperform numerous conventional algorithms for processing endoscopic images. Given the limitations of traditional algorithms in image processing, in future work we will consider applying the image decomposition ideas used in this algorithm to neural networks and designing a suitable image quality loss to optimize the network model.
Conflicts of Interest:
The authors declare no conflict of interest.
|
2022-06-21T15:02:28.401Z
|
2022-06-19T00:00:00.000
|
{
"year": 2022,
"sha1": "adbd1d03f2826762625db63eff853f609961117c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/11/12/1909/pdf?version=1655622117",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "2f18fb9e1703fd8e3525b46171c7b610327b588c",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": []
}
|
257580057
|
pes2o/s2orc
|
v3-fos-license
|
Treatment of insomnia based on the mechanism of pathophysiology by acupuncture combined with herbal medicine: A review
Insomnia is a sleep disorder which severely affects patients' mood, quality of life and social functioning, serves as a trigger or risk factor for a variety of diseases such as depression, cardiovascular and cerebrovascular diseases, obesity and diabetes, and even increases the risk of suicide, and it has become an increasingly widespread concern worldwide. Considerable research on insomnia has been conducted in modern medicine in recent years and encouraging results have been achieved in the fields of genetics and neurobiology. Unfortunately, however, the pathogenesis of insomnia remains elusive to modern medicine, and although pharmacological treatment has long been regarded as conventional, it is increasingly being questioned because of potential dependence and drug resistance and is now being replaced by cognitive behavioral therapy as the first-line treatment. As an important component of complementary and alternative medicine, traditional Chinese medicine, especially non-pharmacological treatment methods such as acupuncture, is gaining increasing attention worldwide. In this article, we discuss the combination of acupuncture and Chinese herbal medicine for the treatment of insomnia in the light of the neurobiology of insomnia described by modern medicine.
Modern medicine perception of insomnia
Insomnia is used to describe the experience of difficulty in sleeping: people who have sufficient time and a subjective desire to fall asleep but have difficulty falling asleep, or who have poor sleep quality and wake easily, experience negative emotions and impairment of their normal work and life, which they describe as insomnia. [1] Insomnia can be acute, chronic, or intermittent; it is considered the most common sleep problem in related investigations [1] and can be associated with other diseases and sleep disorders. The latest diagnostic frameworks propose that insomnia is not inherently secondary to other diseases but comorbid with them, and it is particularly highly comorbid with obstructive sleep apnea syndrome and restless legs syndrome. [2] However, this does not mean that insomnia is never secondary to other diseases; when it is, insomnia symptoms disappear once the other disease is successfully treated.
In the medical literature, insomnia disorder is often referred to simply as "insomnia," and in this article it is likewise referred to as "insomnia," while care is taken to distinguish it from other disorders that cause insomnia symptoms. Insomnia can be chronic (duration > 3 months) or acute (duration ≤ 3 months), and in the literature it has been classified as "primary insomnia," "secondary insomnia," and other different types of chronic insomnia (e.g., psychophysiological insomnia), but this classification is now considered outdated, and the academic community has replaced these older names with the more widely accepted "insomnia." [3] Although insomnia can occur in healthy individuals and may even be common, it can still be diagnosed as a disorder.
Traditional Chinese medicine perception of insomnia
In the traditional Chinese medicine (TCM) literature there is an extensive and profound understanding of insomnia, recorded under different names such as "sleeplessness," "inability to sleep," "the eyes not closing at night" and "insomnia." The development, etiology, and diagnosis of insomnia are mentioned in many places in the Yellow Emperor's Classic of Internal Medicine, a pre-Qin text. For example, "Ling Shu - Ying Wei Sheng Hui" [4] states: "When Ying Qi is weakened and diminished and Wei Qi attacks inwardly, the days are not refreshed and the eyes do not close at night"; and "Ling Shu - Evil Guest" [4] states: "When Yang Qi is strong and the Yang Qiao vessel is overfull, the eyes do not close because of Yin deficiency." The "Suwen - Treatise on Counterflow Regulation" [5] says: "When the stomach is not in harmony, sleep is restless." This reflects the early perception of insomnia in ancient times. In later medical texts, there were 2 relatively different views on insomnia. One is that "the loss of nourishment of the heart and mind" is the fundamental reason for the development of insomnia; thus the Continuation of the Cases of Famous Physicians says: "When a person sleeps peacefully, the mind returns to the heart and the 5 viscera each rest in their place, and sleep is peaceful"; and "Jing Yue Quan Shu - Sleeplessness" says: "When the mind is at peace there is sleep; when the mind is restless there is sleeplessness." It is important to note that the "heart" in Chinese medicine does not exactly correspond to the heart organ of modern anatomy, nor can the two be simply and coarsely equated. In Chinese medicine, the heart is considered the "sovereign" that commands the whole organism; as the "place of consciousness" and the "seat of the spirit" or mind, it is involved in the pathogenesis and treatment of insomnia. The "heart" is also a central point in the etiology and pathogenesis of anxiety and depression, and addressing the imbalance associated with the "heart" is a key strategy in the treatment of insomnia in TCM. There is scientific evidence that acupuncture can cause physiological changes in the heart, [7] and from a modern medical perspective these changes are related to the pathophysiology of insomnia. Although this traditional concept seems inconsistent with modern physiological understanding when first examined, it does make sense in terms of the function of the vagus nerve. The vagus nerve (10th cranial nerve) is primarily an afferent nerve that informs the brain of visceral experience. The ancient Chinese view that consciousness and thought are attributed to the "heart" is therefore of research interest and value.
Another view is that the "brain" is central to insomnia. In Chinese medicine, the brain is considered the "house of the original spirit," [8] which corresponds to the brain of modern medicine, and it is regarded as the master of thinking. As stated in the Essential Prescriptions Worth a Thousand Gold for Emergencies, "the head is the chief of the body and the dwelling of the human spirit"; Wang Ang of the Qing dynasty held in his materia medica that "the memory of human beings is all in the brain," [8] and Wang Qingren argued that the seat of thought and memory is not the heart but the brain. [9] Although Chinese medicine has a certain understanding of the physiology and pathology of the brain, clinical practice is still guided by the theory of visceral manifestations (zang-xiang). In the Yi Xue Zhong Zhong Can Xi Lu it is stated that "the brain is the original spirit... when the brain is disturbed, the spirit is disturbed, and then there is insomnia." The loss of nourishment of the brain leads to the state of "Yin deficiency and Yang hyperactivity" in Chinese medicine, which affects people's mental activity and nervous system, thus causing insomnia.
Over a long period of time, Chinese medicine has developed a rich and distinctive understanding of insomnia, which to some extent overlaps with modern medical research; it should be noted, however, that this understanding is based on the zang-xiang (visceral manifestation) theory of Chinese medicine and on its fundamental principle of syndrome differentiation and treatment in clinical practice. Owing to the complexity and uniqueness of Chinese medicine, it also offers a perspective that differs from that of modern medicine. In this review, we explore the possibility of combining TCM with cognitive behavioral therapy to treat insomnia based on modern medical research on the neurobiological mechanisms of insomnia.
The mechanism of sleep and wakefulness (Fig. 1)
To understand insomnia, we first need to understand the basis of sleep. Sleep and wakefulness are normal physiological states of the brain, and 2 processes, sleep-wake homeostasis and the circadian rhythm, govern the sleep-wake cycle; sleep is consolidated when the 2 are aligned. [3] An interesting new idea is that the sleep-wake cycle embodies an organizing principle in which each process is independent, rather than assuming that sleep serves a single function. [10] This does not, however, deny that neuronal plasticity also occurs during wakefulness. The states of wakefulness and sleep have traditionally been considered strictly separated in time, with the whole brain either asleep or awake, but sleep and wakefulness may in fact occur simultaneously, in a non-integrated manner, in different neuronal populations of the brain. The assumption of a strict sleep-wake sequence and an all-or-none whole-brain state has been continuously challenged over the last decade, [11] with studies proposing that sleep may even be a fundamental property of small local neural networks. Thus, individual cortical areas may display sleep-like states that occur somewhat independently of sleep-like states in other cortical areas. Intracerebral recordings in rodents and humans have demonstrated the simultaneous presence of sleep-like and wake-like neuronal activity. [12] However, it should be noted that the concept of "local arousal islands" to explain the subjective arousal experience during sleep may be too simplistic.
Nevertheless, the concept of simultaneous arousal and sleep now seems worth investigating in depth, based on data-driven approaches showing that patients with insomnia display more features of light sleep in the electroencephalogram (EEG) than controls, even during deep sleep. [13]
Epidemiological studies
According to recent epidemiological studies, the overall prevalence of insomnia in China is 15%. [14] Insomnia appears to be the second most common psychiatric disorder, with a 12-month prevalence lying between those of the most common comorbid conditions, anxiety disorder and major depressive disorder. [15] It may be surprising that insomnia is the second most common neuropsychiatric disorder; however, the prevalence of insomnia has been increasing over the past decades, and this increased prevalence is supported by other longitudinal studies. [16] Recent studies have identified female sex and older age as major determinants of insomnia prevalence. [17] Although the mechanisms behind this phenomenon are not fully understood, human post-mortem studies have shown gender differences in the brain structures involved in circadian rhythms and sleep regulation, [18] and animal experiments have shown that the homologous structures in females respond much more strongly to sex steroid fluctuations than those in males. [19] The phase of the circadian rhythm of estradiol varies throughout the menstrual cycle, and sleep disturbances are most severe in the mid-luteal phase, when ovarian steroid levels begin to decline and women sleep later than men, which may increase the risk of insomnia. [19] Notably, female patients with major depressive disorder sleep relatively late because of their higher circadian estradiol rhythms, [20] which may lead them to develop insomnia. The increased sensitivity of women's sleep architecture and circadian rhythm regulation to sex hormone fluctuations may also contribute to the increased risk of insomnia during pregnancy and menopause.
Paradoxically, however, the higher prevalence of subjective insomnia complaints in women is not reflected in objective classical polysomnographic recordings, and it is important to note that the diagnosis of insomnia is strictly based on subjective insomnia complaints, with objective sleep criteria playing a relatively minor role. [2] Conversely, objective measures suggest that women have better sleep quality than men, at least in humans. This cross-gender difference in the subjective and objective indicators of human sleep quality is only 1 notable example of our limited understanding of the neural correlates of the subjective experience of insomnia.
Epidemiological studies have shown that chronic sleep disorders, including insomnia, significantly increase with age. Frequent nocturnal awakenings were the most common age-related sleep disorder symptoms, followed by difficulty falling asleep and early awakenings.
Insomnia: A disorder of sleep homeostasis and regulation?
Brain circuits are involved in both the circadian and the homeostatic components of sleep regulation. Although deviations in circadian rhythm and homeostatic regulation are likely to affect sleep quality, there is little evidence that insomnia is caused by circadian or homeostatic dysregulation. [21] Regarding the circadian component of sleep regulation, only a minority of the complaints of patients with insomnia are sleep complaints caused by attempts to initiate sleep at an inappropriate circadian phase. [22] Similarly, insomnia does not appear to be caused primarily by a deficient homeostatic component of sleep regulation; studies of sleep homeostasis have assessed how sleep deprivation alters slow wave activity in the EEG during the subsequent recovery sleep.
Slow wave activity (SWA), which is considered a measure of accumulated sleep pressure, increased relative to the baseline night in both patients with insomnia and controls, but to a lesser extent in the patients. Although some studies have suggested a deficit of sleep homeostasis in insomnia, [23] other studies have not confirmed this. [24] These conclusions were based on studies and analyses that did not apply strict deprivation protocols for probing homeostatic sleep regulation. [25] This means that SWA may or may not be altered in insomnia, and the evidence is therefore not sufficient to draw any conclusions about a deficit of sleep homeostasis. A brief sketch of how SWA is typically quantified from the sleep EEG is given below.
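As an illustration of how SWA is commonly quantified, the sketch below integrates the EEG power spectral density over the delta band using Welch's method; the sampling rate, window length, and band edges (0.5-4 Hz) are typical choices rather than values taken from the cited studies.

import numpy as np
from scipy.signal import welch

def slow_wave_activity(eeg_segment, fs=256.0, band=(0.5, 4.0)):
    # eeg_segment: 1-D NREM EEG trace (microvolts) sampled at fs Hz.
    freqs, psd = welch(eeg_segment, fs=fs, nperseg=int(4 * fs))   # 4-second Welch windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.trapz(psd[mask], freqs[mask]))                # integrated delta power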
Although adenosine has been considered a molecule that plays a key role in sleep homeostasis, with functional genetic variants in its regulation altering the duration and intensity of SWA, [26] recent Genome-Wide Association Studies (GWAS) have not shown major variants in genes involved in adenosine regulation.
Insomnia, anxiety and depression.
According to previous studies, insomnia usually coexists with multiple neuropsychiatric disorders, with 80% to 90% comorbidity with depression and anxiety. [27] It should be stressed that although insomnia was conceptualized in the past as a "secondary disorder," it has been demonstrated that insomnia has an independent trajectory and does not resolve with treatment or improvement of the psychiatric disorder. [28] At the same time, there is a bidirectional relationship between insomnia and multiple psychiatric disorders: insomnia may exacerbate symptoms of comorbid psychiatric disorders, impede their treatment, and increase the risk of relapse. [3] A meta-analysis of 34 cohort studies involving 150,000 participants found that the presence of insomnia doubled the relative risk of developing depression, [29] and a noteworthy finding was that insomnia was positively associated with an increased risk of suicidal ideation. [30] Studies of patients have shown a degree of association between sleep problems and depression; in a study of 5481 hospitalized patients, more than half of the 3108 patients whose depression was in remission at discharge still had a substantial degree of sleep disturbance. [31] The binary effects hypothesis suggests that common neural substrates may underlie insomnia, anxiety, and depression, [32] that is, shared neural substrates disrupt both sleep and mental health. In depression, co-expression of the insomnia phenotype is common, and the negative effects of insomnia and emotional stress reinforce each other in both directions: on the one hand, individuals who are very sensitive to stress are prone to insomnia [33]; on the other hand, preexisting insomnia puts individuals at an elevated risk of developing posttraumatic stress disorder (PTSD) when exposed to traumatic events.
The "monoamine hypothesis" proposes a role for norepinephrine in depression, with studies and findings including norepinephrine deficiency, prolonged increases in norepinephrine activity, and altered sensitivity of downstream receptor responses. However, the exact role of noradrenergic transmission remains a mystery. [2]
Sleep, insomnia and genetics
To date, the academic community has little understanding of the underlying neurobiological mechanisms of insomnia, and little is known about any of its complex features. As a result, recent studies have turned to GWAS, and none of them has so far addressed the speculation that the risk variables predisposing to the onset of insomnia may differ from those leading to its persistence or chronicity. [10] Lind et al [34] conducted a retrospective analysis of candidate gene studies on insomnia and found that insomnia is associated with genetic polymorphisms related to other psychiatric disorders, for example genes involved in 5-hydroxytryptamine transport or metabolism. [35] One study found that Apoε4 allele carriers had an increased likelihood of developing insomnia, [36] whereas overall sleep disturbance measured with the Pittsburgh sleep quality index (PSQI) was not significantly associated with the dopamine-regulating enzyme catechol-O-methyltransferase. [37] The GWAS approach is considered more appropriate than the candidate gene approach because complex diseases such as insomnia are highly polygenic, that is, their pathogenesis is determined by combinations of variants in many genes rather than by a specific gene. However, GWAS currently face a major dilemma in that they require very large samples and a large number of statistical tests. This problem has been solved to some extent with the advent of study samples of over a million participants; however, the need for detailed clinical diagnosis of insomnia in such large cohorts remains unresolved.
Recently, a genome-wide analysis involving 1,331,010 participants replicated MEIS1, MED27, IPO7, and ACBD4, providing strong support for the polygenic nature of insomnia risk. [36] This study identified 956 genes implicated by at least 1 of 4 different strategies: locus mapping, eQTL analysis, chromatin mapping, and genome-wide gene-based association analysis. Among these genes, 62 were consistently implicated by all 4 strategies. However, GWAS have explained only a small fraction of the phenotypic variation in insomnia, that is, 2.6% in the largest genome-wide association dataset. [38] Theoretically, if all genetic variants affecting insomnia were known and all their effects could be correctly estimated, the maximum variance explained would equal the heritability estimated by meta-analysis (44%). [39] Large differences between the variance explained by GWAS and heritability estimates are common in complex traits and are referred to as a "heritability deficit." Even if the "heritability deficit" remains large despite increasing sample sizes and occasionally more sensitive and specific phenotypes, one may still see value in GWAS of insomnia, [38] as an important contribution of GWAS is that it reveals clues involving specific biological pathways, tissues, and cell types.
Insomnia: Risk factors
Insomnia increases exposure to many risks, including obesity, [40] type 2 diabetes, [40] cardiovascular disease, [41] and even an increased risk of suicide. [42] Notably, poor sleep quality is a much stronger predictor of future health problems than short sleep duration. [43] In some individuals, insomnia may persist long after the initial trigger has passed, and some may try to cope with insomnia by adopting bad habits such as drinking alcohol before bed, which may in turn exacerbate the symptoms of insomnia. [44] Moreover, insomnia may trigger PTSD. [45] Indeed, high-quality sleep may prevent poor mood regulation and anxiety in veterans with PTSD. [46] Insomnia has a bidirectional association with psychiatric disorders [44] and gastroesophageal reflux, [47] and among all psychiatric brain disorders, insomnia is probably the most common and burdensome comorbidity. [10]
Polysomnography
Polysomnography (PSG) is a multichannel nocturnal sleep study that is considered the gold standard for semi-objective quantification of sleep. PSG is not strictly required for the diagnosis of insomnia, but it can be used to rule out other possible causes of sleep disruption, such as sleep apnea and periodic limb movements during sleep. Contrary to what the name insomnia implies, the EEG of patients with insomnia shows features of sleep, albeit in a fragmented manner, manifesting as interrupted wakefulness and sleep transitions, and a meta-analysis showed that the PSG variables reflecting interrupted sleep continuity are the most powerful PSG features of insomnia. [48] A meta-analysis of PSG in patients with insomnia compared with well-sleeping individuals showed that the largest between-group difference was a higher number of nocturnal awakenings in the patients and, therefore, less efficient sleep; total sleep time was consequently reduced owing to the reduction in N3 sleep and rapid eye movement sleep. Sleep instability is also manifested by a greater tendency to switch to sleep states of lower depth, which makes it difficult for insomniacs to reach stage N3. [49] In contrast, once stage N3 is reached, sleep in insomnia is more similar to that of normal sleepers, without a significant increase in the probability of switching to a state of lower sleep depth or other indicators of classical instability. [49] However, recent data-driven analysis techniques have revealed that the PSG of insomniacs is characterized by the simultaneous presence of shallow sleep, even in the deepest sleep states. [13]
Rooted neurobiology: TCM theory, mind-brain axis and insomnia
Modern Chinese medicine perception of insomnia
Insomnia is often chronic (>3 months), yet the existing problem is that modern medicines are often recommended for short-term use only. [27] Owing to the limitations of medication use, including potential dependence and resistance with long-term use, [50] and the recognition of excessive anxiety factors in the insomnia trajectory, non-pharmacological approaches and traditional Chinese medicine treatments have received increasing attention in the last 2 decades. [51]
Cognitive behavioral therapy for insomnia has been recommended as the first-line treatment, and TCM treatment comprises both traditional herbal components and non-pharmacological treatments, such as acupuncture and tui-na; it emphasizes the combination of pharmacological and non-pharmacological treatment, which is compatible with cognitive behavioral therapy for insomnia in many respects, and the two approaches can inform each other to promote the understanding and treatment of insomnia. [52] According to TCM theory, human sleep is inseparable from the tranquility of the "heart and mind," and when the "heart and mind" are at peace, one can sleep peacefully. [53] The "heart" in the basic theory of TCM is different from the heart in modern medicine; its role is broader than that of the heart of cardiovascular medicine, being understood as the place where the "mind" or spirit dwells. In a narrower sense, it refers to mental awareness, including memory, perception, and thought. [7] The "heart" is a broad concept that encompasses the functions of the heart in the modern medical sense, and the 2 cannot be clearly distinguished, nor can they simply be considered unrelated medical concepts. [54] From a systems biology perspective, "heart" and "mind" are not "objects"; rather, they are related to anatomical organs and represent multilevel functions.
The heart-brain axis theory for insomnia (Fig. 2)
Modern medicine has demonstrated a close connection between the heart and brain and has gradually developed the heart-brain axis theory, which states that the heart is the pivotal joint connecting the circulatory system and the higher nervous system. [55] Studies have shown that the heart and brain are linked through the autonomic network, which constitutes the central autonomic network, the supreme center of the autonomic nerves. [56] The central autonomic network regulates the vascular tone, pulsation, cardiac output, and myocardial metabolism of the heart by acting through humoral, endocrine, and autonomic pathways, and the blood pumped from the heart supplies the whole body, including the brain, through the aorta. [57] Several experiments have also shown that, in addition to autonomic regulatory centers, cardiac afferent neural inputs can affect higher brain centers involved in emotion processing and perception, and changes in afferent and efferent autonomic activity have been found to correlate with changes in heart rhythm patterns; that is, positive emotions lead to increased heart rhythm coherence, whereas negative emotions lead to heart rhythm disturbances. [58] The interdependence between the heart and other organs supported by TCM theory is also supported when we consider the neuroanatomical connections that exist between the heart and other organs. Clusters of autonomic neurons that regulate organs such as the heart, lungs, gastrointestinal tract, kidneys, and bladder are located near these organs and communicate with each other, forming a network that facilitates information exchange; for example, neurons that control the heart and the respiratory tract communicate with each other. [59] Prospective studies have shown that patients with insomnia have an increased likelihood of developing coronary heart disease. [60] Among the many neural connections between the brain and the body's organ systems, the cardio-cerebral axis is an important component formed between the brain and the heart through the nervous system, [61] and the vagus nerve is a complex bidirectional system. The vagus nerve originates in the brainstem and projects, independently of the spinal cord, to many organs in the body cavity, including the heart, respiratory system, and digestive system; its myelinated branches connect the brainstem to various target organs. [62] These neural pathways allow direct and rapid communication between brain structures and specific organs. As the vagus nerve contains both afferent and efferent fibers, it facilitates dynamic feedback between brain control centers and target organs to regulate internal homeostasis. According to previous studies, cardiac afferents send more signals to the brain than any other major organ, approximately 80% to 90%, [63] and brain rhythms exhibit varying degrees of synchronized activity with the heart; for example, brain wave activity and amplitude increase with heart rate. When the heart rhythm is coherent, heart-brain synchronization increases, and these phenomena reflect the intercommunication between different biological rhythms. [64]
There is evidence that the heart plays a special role in synchronizing activity across multiple systems and levels of organization, through the generation of rhythmic information patterns in the body, the ANS, hormones, stress responses, and electromagnetic field interactions, as well as other pathways for continuous communication with the brain and body. [65] Studies on vagal activity, mood, and insomnia have found relationships between emotional intensity, vagal function, and sleep in children: vagal regulation was assessed by respiratory sinus arrhythmia (RSA) during a baseline period and a reaction time task, [66] and sleep problems were examined by child reports and home monitoring using a wrist activity meter. An increase in emotional intensity predicted a decrease in sleep duration and an increase in nocturnal activity, and poor vagal regulation, characterized by lower RSA suppression during the reaction time task, predicted increased sleep problems. These results suggest that children's mood and vagal regulation predict unique changes in sleep quality. Another study showed that transcutaneous auricular vagal stimulation [67] improved sleep quality and prolonged sleep duration in patients with insomnia by reducing functional connectivity within the default mode network, between the default mode network and the salience network, and between the default mode network and the occipital cortex. A simple sketch of how vagally mediated cardiac control is commonly quantified from beat-to-beat heart rhythm data is given below.
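For orientation, the sketch below computes two widely used vagal-tone indices from a series of beat-to-beat (RR) intervals: RMSSD and high-frequency (0.15-0.4 Hz) power of the evenly resampled RR series. It is a generic illustration of how vagally mediated cardiac control is quantified, not the analysis pipeline of the cited studies.

import numpy as np
from scipy.signal import welch

def rmssd(rr_ms):
    # Root mean square of successive differences of RR intervals (in ms).
    diffs = np.diff(np.asarray(rr_ms, dtype=np.float64))
    return float(np.sqrt(np.mean(diffs ** 2)))

def hf_power(rr_ms, fs=4.0, band=(0.15, 0.40)):
    # High-frequency power of the RR series after resampling it onto an even time grid.
    rr = np.asarray(rr_ms, dtype=np.float64)
    beat_times = np.cumsum(rr) / 1000.0                      # beat times in seconds
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    rr_even = np.interp(grid, beat_times, rr)
    freqs, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.trapz(psd[mask], freqs[mask]))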
Chinese medicine diagnosis and classification for insomnia
TCM prescriptions treat insomnia on the basis of TCM theory, the methodological approach of syndrome differentiation and treatment, the principles of formula composition, and the specific combined use of Chinese medicinals. According to the classification of the Chinese Medicine Clinical Practice Guidelines for Insomnia (WHO/WPO), [68] TCM syndrome differentiation classifies insomnia into liver stagnation and fire, internal disturbance of phlegm-heat, yin deficiency and fire, disharmony of stomach qi, internal obstruction by blood stasis, hyperactivity of heart fire, heart and spleen deficiency, heart and gallbladder qi deficiency, and heart-kidney disconnection. In clinical treatment, the majority of TCM practitioners prescribe formulas and medicinals on this basis and, building on it, combine modern medicine and other therapeutic methods to study and treat insomnia. Phytosphingosine-1-P may be a serum metabolic marker for differentiating insomnia of the liver stagnation and fire type from the phlegm-fire and yin deficiency with fire types. It is well known that the syndrome typing of TCM is not rigid and unchanging but is adapted to different clinical practices of TCM with regional and contemporary characteristics, and the TCM theory guiding the treatment of insomnia also includes other syndrome classification methods, such as the 5-spirit typing based on the theory of the "5 spirits stored by the 5 viscera" in the Huangdi Neijing, [70] which divides insomnia into 5 types: the heart failing to store the spirit (shen), the liver failing to store the ethereal soul (hun), the spleen failing to store the intention (yi), the lung failing to store the corporeal soul (po), and the kidney failing to store the will (zhi). Peng Zhipeng [71] analyzed serum dopamine (DA) levels and PSG parameters in a normal group and in insomnia patients of the "kidney failing to store" and "spleen failing to store" types and showed that the 3 groups differed, indicating that there are differences in serum DA levels and PSG sleep structure parameters among the 5-spirit types of insomnia in TCM. Rui et al [72] similarly found differences in serum MT content and PSQI scores between the "liver failing to store the ethereal soul" type and the "kidney failing to store the will" type. Kai-Kai Wang et al [73] designed animal experiments to investigate the differences in 5-HT1A and 5-HT2A receptors in the relevant organs of rats of the "lung failing to store the corporeal soul" type and control rats, and showed that the expression of both receptors was significantly elevated, suggesting that in this type of insomnia the 5-HT1A and 5-HT2A receptors are mainly associated with the brain and, to a lesser extent, with the lungs, which may be a characteristic of the relevant transmitters.
The practice of TCM prescriptions for insomnia
An important feature of TCM diagnosis and treatment is that different TCM clinicians can prescribe different medications for the same disease; however, this does not mean that TCM prescription is completely subjective, as there are fixed medication patterns within each clinician's prescriptions. In a triple-blind, randomized, placebo-controlled, parallel-group clinical trial, each TCM practitioner was found to have 1 or more effective core medication patterns, and patients experienced improved sleep quality and longer sleep duration, demonstrating the existence of objective criteria and definite efficacy of TCM prescriptions. It is important to note that although there were only 3 TCM practitioners in the trial, the practitioners were treated as treatments rather than as a sample, so the small number did not pose a sample size problem. Hu Kun et al [75] conducted a randomized controlled trial to observe the efficacy of the classical Chinese medicine formula Suan Zao Ren Tang (sour jujube seed decoction), modified as appropriate, combined with 5-element music therapy in the treatment of insomnia. The results showed that sleep quality improved and sleep time was prolonged in both the treatment and control groups (P < .01, P < .05), and that the efficacy of the treatment group was better than that of the control group (P < .01, P < .05). Thus, treating primary insomnia with the combination of Suan Zao Ren Tang and 5-element music was effective in improving sleep quality and prolonging sleep time, which may be closely related to the modulation of neurotransmitters and inflammatory factors by this treatment modality. Shi Jianing et al [76] designed a randomized controlled trial to evaluate the efficacy of the TCM formula Yixin Ningxin Fang in insomnia of the heart and spleen deficiency type, in which patients were randomly divided into treatment and control groups of 65 cases each. The results showed that the formula could effectively improve the clinical symptoms of patients with heart and spleen deficiency type insomnia, and its mechanism of action may be related to the regulation of salivary MT and CORT levels and the restoration of the circadian rhythm of salivary MT secretion. Zhang Zhongyang et al [77] designed a randomized controlled trial to investigate the clinical efficacy of modified Huang Lian E Jiao Tang in treating insomnia of the yin deficiency and fire-exuberance type; its mechanism of action may be related to an increase in 5-HT levels and a decrease in DA levels.
Systematic evaluations of TCM prescriptions for insomnia
In terms of systematic evaluation, Hongshi Zhang et al [78] searched electronic databases including PubMed, EMBASE, the Cochrane Library, and the China National Knowledge Infrastructure, and used a random-effects model to calculate the pooled weighted PSQI and Athens insomnia scale scores and their 95% confidence intervals; finally, 15 randomized controlled trials including 1500 patients were included in the meta-analysis, and the authors noted that large-scale, high-quality randomized controlled trials are needed to confirm the results. Xu Fan [79] meta-analyzed the efficacy and safety of the Chinese herbal formula Long Dan Xie Gan Tang (gentian liver-draining decoction) after searching PubMed, CBM, CNKI, and VIP, identifying 13 studies involving 1181 participants; the analysis showed that the total effectiveness and cure rates were higher than those of the control group, the difference being statistically significant, while adverse effects were fewer and clinical safety was higher than in the Western medicine group, supporting the efficacy and safety of Long Dan Xie Gan Tang. Hu et al [80] conducted a systematic evaluation of the clinical efficacy and safety of the Chinese herbal formula Zunyao San in the treatment of insomnia with anxiety; the analysis included 2 randomized controlled trials involving 681 patients and showed that the formula, whether combined with modern medical adjuvant drugs or used alone, was beneficial in improving sleep quality, prolonging sleep duration, and relieving anxiety in patients with insomnia and anxiety, although more rigorous and scientific clinical trials are needed because of the relatively low quality and small sample sizes of the randomized controlled trials collected in this systematic evaluation. Proprietary Chinese medicines are Chinese medicinal preparations with objective, standardized, and quantitative index characteristics, developed under the guidance of TCM theory and combined with modern medical technology; according to a recent meta-analysis, [81] the combination of the 11 proprietary Chinese medicines included in the study (Ginseng Astragalus Wu Wei Zi Tablets, Nourishing Blood and Brain Granules, Shu Sleep Capsules, Ginseng Song Yang Xin Capsules, Baile Sleep Capsules, Shu Liver Relief Capsules, Sweet Dream Oral Liquid, Yin Dan Xin Nao Tong Soft Capsules, and others) with first-line drugs was effective in the treatment of insomnia. This study made a promising exploration of the systematic study of proprietary Chinese medicines for insomnia.
Numerous studies have shown [82] that acupuncture improves sleep quality and prolongs sleep time in patients with insomnia by regulating the activities of sleep factors such as neurotransmitters, hormones, and cytokines (Fig. 3). The specific mechanisms of action are as follows:
Effect of acupuncture on central neurotransmitters.
Various central neurotransmitters in the brain affect sleep, and acupuncture affects sleep structure by regulating the levels of 3 types of central neurotransmitters: amines, amino acids, and peptides. [83,84] Li Zhongwen et al [85] designed a randomized controlled trial in which patients with insomnia received acupuncture at Shenmen and Sanyinjiao, and the results showed that this acupuncture could effectively improve sleep quality and upregulate serum γ-aminobutyric acid (GABA) and 5-HT levels in patients. Li Wenkang et al [86] used para-chlorophenylalanine (PCPA) model rats and designed animal experiments according to Experimental Acupuncture; in the intervention group, bilateral Shenmen, Zhigou, Zusanli (Foot Sanli), and Sanyinjiao points as well as the inner umbilical ring points were needled, and the results showed that this acupuncture method could improve the symptoms of insomnia in rats and upregulate the expression of 5-HT1AR and 5-HT2AR in the hippocampus of the PCPA model rats.
Acupuncture, cytokines and insomnia.
Cytokines are mostly derived from immune cells and can regulate the immune response and the sleep-wake cycle, and acupuncture can regulate sleep mechanisms by modulating immune cytokines such as the interleukins. [87,88] Tang Lei et al [89] used electroacupuncture at 5 acupuncture points and randomly divided 40 PCPA model rats into control, model, electroacupuncture, and diazepam groups; the results showed that electroacupuncture of Wuqi Yu could improve the symptoms of insomnia in rats by promoting the synthesis of TNF-α and IL-1β in the hypothalamus and alleviating the PCPA-induced inhibition of the release of 5-HT and its metabolites. Qian Lala et al [90] designed a randomized controlled trial using Jiaotai Pill combined with acupuncture at Anmian, Xinshu (Heart Shu), Shenshu (Kidney Shu), Shenmen, and Zhaohai; 60 patients with insomnia of the heart-kidney disconnection type were randomized equally into test and control groups. After 4 weeks of treatment, the results showed that Jiaotai Pill combined with acupuncture could better regulate the levels of the cytokines TNF-α, IL-6, and IL-1β, improving patients' sleep quality and the treatment of insomnia.
Acupuncture regulates the neuroendocrine system to improve sleep structure.
It is well known that psychological factors such as mental tension, anxiety, and depression affect human sleep and play an important role in the development of insomnia; they can become stressors and generate neuroendocrine responses, mainly through the sympathetic-adrenomedullary system and the hypothalamic-pituitary-adrenal axis. [91] Conversely, acupuncture can affect sleep architecture, improve sleep quality, and treat insomnia by regulating hormones associated with the sympathetic-adrenomedullary system and the hypothalamic-pituitary-adrenal axis. [92,93] Wu Xuefen et al [92] selected acupuncture points by meridian and randomly assigned 60 PCPA model rats equally to a blank group, a model group, a Baihui + Shenmen group, a Baihui + Sanyinjiao group, and a Baihui + non-meridian non-acupoint group, with 12 rats in each group, and showed that hypothalamic corticotropin-releasing hormone, serum adrenocorticotropic hormone, and corticosterone levels were reduced in each acupuncture group; these results indicate that the hypothalamic-pituitary-adrenal axis may be one of the mechanisms through which acupuncture affects sleep in rats. Li Jiahuan et al [94] used the spirit-regulating (tiao shen) acupuncture method and designed a randomized controlled trial in which 60 patients with insomnia were randomly assigned equally to a treatment group (acupuncture at Baihui, Shenting, Yintang, bilateral Shenmen, and bilateral Sanyinjiao) and a control group (acupuncture at bilateral Shousanli, bilateral Fuyun, and bilateral Feiyang); after 4 weeks of treatment, it was found that both groups improved the patients' insomnia symptoms, while the PSQI and Fatigue Severity Scale scores of the treatment group were significantly lower, with better efficacy in improving insomnia and daytime fatigue, and plasma melatonin and cortisol levels in the treatment group were reduced.
Combining acupuncture and herbal medicine for insomnia (Fig. 4)
In Chinese traditional medicine, both acupuncture and Chinese herbal medicine are effective in the treatment of neuropsychiatric disorders, but Chinese physicians tend to treat insomnia not in a single treatment, but more often in a combined form, most often with acupuncture combined with Chinese herbal medicine. Professor Lin Xianming [95] pointed out that in the treatment of insomnia, "acupuncture should be used to regulate the mind first, and prescription should be used to regulate the body." The combination of the 2 can treat both the symptoms and root causes, complementing each other.
Xu Kaiquan et al [96] designed a randomized controlled trial using Dong's Qi acupuncture method (Neiguan, Shenmen, Taixi, Guanyuan, Baihui) combined with a nourishing-heart, tranquilizing-mind and phlegm-dispelling decoction (Fu Ling, Ban Xia, Xia Ku Cao, Zhu Ru, Chen Pi, Yuan Zhi, Fu Shen, Huang Qi, Yan Hu Suo, roasted Glycyrrhiza, sour jujube seed, and Citrus aurantium) to treat 123 cases of phlegm-damp internal obstruction insomnia, divided into a combined group, control group A, and control group B. The results showed that all 3 groups improved the patients' insomnia symptoms after treatment, and the phlegm-dampness constitution score of the combined group was lower than those of the other 2 groups, while its GABA and 5-HT levels were higher and its Glu and DA levels were lower than those of the other 2 groups. This indicates that the combined treatment modality can significantly improve the sleep quality of patients with phlegm-damp internal insomnia, improve the phlegm-damp constitution, and regulate serum GABA and Glu levels. Qin Meiying et al [97] collected 164 patients with insomnia of the heart and spleen deficiency type and randomly and equally assigned them to treatment and control groups; the treatment group received sour jujube seed decoction combined with the Ziwu Liuzhu acupuncture method, while the control group was administered estradiol tablets. The BDNF and GDNF levels in the treatment group were higher than those in the control group, indicating that this combined treatment modality improved patients' sleep quality by regulating the immune-inflammatory state, inhibiting the inflammatory response, and regulating neurotrophic factor levels. Ma et al collected 108 patients with insomnia of the heart and spleen deficiency type and divided them into a control group and a treatment group; the treatment group used the combined treatment and the control group used only the tranquilizing acupuncture method. Sleep quality improved and the PSQI and TCM symptom scores were lower in the treatment group, IL-6 and DA were lower in the treatment group than in the control group, and 5-HT and GABA were higher, indicating that the efficacy of the treatment group was better than that of the control group and that this combined treatment method was effective in improving patients' sleep quality and treating insomnia. Yan Xueli et al [98] used acupuncture (at Sishencong, Anmian, Shenmen, Sanyinjiao, Ganshu (Liver Shu), Feishu (Lung Shu), Fengchi, and Zusanli) combined with Xiang Shu Tang (Xiang Shu, Chuan Xiong, Chai Hu, Qing Pi, sour jujube seed, and Hefei Pi) to treat 120 patients with perimenopausal insomnia of the liver-depression and qi-stagnation type, randomly and equally assigned to treatment and control groups. The treatment group received the combined treatment, the control group was given eszopiclone tablets, and the treatment course was 16 weeks. The results showed that insomnia symptoms improved in both groups, and the TCM symptom, PSQI, and Hamilton Anxiety Scale scores were lower in the treatment group than in the control group. Serum luteinizing hormone and follicle-stimulating hormone levels decreased significantly and estradiol levels increased in the treatment group, and these changes were more significant than those in the control group.
The significant decrease in serum luteinizing hormone and follicle-stimulating hormone levels and the significant increase in estradiol levels after the combined treatment may be one of the mechanisms by which it improves sleep-wake regulation.
Intersection of TCM and modern medicine
Although it is well known that TCM theory holds views on the pathophysiology and neurobiology of insomnia that differ from those of modern medicine, especially regarding the centrality of the "heart" in TCM theory, some TCM practitioners began to recognize after the Ming and Qing dynasties that the "brain" is the highest center of the whole body. However, given the multi-centered and multi-level theoretical characteristics of Chinese medicine, which are not based on systematic anatomy alone, the "heart" is still regarded as the "sovereign organ" in the education and medical practice of Chinese medicine. Modern medicine, on the other hand, grounded in anatomy and biomedicine, has always firmly regarded the functions that Chinese medicine attributes to the heart as functions of the brain. Theories have not remained unchanged, however, and the development of the heart-brain axis theory and the study of brain-heart functional interactions have provided meaningful insights for cardiology and neuroscience. With respect to biological signal processing, this interaction has primarily been described through the linear dynamics of neural and heartbeat activity, expressed through correlated time- and frequency-domain features. However, the dynamics of the central and autonomic nervous systems exhibit nonlinear and multifractal behavior, and the extent to which this behavior affects brain-heart interactions is currently unknown. In a recent study, Catrambone et al [99] reported a new signal processing framework designed to quantify nonlinear functional brain-heart interactions in non-Gaussian and multifractal domains, which relies on a maximal information coefficient analysis between nonlinear multi-scale features obtained from the EEG spectrum and an inhomogeneous point-process model of heartbeat dynamics. Their conclusions show that significant physical and sympathetic changes, such as those induced by cold-pressor stimulation, affect brain-heart functional interactions beyond second-order statistics, extending to multifractal dynamics. These results provide a platform for identifying novel neurally targeted biomarkers. Such studies increasingly suggest that neither the heart nor the brain alone may be the single center of the body and raise the possibility of a binary "heart-brain" central framework. A much simplified sketch of how such nonlinear brain-heart coupling can be screened for is given after this paragraph.
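The sketch below is a greatly simplified stand-in for this kind of analysis: it screens for (possibly nonlinear) statistical dependence between an EEG band-power series and a heart-rate series with a k-nearest-neighbor mutual information estimator from scikit-learn. It is not the maximal information coefficient or the point-process framework of Catrambone et al., and the simulated signals are purely illustrative.

import numpy as np
from sklearn.feature_selection import mutual_info_regression

def brain_heart_dependence(eeg_power, heart_rate):
    # Nonparametric mutual information estimate between two aligned time series.
    x = np.asarray(eeg_power, dtype=float).reshape(-1, 1)
    y = np.asarray(heart_rate, dtype=float)
    return float(mutual_info_regression(x, y, random_state=0)[0])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    hr = 60 + rng.normal(0, 2, 500)                            # toy heart-rate series (bpm)
    eeg = np.tanh(0.1 * (hr - 60)) + rng.normal(0, 0.1, 500)   # toy nonlinearly coupled EEG power
    print("Mutual information estimate:", brain_heart_dependence(eeg, hr))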
Chinese medicine treatment of insomnia: challenges and shortcomings
The efficacy of evidence-based TCM treatment is inevitably limited by the professionalism and ability of individual TCM practitioners, which means that TCM treatment for insomnia currently cannot achieve objective, standardized, and quantified treatment effects; this in turn limits the progress of TCM research. At present, research on insomnia in Chinese medicine is not objective, standardized, and quantified because of the diversity and variability of syndrome typing and because Chinese medicine formulas, based on pattern differentiation and treatment, are "added to and subtracted from at the moment and changed flexibly" in clinical practice. As a result, research on the pattern identification model of TCM, which is the core of treatment, has not been widely recognized internationally, and the value of such research has been seriously affected. The value of acupuncture in the treatment of insomnia has become increasingly widely recognized, because some studies are grounded in modern medical research and because acupuncture points allow indicators and outcomes to be quantified to a certain extent, but acupuncture research also has its shortcomings. Randomized controlled trials of acupuncture and animal experiments are limited by the research methods themselves, and it is difficult to achieve the large samples, often in the thousands, that such observations would require.
In the process of research, TCM researchers also face differences in their understanding of modern medicine, which suggests that our research still has a long way to go.
Future perspectives
Despite the challenges faced, this has not dampened the enthusiasm of TCM researchers. Cheng-Yong Liu et al [51] designed a randomized controlled protocol of acupuncture combined with cognitive behavioral therapy for the treatment of insomnia. It is of high scientific value to explore the possibility of combining these 2 treatments. At the same time, Jin Yarong et al [100] used the combination of tonic kidney and Anzhi formula with cognitive behavioral therapy to treat insomnia and designed a randomized controlled trial, which showed that the efficacy of the combined treatment was better than that of cognitive behavioral therapy alone. These studies have made a promising exploration of TCM for the treatment of insomnia.
We must understand that TCM is rooted in its classical tradition; we must value the TCM classics, actively explore works such as the Huang Di Nei Jing, the Shang Han Lun (Treatise on Cold Damage), and the Qian Jin Yao Fang, and use modern science and technology to verify their value, so as to study and promote the development of TCM theory and the progress of TCM clinical practice.
Different Stationary Phase Selectivities and Morphologies for Intact Protein Separations
Abstract

The central dogma of biology proposed that one gene encodes one protein. We now know that this does not reflect reality. The human body has approximately 20,000 protein-encoding genes; each of these genes can encode more than one protein. Proteins expressed from a single gene can vary in terms of their post-translational modifications, which often regulate their function within the body. Understanding the proteins within our bodies is a key step in understanding the cause, and perhaps the solution, to disease. This is one of the application areas of proteomics, which is defined as the study of all proteins expressed within an organism at a given point in time. The human proteome is incredibly complex. The complexity of biological samples requires a combination of technologies to achieve high resolution and high sensitivity analysis. Despite the significant advances in mass spectrometry, separation techniques are still essential in this field. Liquid chromatography is an indispensable tool by which low-abundant proteins in complex samples can be enriched and separated. However, advances in chromatography are not as readily adapted in proteomics compared to advances in mass spectrometry. Biologists in this field still favour reversed-phase chromatography with fully porous particles.

Introduction

Proteomics is often applied to clinical studies in the search of biomarkers [1]. These biomarkers are mostly proteins that are found in the tissue or plasma of patients suffering from a particular disease yet may be expressed in different amounts in healthy patients. While this sounds simple, the reality is that there are approximately 20,000 protein-encoding genes in the human body [2]. Many of these genes code for more than one protein isoform (proteoforms). These proteoforms arise from various post-translational modifications (PTMs) including phosphorylation, methylation and ubiquitination to name but a few, which can change the function of the protein in addition to modifying its structure. Since there are a number of amino acids that can act as sites for PTMs it follows that proteoforms can have varying degrees of PTMs in addition to multiple types of PTM. Consequently, from any given protein-encoding gene, a large number of proteins can be produced. Proteins are expressed in varying abundance, with some proteins, such as albumin in blood, being much more abundant than other proteins present in biological material. It follows that these aspects significantly complicate the study of any proteome. Such samples require analytical techniques capable of providing high resolving power and sensitivity.
Mass spectrometry (MS) is an obvious choice for proteomics research given its separation power and the ability to characterize protein structure through the interpretation of the fragmentation patterns in mass spectra. However, the analysis of intact proteins using MS faces a number of technical challenges. The large dynamic range in protein abundances within a sample can result in suppression of the ionization of low-abundant proteins, reducing their ability to be detected. Once ionized, intact proteins feature multiple charge states all corresponding to the same protein species, with multiple isotopes for each charge state. Different types of mass spectrometers offer different levels of resolving power, which may or may not be enough for the isotopic distributions of each of a protein's multiple charge states to be resolved. Developments in Fourier Transform Ion Cyclotron Resonance (FTICR) MS have been an important step towards improving our ability to analyze intact proteins. In an MS imaging application using FTICR and secondary ion MS, resolving power in the order of 3,000,000 has been reported [3]. This compares to a resolving power of 2000-10,000 using time-of-flight (TOF) in a similar setup.
Even with the high resolving power of FTICR MS, hyphenation of MS with other separation techniques is necessary to reduce sample complexity. Liquid chromatography (LC) is widely used for this purpose due to its high separation power and the ability to hyphenate it with MS, typically via electrospray ionization (ESI). However, using LC for protein separations faces its own technical challenges. Proteoforms often have similar physicochemical properties, making their separation extremely difficult. In addition, the diffusion coefficient of proteins is relatively small compared to small molecules, increasing the time taken for mass transfer and resulting in chromatographic band broadening. Furthermore, proteins can be difficult to dissolve in solvents commonly used as mobile phases in LC. These challenges can be avoided by digesting the proteins into peptides using enzymes such as trypsin. This approach is commonly referred to as "bottom-up". The bottom-up approach is not without its own pitfalls, namely the possible loss of protein information [4,5]. Digestion of proteins into peptides can result in peptides whose sequence is present in a number of proteins, making protein identification prone to error. Missed cleavages due to inefficient digestion can hamper the ability of bioinformatics software to match peptide sequences derived from mass spectra to the sequences within software databases. The lost information can include valuable details such as the position of PTMs that are of interest in the study of biological pathways. The analysis of intact proteins, referred to as "top-down", can avoid the pitfalls of the bottom-up approach if sufficient fragmentation of protein ions can occur in the gas phase during tandem MS (MS/MS) [4]. It follows that the top-down approach is attractive for applications where complete information on the protein structure is of primary importance. This requires the technical challenges of separating intact proteins with LC to be addressed.
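As a rough illustration of the bottom-up workflow described above, the sketch below performs a naive in-silico tryptic digest (cleaving after lysine or arginine unless followed by proline) and retains a chosen number of missed cleavages; the sequence, function name and parameters are hypothetical, and real digestion software handles many more rules and edge cases.

```python
import re

def tryptic_digest(sequence: str, missed_cleavages: int = 0) -> list[str]:
    """Naive in-silico tryptic digest: cleave after K or R, but not before P.

    Returns all peptides containing up to `missed_cleavages` internal
    cleavage sites, mimicking incomplete digestion.
    """
    # Split after K/R not followed by P (the classical trypsin rule).
    fragments = [f for f in re.split(r'(?<=[KR])(?!P)', sequence) if f]

    peptides = []
    for i in range(len(fragments)):
        for j in range(i, min(i + missed_cleavages + 1, len(fragments))):
            peptides.append(''.join(fragments[i:j + 1]))
    return peptides

# Hypothetical example sequence (not a real protein).
seq = "MKWVTFISLLFLFSSAYSRGVFRRDAHK"
print(tryptic_digest(seq, missed_cleavages=1))
```

Allowing one missed cleavage already multiplies the number of candidate peptides, which illustrates why inefficient digestion complicates database matching.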
To address the challenges faced by top-down proteomics, a range of different chromatographic selectivities and stationary phase morphologies can be employed. However, the proteomics community remains largely faithful to reversed-phase chromatography (RPLC) using C18, C8 or C4 stationary phases and fully porous particle packed columns. The latest developments in column technology, such as core-shell particles and monolithic columns, appear to remain obscure to most scientists within the field of proteomics. Relatively new selectivities such as hydrophilic interaction liquid chromatography (HILIC) are becoming more recognised, namely for the analysis of glycoproteins [6,7], yet are not widely applied. This bottleneck in knowledge transfer between the analytical and biological communities may, in part, be due to the rapid advancement of MS technology in addition to the abundance of reviews and research publications focusing on the use of MS; we cite just a few recent reviews here for those readers seeking the MS perspective on protein analysis [5,[8][9][10][11]. The purpose of our review is to address this imbalance by presenting recent advancements in LC column technology for protein separations. We will discuss the selectivities and column morphologies most recently used in addition to emerging chromatographic technologies that have shown potential value for the separation of intact proteins.
Implementation of Different Selectivities in Intact Protein Analysis
Proteomic studies include different approaches such as protein profiling, monitoring PTMs and protein-protein interactions. Each of these application areas requires a certain type of chromatographic separation. RPLC has a long tradition in intact protein analysis and its compatibility with electrospray ionization MS has made it an important technique in top-down proteomics. However, other separation modes that separate based on protein structure, mass, charge or the presence of specific chemical functional groups are also employed. These techniques include hydrophilic interaction liquid chromatography (HILIC), affinity chromatography, hydrophobic interaction chromatography (HIC) and size exclusion chromatography (SEC).
Reversed-Phase Liquid Chromatography
RPLC is one of the most widely used methodologies in protein analysis [12]. Structural features of proteins such as their conformation, size and molecular weight make RPLC of intact proteins more demanding than small molecule separations in terms of carry-over, peak broadening, multiple peak formation and strong adsorption on the stationary phase [13][14][15]. To improve chromatographic performance in RPLC many parameters such as particle size, mobile phase, column temperature and column length can be optimized. Due to their hydrophobic properties, proteins show strong retention on long-chain (C8, C18) stationary phases, which leads to low recovery, peak tailing and a decrease in intensity. Therefore, less hydrophobic stationary phases with shorter alkyl chains (e.g. C2, C4), which show faster desorption of proteins from the stationary phase, are preferable. The power of RPLC for the analysis of proteins was demonstrated by Rehder et al. for the separation of the light chain and two variants of heavy chains (N-terminal glutamine and N-terminal pyroglutamate) of reduced monoclonal antibodies [16]. For this separation, different stationary phases were tested, including C3, C8, C18, and CN Agilent Zorbax Stable Bond SB300 columns with 3.5 μm particle size and the Varian Pursuit DiPhenyl column with 3 μm particle size, using an increasing percentage of n-propanol in acetonitrile with 0.1 % trifluoroacetic acid (TFA) as the mobile phase. The Varian DiPhenyl column showed the highest plate number while the Zorbax CN column showed the highest selectivity and resolution.
Faster desorption of proteins from the stationary phase can also be achieved using gradient elution. It is well known that in RPLC the retention of analytes decreases as the proportion of organic solvent in the mobile phase increases. For proteins, the change in retention with increasing organic modifier content is much more pronounced than for small molecules (Fig. 1) [17,18]. Therefore, small increases in organic modifier lead to large reductions in retention, amounting to what may appear as an 'on-off' retention mechanism. Another important advantage of gradient elution is peak compression. In brief, peak compression occurs because the rear part of the solute band (or peak) elutes faster than the front part of the solute band, owing to the increasing elution strength of the mobile phase as a function of gradient time [19]. The resulting peak compression reduces the peak width, counteracting the effects of band broadening. Mobile phase additives, such as TFA, also affect retention and peak shape. Recently the effect of a wide range of common mobile phase additives was examined for 11 intact proteins [20]. While the use of TFA provided symmetrical peak shapes (due to ion-pairing), it was linked to a loss in MS sensitivity attributed to ionization suppression. The use of 10 mM formate buffer (pH 3) was a suitable MS-compatible alternative to TFA, providing a boost in sensitivity by a factor of 5 compared to TFA without compromising peak shape.
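The steep, almost 'on-off' retention behaviour of proteins can be illustrated with the linear solvent strength (LSS) model, log k = log kw − S·φ, where φ is the fraction of organic modifier and S increases with solute size. The sketch below uses purely illustrative values of log kw and S (they are assumptions, not fitted data) to show how a small change in φ barely affects a small molecule but switches a protein between essentially irreversible retention and no retention.

```python
import numpy as np

def retention_factor(phi, log_kw, S):
    """Linear solvent strength (LSS) model: log10 k = log10 kw - S * phi."""
    return 10 ** (log_kw - S * phi)

phi = np.linspace(0.0, 0.6, 7)  # fraction of organic modifier in the mobile phase
small_molecule = retention_factor(phi, log_kw=2.0, S=4.0)    # assumed values
protein        = retention_factor(phi, log_kw=12.0, S=40.0)  # assumed values

for f, k_small, k_prot in zip(phi, small_molecule, protein):
    print(f"phi = {f:.1f}  k(small) = {k_small:10.2f}  k(protein) = {k_prot:10.2e}")
# The protein's k spans many orders of magnitude over a narrow phi window,
# which is the 'on-off' retention described in the text.
```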
Together with the physical properties of the stationary phase, other aspects such as pressure, temperature and the composition of the mobile phase play important roles in governing the success of intact protein separations. The introduction of smaller particles and ultra-high performance liquid chromatography (UHPLC or UPLC) means that protein separations can now occur at pressures over 1000 bar. In the study of Eschelbach et al. the effect of elevated pressure on protein recovery and protein carry-over was examined for 4 model proteins: ribonuclease A, ovalbumin, myoglobin and bovine serum albumin (BSA) [14]. Results showed that for separations of the 4 proteins on a 5 μm packed column at a pressure of 160 bar, carry-over was present, and examination of the mass spectra indicated that it was necessary to run 6 blanks to clean the column. Inspection of mass spectra from the separation of the same proteins on the 1.4 μm packed column at a pressure of 1580 bar indicated that no carry-over was present. While this result is encouraging, it is important to note that pressure affects the way that proteins are retained on the stationary phase, which means that attention must be paid when transferring separation methods from lower pressure to higher pressure. Retention changes under pressure arise from a change in the molar volume of proteins. The theory behind this mechanism is quite complex, as the effect of pressure on retention is intertwined with temperature [21] and with the change of protein conformation upon adsorption onto the stationary phase and the kinetics of this adsorption [22]. In brief, increasing pressure reduces the solvation layer of the alkyl-bonded phase of the C18 stationary phase. In conjunction with the reduced solvation of the ligands, solvent molecules form a solvation layer around the protein. It is known that proteins change conformation, exposing their hydrophobic regions, upon adsorption onto the stationary phase in reversed-phase conditions [23]. Exposure of the protein's hydrophobic region, in conjunction with a reduction in the solvation of the protein and a possible change in the conformation of the stationary phase ligands, increases retention in reversed-phase conditions. Pressure-induced increases in retention as high as 300 % have been reported for insulin variants [22]. Such gains in retention were not as large for other proteins [24], indicating that the effect of pressure is protein-specific.

Fig. 1 Relationship between the retention factor and the proportion of organic modifier (acetonitrile) in the mobile phase, shown for a small molecule (benzene, diamonds), a peptide (squares) and a protein (bovine cytochrome C, triangles). Adapted from [17].
Effects of pressure on retention have been shown to be interrelated with the effects of temperature on retention [21,24]. There are two sources of heat in chromatography: frictional heating, generated as the mobile phase passes through the stationary phase at high linear velocities under high pressure conditions such as those found in UHPLC, and heating of the column and mobile phase using column heating devices. As the mobile phase travels through the stationary phase under these conditions, friction occurs and generates radial and axial temperature gradients [25]. These temperature gradients generate viscosity gradients as a result of the relationship between viscosity and temperature. The consequence is the formation of radial linear velocity gradients that distort the solute band as it moves through the column, reducing the efficiency of the separation for small molecules. While the relationship between efficiency and frictional heating has not been defined for proteins, its effect on retention has been studied. At constant inlet pressure, thereby reducing the effect of pressure, frictional heating of 1.5-1.7 W heat power reduced the retention of insulin by 20-45 % [26]. For larger proteins, namely myoglobin and lysozyme, the effect was more significant: a reduction in retention by up to 75 % [24]. While frictional heating arises from the generation of heat within the column, applying temperature externally by heating the column and mobile phase also affects retention for small molecules and proteins. For small molecules, increasing temperature is known to increase the rate of diffusion of the solute molecules, speeding up their mass transfer; gains in efficiency can therefore be seen, along with a change in the value of the optimum flow rate. The application of temperature and pressure is expected to cause denaturation of the protein, that is, the unfolding of the protein structure. Given that different proteins have a different ratio of secondary structural elements, such as α-helices and β-sheets, the relationship between denaturation and protein conformational changes may be protein-specific. The protein-specific nature of the relationship between temperature and retention was shown for filgrastim, interferon alpha-2A, lysozyme and myoglobin [24]. At a constant pressure of 200 bar, retention decreased significantly with increasing temperature for lysozyme and myoglobin, from retention factors of around 6 at 20 °C to less than 2 at 70 °C. Filgrastim and interferon alpha-2A showed a different relationship: retention increased with increasing temperature until a specific temperature range around 50 °C, after which retention started to decrease. In general, these relationships were seen at other constant pressures up to 1000 bar, with some amplification of the effect owing to the relationship between pressure and retention, where retention increased with increasing pressure. Denaturation exposing the hydrophobic core of proteins does agree with the increase in retention initially seen for some proteins in reversed-phase conditions. However, the observed decrease in retention with increasing temperature past a certain value seems to contradict this and suggests that another factor influences retention besides exposure of the hydrophobic core. A decrease in the molar volume of the protein at temperatures higher than 50 °C has been put forward as an explanation of the decreasing retention at elevated temperatures.
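The watt-level frictional heat powers quoted above can be rationalised from the product of volumetric flow rate and pressure drop (P = F·ΔP). The flow rates and pressures in the sketch below are illustrative assumptions, not taken from the cited studies; they simply show that UHPLC-like conditions dissipate heat on the order of 1-2 W.

```python
def frictional_heat_power(flow_ml_min: float, delta_p_bar: float) -> float:
    """Approximate frictional heat power (W) dissipated across the column.

    P = F * deltaP, with F converted from mL/min to m^3/s and
    deltaP from bar to Pa.
    """
    flow_m3_s = flow_ml_min * 1e-6 / 60.0
    delta_p_pa = delta_p_bar * 1e5
    return flow_m3_s * delta_p_pa

# Illustrative UHPLC-like operating points (assumed values).
for flow, dp in [(0.5, 1000), (1.0, 1000), (1.0, 1500)]:
    print(f"{flow} mL/min at {dp} bar -> {frictional_heat_power(flow, dp):.2f} W")
```

At 1 mL/min and 1000 bar the estimate is about 1.7 W, in the same range as the 1.5-1.7 W values discussed above.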
Hydrophilic Interaction Liquid Chromatography (HILIC)
HILIC presents a powerful alternative for the separation of proteins that show strong retention in RPLC. Advantages such as selectivity towards highly polar compounds, reduced backpressure due to the higher amount of organic solvent used compared to RPLC, and increased LC-MS sensitivity have become important aspects of the use of HILIC for protein analysis. The retention mechanism of HILIC, the characteristics of stationary and mobile phases, and applications have been the topic of several recent reviews [27][28][29][30][31][32]. HILIC alone or in combination with other techniques has been used for the desalting and fractionation of proteins prior to MS, the analysis of membrane proteins, and the study of protein PTMs [6,[33][34][35]. The advantage of HILIC is that its polar stationary phase promotes the retention of polar compounds, which are then eluted isocratically or by gradient elution by increasing the proportion of the polar aqueous component of the mobile phase. Therefore, in HILIC hydrophilic compounds elute later than hydrophobic compounds and the elution order is typically the opposite of that in RPLC. However, the high amounts of organic solvents used in HILIC could cause protein denaturation, which limits its use in native protein analysis. Moreover, ion-pairing agents such as TFA are often added to the mobile phase to improve peak shape and resolution and to change the retention of the proteins. Addition of TFA can promote an ion-pairing mechanism, so that retention of the proteins may be driven more by their hydrophilic residues and modifications. However, when a mobile phase containing TFA is coupled to MS, analyte signal intensity might be reduced [20]. In the study of Periat et al., a comparison of mobile phases containing 0.1 % TFA, 50 mM ammonium formate and 0.5 % acetic acid for the analysis of RNase B showed that TFA significantly reduced retention time in addition to enhancing peak shape and resolution compared to the other two additives [34].
The retention mechanism in HILIC is considered to be a combination of partition and adsorption of compounds between the mostly organic mobile phase and the partially immobilized water layer on the stationary phase. Interactions between analytes and the stationary phase are complex and can arise from the combined effect of electrostatic, hydrophobic and ion-exchange interactions, while hydrogen bonding might also be involved. The dominant interaction depends on the type of stationary phase together with the pH and composition of the mobile phase. HILIC mobile phases generally consist of 5-50 % water with the remainder a water-miscible organic solvent, most commonly acetonitrile [29]. Alcohols can also be used; however, a higher proportion of these solvents in the mobile phase is needed to provide retention of the analyte similar to that obtained with an aprotic solvent. Other suitable organic solvents can be selected based on the eluotropic series showing the relative elution strength of organic solvents used in HILIC: acetone < isopropanol ~ propanol < acetonitrile < ethanol < dioxane < DMF ~ methanol < water [28]. HILIC separations can be performed in isocratic or gradient elution mode. Isocratic elution usually employs a high percentage of organic solvent in the mobile phase, while in gradient elution the starting mobile phase composition contains a high percentage of organic modifier and elution is promoted by increasing the percentage of water.
Diverse stationary phases are used in HILIC separations depending on the application. Stationary phases often include highly polar functional groups similar to those used in normal-phase LC, such as hydroxy, amino, amido and cyano functionalities [28]. New stationary phases for HILIC continue to be developed and applied for protein separations. Carrol et al. reported the use of a polyhydroxyethyl-aspartamide HILIC column for the separation of mitochondrial membrane proteins [36]. An evaluation of different HILIC columns was carried out by Tetaz et al., who compared four HILIC stationary phases: Polyhydroxyethyl A (silica coated with polyhydroxyethylaspartamide), ZIC-HILIC PEEK (zwitterionic ligand covalently attached to porous silica), ProntoSIL 300-5-Si (bare silica) and TSKgel Amide 80 (polymer-coated silica) for the separation of human apoA-I, recombinant human apoM and equine cytochrome C [6]. It was shown that human apoM and human apoA-I eluted later on the ZIC-HILIC PEEK column than on Polyhydroxyethyl A under the same mobile phase conditions. Electrostatic interaction of the analytes with the zwitterionic ligand on the ZIC-HILIC column might be the reason for the later elution of these proteins and for the non-elution of cytochrome C within 30 min.
HILIC has proven useful for addressing one of the most challenging PTMs: glycosylation. Due to the complexity arising from the microheterogeneity of sugar moieties, RPLC of intact glycoproteins often does not provide enough resolution to distinguish between different glycoforms. Therefore, the potential of HILIC for the analysis of glycans while they are still attached to the protein backbone has been examined as a complementary approach to the analysis of released glycans [37]. Pedrali et al. showed the potential for the separation of ribonuclease A and five intact isoforms of its naturally glycosylated variant ribonuclease B on a HILIC amide column [37]. Moreover, HILIC has shown great potential in the analysis of histone forms from human cells [38] and H4 forms from HeLa cell lines [39]. Histone separation can be achieved by separating the subtypes based on the number of acetyl groups and then by the level of methylation, greatly reducing sample complexity. In the study of Pesavento et al. the combination of RPLC and HILIC was used to separate different forms of histones [35]. Histone H4 was first purified from crude HeLa S3 histone using RPLC, and the multiply modified histone H4 forms were then separated on a HILIC PolyCAT A column, with each 1 min fraction analyzed separately using Fourier transform MS. Fractions eluted according to the degree of acetylation or methylation, with the most hydrophobic fractions (tetra-acetylated and triple-methylated) eluting first relative to forms carrying only N-terminal acetylation, allowing 42 unique histone forms to be characterized and quantified.
Affinity Chromatography
Another effective technique for protein separation, enrichment and purification is affinity chromatography, which is based on the reversible adsorption of a targeted protein to a ligand immobilized on the matrix [40]. The possibilities affinity techniques provide for rapid purification of proteins, high protein loading, and compatibility with different buffers and additives have made them popular. Typical workflows include the use of conditions that favour maximum protein adsorption during sample loading, a washing step for the removal of unbound substances, and desorption of the target protein from the column in the elution step. Binding of the protein to the ligand can be influenced by several parameters such as the amounts of targeted protein and immobilized ligand, the flow rate used for binding and the nature of the protein-ligand interaction. Higher flow rates might reduce protein binding if the protein-ligand interaction is weak or the mass transfer rate is slow [41].
One of the most widely used affinity techniques for intact proteins is immobilized metal affinity chromatography (IMAC). This technique is based on the affinity of certain amino acid residues such as cysteine, tryptophan and histidine, exposed on the protein surface, for binding with metal ion coordination sites [40]. The metal ion is covalently attached to a chelating agent that is immobilized on the stationary phase surface, and together they form an immobilized metal ion chelate complex. Retention of proteins can be affected by the nature of the metal ion, the structure and density of the chelating compound, the presence of salt and additives in the buffer, organic solvent and protein size. Adsorption of proteins is based on the interaction of electron-donor groups located on the protein surface with the immobilized metal ions, which act as electron pair acceptors. Interactions between metal ions and proteins are complex and have been shown to be a combination of electrostatic (or ionic), hydrophobic and/or donor-acceptor coordination interactions [42]. The most commonly used are the transition metal ions Cu2+, Zn2+, Co2+, Ni2+ and Fe2+. Electron-donor atoms such as N, S and O that are present in the chelating compounds attached to the support can coordinate metals. The remaining metal coordination sites are mostly occupied by water or buffer molecules and can be exchanged with electron-donor groups from proteins. While many protein residues such as aspartic acid, histidine, glutamic acid, arginine, lysine, methionine, tyrosine and cysteine can participate in binding, the imidazole side chain of histidine residues, as an electron donor, contributes more to binding than the N-terminus of proteins [40,43].
The most commonly used solid supports in IMAC are based on soft-gel matrices such as agarose, cross-linked dextran, or inorganic adsorbents like silica [40,44]. The characteristics of the solid support and immobilization conditions should allow the maximum amount of protein to be adsorbed, show low non-specific adsorption, have uniform pore size and be stable under a wide range of experimental conditions. The low mechanical strength of gel matrices limits their use in high-pressure systems; therefore, other materials with better mechanical properties have been considered. Silica has the potential to be used as a rigid support for fast and efficient separations and possesses higher mechanical strength compared to soft-gel matrices. However, its surface needs to be modified by coating with hydrophilic materials to minimize irreversible non-specific adsorption of proteins. Silica surface modifications can include adsorption of agarose, dextran or chitosan to reduce irreversible adsorption of the proteins. Another alternative to soft-gel matrices is the use of microporous membranes as supporting matrices, since they provide higher sample throughput and stability by allowing higher flow rates [45]. A phosphate-Zr4+ immobilized metal affinity membrane (IMAM) was used to evaluate the adsorption and selectivity for phosphorylated proteins (β-casein and ovalbumin) versus non-phosphorylated proteins (bovine serum albumin and lysozyme). The adsorption isotherms showed that the phosphate-Zr4+ IMAM had higher binding capacity and selectivity for the phosphorylated proteins than for the non-phosphorylated proteins. Adsorption of β-casein and ovalbumin increased over the protein concentration range of 0.1-0.6 mg mL−1, while for BSA and lysozyme no significant increase was observed even at a concentration of 1 mg mL−1, showing the potential of the phosphate-Zr4+ IMAM for the enrichment of phosphorylated proteins [46]. The development of new stationary phases offering specific and selective binding of proteins and good protein recovery has led to IMAC being extensively used in antibody purification [47][48][49][50][51]. Evaluation of the performance of iminodiacetic acid (IDA) and Tris(2-aminoethyl)amine (TREN) as chelating agents in the purification of IgG with immobilized nickel affinity polyethylene vinyl alcohol (PEVA) hollow fiber membrane chromatography showed that Ni(II)-TREN had a lower binding capacity for IgG than Ni(II)-IDA: 9.8 and 9.4 mg for Ni(II)-IDA-PEVA versus 1.4 and 1.5 mg for Ni(II)-TREN for protein elution and regeneration, respectively [52]. Another study using tridentate (IDA), tetradentate (NTA, CM-Asp) and pentadentate (TED) chelating agents showed that agents of higher denticity increase selectivity in binding the proteins but show lower protein binding capacities compared to IDA [45].
Due to the ability of phosphate groups to chelate metal ions, IMAC has become an important tool for enriching phosphorylated proteins prior to analysis with MS [53][54][55][56][57]. It has been shown that phosphorylated proteins prefer binding to Fe3+, Al3+ or Ga3+ [56,58,59]. While Fe3+ is typically used, Machida et al. compared Ga3+, Fe3+, Zn2+ and Al3+ and showed that Ga3+ was the most efficient [59]. IMAC has proven to be an effective tool for comprehensive phosphoproteomic studies in plants. Enrichment of phosphoproteins using PHOS-Select iron affinity gel beads allowed the detection of 132 phosphoproteins from Arabidopsis leaves. Depletion of d-ribulose bisphosphate carboxylase/oxygenase (Rubisco) and other highly abundant proteins using polyethylene glycol (PEG) fractionation alone significantly increased the number of identified phosphorylated proteins, while in combination with IMAC more than double the number of phosphorylated proteins were identified in depleted samples [56,57]. A somewhat similar technique, metal oxide affinity chromatography (MOAC), has also been employed in phosphoproteomics [60]. In this technique, metal oxides such as aluminium hydroxide (Al(OH)3), titanium dioxide (TiO2) and zirconium dioxide (ZrO2) are typically used for the enrichment of phosphorylated proteins and, more commonly, phosphorylated peptides [61][62][63][64]. In a recent study, the enrichment of phosphorylated proteins from a mixture of phosphorylated proteins (β-casein and ovalbumin) and non-phosphorylated proteins (BSA, myoglobin and cytochrome C) was performed using ZrO2 nanofibers prepared by electrospinning [65]. Results demonstrated selective adsorption of acidic, neutral and basic phosphorylated proteins on the ZrO2 nanofibers when loading buffers of different pH were used.
While metal affinity techniques have proven the most popular for protein analysis, other affinity techniques have also been used. An efficient tool for the collection of highly purified recombinant proteins is the use of genetically engineered polyhistidine tags attached to the proteins of interest. The presence of multiple histidine residues improves binding of the protein to the support, usually containing Cu2+ or Ni2+. While the number of tags attached to the protein might vary depending on the study, the most popular is the His6 tag [66][67][68]. In one study, Magnusdottir et al. observed a tenfold increase in the yield of His6-GFP using IMAC for Escherichia coli lysate in which periplasm components were removed prior to lysis [69]. Aside from enrichment, using affinity techniques to remove specific proteins or protein groups has proven to be an efficient way of overcoming the obstacle presented by the wide dynamic range of protein concentrations.
Depletion of highly abundant plasma proteins using different depletion techniques based on immunoaffinity protein removal has enabled selective profiling of low-abundant proteins [70][71][72]. One of these methods, known as the multiple affinity removal system (MARS), is based on the presence of different high-affinity antibodies designed for the rapid removal of highly abundant proteins such as albumin, IgG, IgA, transferrin, haptoglobin and antitrypsin from human biological fluids [73,74]. A step forward in technology was online immunodepletion in two-dimensional systems, where automatic depletion, desalting and fractionation were achieved [75]. The combination of MARS immunodepletion and multi-lectin affinity chromatography (M-LAC) was investigated for rapid screening of changes in protein levels in particular diseases [76]. Double separation of the samples enabled the identification of changes in the levels of the proteins angiotensinogen and apolipoprotein CI in patients compared to controls, and might be used as a potential tool to reduce the complexity of plasma samples. The limitation of immunoaffinity techniques is that they require antibodies with affinity for the protein of interest. It is not always feasible to acquire such antibodies, which may explain the relatively large number of applications of metal affinity techniques compared to immunoaffinity methods.
Hydrophobic Interaction Chromatography
In hydrophobic interaction chromatography (HIC), the native protein structure is better preserved than in RPLC, and the technique is widely used in protein purification, often in combination with other chromatographic techniques [77][78][79][80]. In HIC, proteins are separated based on their hydrophobicity under non-denaturing conditions, with high resolution and a selectivity that is orthogonal to RPLC [81][82][83][84]. Hydrophobic regions of the proteins interact with hydrophobic ligands on the stationary phase (butyl, octyl, phenyl) under conditions which promote hydrophobic interactions, such as high concentrations of salt in the mobile phase [79]. In an aqueous medium, hydrophilic amino acid residues in the protein form hydrogen bonds with surrounding molecules, and water molecules form ordered structures around the macromolecules. Addition of salts promotes solvation of the salt ions and decreases the number of water molecules interacting with the hydrophilic regions of the protein. Under these conditions protein molecules have stronger intermolecular interactions and self-associate or aggregate, which is a thermodynamically favoured process [79,85]. The impact of particular ions on hydrophobic interactions can be estimated using the Hofmeister series, and the optimal salt concentration for a separation can vary owing to individual differences in the interaction between protein and stationary phase. High salt concentration promotes the protein-ligand interaction, and protein desorption is stimulated using gradient elution with decreasing salt concentration. The most commonly employed salts are sulfates, phosphates or citrates, and by changing the salt type and concentration in the mobile phase, protein retention can be manipulated ('salt-promoted retention') [86].
HIC columns are mostly based on silica or polymer particles; however, varying the support and ligand type has led to a wide range of stationary phases being developed [87]. Most commonly used are moderately hydrophobic ligands such as n-alkanes (butyl, octyl, phenyl) [82,88]. However, newly developed materials have emerged which use cholesterol [89], dendronic ligands [90] and dual-functional stationary phases [91]. A dual-function HIC/strong cation exchange (SCX) silica-based stationary phase containing benzyl and sulfonic functional groups was used to separate seven proteins. The separation using this novel stationary phase was comparable to that of the HIC column TSKgel Ether 5PW and the SCX PolyC columns when operating in HIC and SCX mode, respectively. Mass recoveries on the SCX/HIC column for cytochrome C, RNase A, RNase B and lysozyme were more than 97 % in both modes, while the bioactivity of lysozyme was 96 and 98 % in HIC and SCX mode, respectively [91]. Ligand chain length, density and the type of support or matrix are important aspects for consideration regarding selectivity and the strength of interaction with the protein [79,92]. However, protein retention also depends on the mobile phase composition (salt type and concentration, presence of additives), temperature and pH [93,94]. Cusumano et al. compared different stationary phases for a monoclonal antibody (mAb) mixture [95]; the different stationary phases showed different selectivities towards the mAb mixture. When using sodium acetate buffer for mAb analysis, the HIC columns TSKgel Butyl-NPR and MAbPac HIC-Butyl showed the highest peak capacities, while the TSKgel Ether column showed the lowest efficiency. Using different buffer systems changed the retention and selectivity of the mAbs, showing that on the HIC-10 column more than double the concentration of sodium acetate is needed to provide the same retention as with ammonium sulfate. The drawback of HIC is that the salts used in the mobile phase are usually not compatible with MS, and a desalting step is required. The use of ammonium acetate as an MS-compatible salt resulted in weak retention of proteins on a polypropyl A stationary phase [81]. Another possibility to increase protein retention is to increase the stationary phase hydrophobicity, as demonstrated by Chen et al., who prepared new HIC materials (polypentyl A, polyhexyl A, polyheptyl A, polyoctyl A, polynonyl A and polyhydroxydecyl A) for protein elution using MS-compatible concentrations of ammonium acetate (1 M or less) [82].
Size Exclusion Chromatography
Size exclusion chromatography (SEC) is a technique used for the separation of proteins based on their molecular size (hydrodynamic volume) rather than on their chemical properties. It is widely used in the analysis of protein biotherapeutics and the monitoring of protein aggregation [96][97][98], and its main advantages are mild elution conditions that have minimal impact on protein conformation and environment [99]. The separation of biomolecules in SEC is based on their differential exclusion from the pores of the stationary phase, which have a controlled size, owing to differences in molecular size. Large molecules elute quickly from the column as they are either totally or partially excluded from entering the pores, while small molecules penetrate deeper into the pores and therefore elute later [100]. Using a set of known proteins and plotting the logarithm of the molecular weight vs. the retention volume allows the construction of a calibration curve that can be used to estimate the molecular weight of unknown proteins (see the sketch below). In SEC the analysis time is determined by the flow rate of the mobile phase for a given column. Increasing the flow rate of the mobile phase or reducing the column length is a straightforward way to shorten analysis time; however, backpressure needs to be kept at a reasonable level when using high flow rates, since it can affect the stability of the packing material and the resolution [98].
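A minimal sketch of the SEC calibration described above: fit log10(MW) against retention volume for a set of standards and interpolate the molecular weight of an unknown from its retention volume. The standard masses and retention volumes used here are invented for illustration only, not data from the cited studies.

```python
import numpy as np

# Hypothetical calibration standards: (molecular weight in kDa, retention volume in mL).
standards = [(669.0, 8.5), (158.0, 10.2), (66.5, 11.4), (29.0, 12.6), (12.4, 13.8)]

mw = np.array([m for m, _ in standards])
vr = np.array([v for _, v in standards])

# Linear fit of log10(MW) vs retention volume within the selective permeation range.
slope, intercept = np.polyfit(vr, np.log10(mw), 1)

def estimate_mw(retention_volume_ml: float) -> float:
    """Estimate the molecular weight (kDa) of an unknown from its retention volume."""
    return 10 ** (slope * retention_volume_ml + intercept)

print(f"Estimated MW at 11.0 mL: {estimate_mw(11.0):.1f} kDa")
```

The linear fit is only valid between the total exclusion and total permeation limits of the column, which is why the standards must bracket the unknown.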
There are two main types of SEC columns: silica, with or without surface modifications, and cross-linked polymeric packings [101]. A comparison of three SEC columns with different particle sizes of 1.7 µm (ACQUITY UPLC BEH200 SEC), 3 µm (Zenix SEC-250) and 5 µm (TSKgel 3000 SWxl) for the analysis of antibody aggregates showed that the analysis time can be shortened using smaller particle sizes [102]. The column packed with sub-2 µm particles showed more than a twofold improvement in throughput compared to the TSKgel column, and throughput was further increased using a parallel-interlaced mode, allowing sample analysis in less than 2 min. Another report on the same type of column, packed with ethylene-bridged hybrid inorganic-organic (BEH) silica with 1.7 µm particle size, showed that the high pressure and elevated temperature generated by small particles might cause on-column generation of additional protein aggregates [98].
The fractionation of different proteins based on size reduces sample complexity, which is particularly beneficial for top-down MS methods. However, some drawbacks of SEC, including low resolution, highlight the need for alternative methods. Ultra-high pressure fast size exclusion chromatography (UHP-SEC) has shown potential for rapid and high-resolution separation of intact proteins [103]. UHP-SEC demonstrated fast separation of 6 standard proteins (BSA, ovalbumin, cytochrome C, aprotinin, thyroglobulin and IgG) in the mass range 6-669 kDa. Proteins were separated at a flow rate of 0.2 mL min−1 in less than 7 min with comparable resolution and retention times using 50 mM sodium dihydrogenphosphate and an MS-compatible solvent (50 mM ammonium acetate) [103]. Fractions collected with ammonium acetate were directly analyzed by ion cyclotron resonance Fourier transform MS without prior desalting, and high-resolution spectra confirmed the protein molecular weights. Subsequent MS analysis of SEC fractions collected offline is a useful tool for protein characterization [104]; however, coupling SEC online with MS minimizes the possibility of composition changes within the collected fractions prior to MS. One example is the study of Munnerrudin et al. [105], in which SEC was coupled online with native electrospray MS to characterize serum albumin, human transferrin and the recombinant glycoprotein human arylsulfatase A. Proteins were dissolved in ammonium acetate and SEC separation was carried out using a Biosuite ultra-high resolution column. Online SEC-native ESI/MS enabled the distinction between incompletely resolved proteins based on their mass differences, and the study of protein ion charge state distributions gave information on their conformational integrity.
The application of particular chromatographic techniques depends on protein-specific properties and the research aims. The implementation of different selectivities for protein analysis offers advantages and limitations, and compromises between sensitivity, resolution, protein retention, mobile phase composition and the possibility of coupling the separation online with MS have to be made.
While significant progress has been made in developing new separation methodologies including modifications of stationary phases, implementation of MS compatible buffers and protein engineering, the challenge of intact protein analysis is yet to be conquered.
Different Stationary Phase Morphologies for Intact Protein Separations
While different selectivities of stationary phases are often the first thing many analysts consider when developing analytical methods, the morphology, that is the structure, of the stationary phase is not always considered. Columns can be packed with fully porous particles, the traditional option, core-shell particles, or non-porous particles. Additionally, the stationary phase may consist of a single porous rod structure (monolithic columns) or even be coated on the walls of a capillary (open tubular). We will discuss these morphologies in relation to the analysis of intact proteins.
Particle-Packed Columns
Stationary phase selectivity is an important factor governing resolution in chromatography. That said, the morphology of the stationary phase material is equally important. Unlike small molecules, where eddy diffusion is arguably the predominant factor limiting the separation power, for proteins the rate of mass transfer dictates the separation power [106,107]. A key factor governing the rate of mass transfer of proteins is the size of the pores of the stationary phase [106,[108][109][110][111]. When the pores of the stationary phase are too small relative to the hydrodynamic radius of the protein, the protein cannot access the complete pore volume and therefore experiences a size-exclusion effect. The fraction of the total pore volume that is accessible to the protein scales as (1 − RH/r)^3, where RH is the hydrodynamic radius of the protein and r designates the radius of the pores [112]. Size exclusion of proteins in RPLC reduces the mass of protein that can be loaded onto the column but also the separation power, in terms of plate height (h). This has been examined experimentally and theoretically in a number of studies for what can be regarded as common model proteins: BSA, β-lactoglobulin, carbonic anhydrase isozyme, cytochrome C, IgG, insulin, lysozyme, myoglobin and ovalbumin. The results are in agreement: pores should be larger than the hydrodynamic radius of the proteins being separated [106,[108][109][110][111], at least 3 times as large [111]. Increasing the pore size from 90 to 160 Å increased the rate of mass transfer by up to 3.5 times for insulin, a relatively small protein of 5.6 kDa [106]. The benefit of a large pore size was also seen for a large protein, BSA (66.8 kDa), as shown for the Aeris WIDEPORE column with 300 Å pore size. This stationary phase gave much better peak shape compared to stationary phases with 160 and 175 Å pores, where peaks showed notable tailing [110]. The degree to which pore size limits the rate of mass transfer, and in turn h, increases with increasing protein size, as larger proteins are less able to access the internal pore volume, which reduces external film mass transfer. While it is perhaps more intuitive to expect that trans-particle mass transfer plays the primary role, the external film mass transfer was estimated to govern over 90 % of the overall mass transfer term of the van Deemter equation [106]. This means that larger pores increase the access of the protein and the mobile phase to the external surface area, thereby increasing the rate of transfer of the protein through the film of stagnant mobile phase that coats the particles. This can lead one to believe that there is no benefit in using core-shell particles (otherwise known as superficially porous, partially porous or pellicular particles) relative to fully porous particles. However, numerous experiments have demonstrated the benefits of inclusion of a non-porous core [107,111,113]. In short, core-shell particles enable more efficient protein separations than fully porous particles. While the same trend noted above for pore size applies to both fully porous and core-shell particles [107,111], core-shell particles have the advantage of shortening the length through which proteins must diffuse during their migration through the column. This reduced distance arises directly from the inclusion of the non-porous core that is inaccessible to molecules.
While this is not the primary advantage of core-shell particles in small molecule applications, where the reduction of the eddy diffusion term plays the key role, it plays the dominant role in protein separations, where the slow diffusion of proteins compared to small molecules reduces their ability to undergo fast mass transfer [107,111]. Studies that focused on developing core-shell particles specifically for protein separations found that the thinner the porous layer of the core-shell particle, the more efficient the separation, provided that the pore size and overall porosity are sufficient [111,113]. Of course, the thickness of the shell must be balanced against the need for a certain degree of mass loading capacity.
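A short sketch evaluating the cubic pore-accessibility expression quoted above, (1 − RH/r)^3, for a few illustrative protein radii and pore sizes; it shows how sharply accessibility drops when the pore is not several times larger than the protein. The hydrodynamic radii and pore sizes are approximate values assumed for illustration, not measurements from the cited studies.

```python
def accessible_pore_fraction(r_h_nm: float, pore_radius_nm: float) -> float:
    """Fraction of the pore volume accessible to a protein of hydrodynamic radius
    r_h in a pore of radius r, using the cubic expression from the text:
    (1 - r_h / r) ** 3 (taken as zero if the protein is larger than the pore)."""
    if r_h_nm >= pore_radius_nm:
        return 0.0
    return (1.0 - r_h_nm / pore_radius_nm) ** 3

# Assumed hydrodynamic radii (nm) and pore diameters (Angstrom).
proteins = {"insulin (~1.3 nm)": 1.3, "BSA (~3.5 nm)": 3.5}
pore_diameters_angstrom = [90, 160, 300]

for name, r_h in proteins.items():
    for d in pore_diameters_angstrom:
        r = d / 10.0 / 2.0  # convert pore diameter in Angstrom to radius in nm
        frac = accessible_pore_fraction(r_h, r)
        print(f"{name:18s} pore {d:3d} A -> accessible fraction {frac:.2f}")
```

Under these assumed values a BSA-sized protein can reach only about 1 % of the pore volume of a 90 Å material but roughly 45 % of a 300 Å material, consistent with the wide-pore recommendation discussed above.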
When mass loading is not a concern because dilute solutions can be used with sufficient sensitivity, as is often the case in LC-MS, non-porous particles can be used. Although not widely used in practice, silica non-porous particles derivatized with C18 ligands have been applied for RPLC of intact proteins in mixtures of protein standards, antibodies, liver mitochondrial proteins, mouse liver extract, bovine endothelial cell membranes and human hepatocyte extract [114][115][116][117][118][119]. Because these particles do not contain pores, the resistance to mass transfer is eliminated, resulting in reduced band broadening and consequently greater separation power, particularly for intact proteins, which suffer significant broadening due to slow mass transfer as discussed above. This was demonstrated for the separation of liver mitochondrial proteins, where a 2 μm diameter non-porous silica-based C18 column was compared to a 3 μm, 300 Å wide-pore column. Both columns had the same dimensions, and the same RPLC gradient elution programme and column temperature were used for each column. The non-porous column outperformed the wide-pore column, resolving 420 protein peaks compared to 160 peaks for the wide-pore column [115]. Studies using non-porous columns have predominantly used column lengths longer than 100 mm. Such columns produce significant backpressure due to the low permeability of the stationary phase. This necessitates the use of low flow rates, resulting in long analysis times (as long as 999.8 min in one case [118]) despite the reduced retention due to their low surface area relative to porous particle columns. To get around this limitation a short (2 mm) yet wide (10 mm) "chromatographic cake" was employed [119]. The unusual dimensions of the cake allowed the use of 630 nm diameter non-porous particles without being encumbered by excessive backpressure. By using such small particles, the surface area of the cake was increased relative to a cake of the same dimensions packed with 1.2 μm particles. The benefit of using the 630 nm particles compared to the 1.2 μm particles was evident for the separation of a mixture of intact protein standards (Fig. 2), where 8 proteins were almost baseline resolved within 2 min. It should be noted that the flow rate used to produce such a fast separation was 5 mL min−1; such flow rates are not currently compatible with LC-MS. While the performance of the chromatographic cake in terms of N may be limited due to its very short length, such columns may prove useful for producing relatively efficient separations when used as the second dimension of a two-dimensional comprehensive LC system, where proteins enter the second dimension significantly diluted due to the high flow rates imposed on this dimension to produce extremely fast separations.
Monolithic Columns
The presence of inter-particle void volume between the packed particles and the time required for diffusional mass transfer of solutes into and out of the mobile phase present in the pores of porous stationary phases are the major factors limiting the separation efficiency of porous packing materials, especially for proteins and peptides, which have low diffusivities [120]. Monolithic stationary phases were introduced in the search for new stationary phases with enhanced mass-transfer properties, in which the separation medium consists of a continuous rod of a rigid, porous polymer with an internal porosity consisting of micro- and macropores and no interstitial volume [121][122][123][124]. A key feature of these stationary phases is the presence of large through-pores, which enables mass transfer to be driven mainly by convection rather than by diffusion in the pores of traditional particulate packings [125]. This accelerated mass transport is very valuable in the gradient separation of large molecules, for which diffusion is slow. Hence, among the advantages of monoliths over packed materials in terms of chromatographic performance is the ability to achieve higher porosity, which enables higher linear flow velocities and hence faster separations without a notable decrease in separation efficiency [119]. In comparison to particle-packed columns, monolithic columns display higher efficiencies at high flow rates [126]. Other advantages include the simplicity of their in situ preparation to create miniaturized capillary column formats.
Polymer-based monolithic chromatographic supports are usually prepared through the in situ polymerization of a mixture of suitable monomers and porogens within a tube that acts as a mold. Various precursors have been reported for the preparation of polymer-based monolithic stationary phases [126][127][128][129]. Monomers such as acrylamide [122], styrene and divinylbenzene [130,131], acrylates [132], methacrylates [133][134][135][136] and norbornene [137] have been reported. The porous monolith can be covalently attached to the capillary wall increasing, by these means, the robustness of the column [138]. Bonding the monolith to the wall by surface-modification procedures was proven to be a crucial step, especially for large i.d. columns [139,140]. The porous properties of the monolithic materials can be influenced by the composition of the polymerisation mixture by altering the ratio of porogens [141,142] and the reaction conditions (polymerisation temperature and time) [143,144]. Size and morphology of the pores strongly depend on several factors, including polymerisation kinetics and solvency of the porogens for the resulting polymer [145]. Tuning the morphology of the polymer-monolith is an important aspect to maximise the peak capacity.
The good separation performance of monolithic capillary columns with gradient elution has been demonstrated for complex mixtures of proteins [135, 146-148]. Detobel et al. [146] studied the effect of column parameters (morphology and length) and gradient conditions on the performance of capillary poly(styrene-co-divinylbenzene) monoliths. In agreement with theory, the peak capacity increased according to the square root of the column length. It was also shown that decreasing the macropore size of the polymer monolith while keeping the column length constant resulted in an increase in peak capacity. By using long (250 mm) monolithic columns with optimized morphology, a peak capacity of 620 could be achieved for the separation of intact E. coli proteins using a 120 min gradient and UV detection. The maximum peak capacities obtained with shorter columns, 50 and 100 mm, were 330 and 440, respectively. The combined effects of flow rate and gradient time on the peak capacity are shown in Fig. 3. As shown, longer gradient times increased the peak capacities. At constant gradient times the peak capacities increased with increasing flow rate. Eeltink et al. [149] reported the use of a 50 mm long poly(styrene-co-divinylbenzene) monolithic column (1 mm i.d.), operated at 90 μL min−1 and 80 °C, for the separation of an E. coli intact protein mixture by HPLC-UV. For a gradient time of 2 h a maximum peak capacity of 475 was obtained. In a more recent study, Eeltink et al. [148] investigated the potential of long poly(styrene-co-divinylbenzene) monolithic capillary columns (250 mm × 0.2 mm) for the gradient elution of the ABRF 48 intact protein standard mixture, including protein isoforms, by LC-TOF-MS. The separation of the 48 protein mixture using a gradient time of 120 min, at a flow rate of 1.5 μL min−1 and a column oven temperature of 60 °C, gave peak capacities >600. This allowed protein isoforms that differ only in their oxidation and biotinylation state to be separated. However, protein identification based on comparison of the experimentally determined molecular weights with theoretical masses was tentative, making the approach unsuitable for the analysis of actual biological samples. In total, 30 different protein masses were obtained from the 120 min gradient run. Based on molecular weight alone, only 24 charge envelopes could be tentatively assigned to proteins that are known to be in the 48 protein mixture. In another study, a 50 mm long capillary poly(styrene-co-divinylbenzene) macroporous monolith (1 mm i.d.) used with a microLC-Orbitrap-MS setup showed limits of detection in the low femtomole range for a standard mixture of 9 proteins with molecular weights ranging between 5.7 and 150 kDa [150].
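As a quick consistency check of the square-root dependence of peak capacity on column length noted above, the short sketch below scales the 250 mm result to the shorter columns and compares it with the reported values; it illustrates the stated trend only and is not a re-analysis of the original data.

```python
# Minimal sketch: peak capacity scaling with the square root of column length,
# compared against the values reported for the monolithic columns in [146].
import math

n_250 = 620                               # peak capacity of the 250 mm column
reported = {50: 330, 100: 440, 250: 620}  # column length (mm) -> reported n_c

for length_mm, n_reported in sorted(reported.items()):
    n_scaled = n_250 * math.sqrt(length_mm / 250)
    print(f"{length_mm:>3} mm: sqrt(L) scaling predicts ~{n_scaled:.0f}, reported {n_reported}")
```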
Recently, the use of a strong cation exchange (SCX) sulfoalkylated monolithic cryogel for the separation of 3 model proteins was reported [135]. The continuous network comprising interconnected macropores (10-100 μm) [146] gave little or no mass-transfer resistance, allowing the use of high flow rates without losing separation power. In addition, the ease of preparation makes monolithic cryogels suitable media for the separation of high-molecular-weight species [151]. The selected proteins were successfully separated using a linear gradient, and the obtained chromatogram was compared to that obtained with a non-functionalized cryogel column, on which the analytes co-eluted. No evidence of irreversible protein adsorption was observed, and the performance of the newly synthesised material in chromatographic separations was found to be reproducible even after passing litres of eluents and 10-20 column volumes of 1 M sodium hydroxide. These results indicate that the novel cryogel-based monolithic columns should be further investigated for the separation and purification of proteins.
A different approach for the separation of intact proteins was proposed by Liu et al. [147], using hybrid monolithic capillary columns based on polyhedral oligomeric silsesquioxane and nano-LC with UV detection. The cage-like polyhedral oligomeric silsesquioxane (POSS) was used as a cross-linker for the "one-pot" preparation of hybrid monolithic columns embodying an inorganic-organic hybrid architecture with an inner inorganic framework. The authors compared the performance of the POSS-based hybrid monolithic columns for the separation of a mixture of 7 standard proteins and of E. coli proteins using gradient elution at 500 and 750 nL min−1, respectively. The results were compared with those obtained by analysing the same intact protein standard mixture on a commercially available PS-DVB monolithic capillary column. The chromatograms obtained for the standard protein mixture on the three different monolithic stationary phases are shown in Fig. 4. As shown, the selected proteins showed stronger retention on the stearyl methacrylate-POSS (SMA-POSS) capillary columns than on the benzylmethacrylate-POSS (BeMA-POSS) ones, due to the former's higher hydrophobicity (Fig. 4a, b). A slightly different separation selectivity was also observed. Based on these results, the authors concluded that, besides the hydrophobicity of the stationary phase, the π-π stacking interactions introduced by the BeMA functional monomer may exert a positive effect on the separation of some types of intact proteins. For this reason, a POSS-based hybrid monolithic column containing equal amounts of the functional monomers SMA and BeMA was synthesized and tested with the same standard protein mixture (Fig. 4c). By using the SMA-BeMA hybrid monolithic capillary column, all the intact proteins were baseline separated. Peak capacities between 62 and 79 were reported for the standard protein mixture analyzed. The results showed that a combination of the two functional monomers (stearyl- and benzyl-methacrylate; BeMA-SMA-POSS) provided better LC separation selectivity than either monomer alone. Compared to the commercial PS-DVB capillary column, a lower peak capacity was obtained with the SMA-BeMA hybrid monolithic column (79 and 68, respectively), but with comparable run-to-run reproducibility (approx. 1 %) [147].
Open-Tubular Columns
Open tubular (or open channel) liquid chromatography (OTLC), initially developed by Halasz and Horvath [152], offers significant gains in efficiency and analysis time over packed columns, as has been demonstrated both theoretically and empirically [153]. The performance of open tubular columns is directly connected to their internal diameter, with efficiency increasing as the inner diameter decreases [154]. In order to achieve efficiencies comparable to those of good packed columns, porous layer open tubular (PLOT) columns must have an inner diameter of the order of 15 μm or less. Jorgenson and Guthrie [155] published the first report on open tubular columns with an inner diameter close to what is theoretically required for efficiency similar to packed columns (~15 μm). Recently, 10 μm i.d. PLOT polystyrene-divinylbenzene (PS-DVB) columns have been designed and used for high-resolution, ultratrace LC-MS separations of peptides [156,157]. Causon et al. [158] performed a kinetic optimisation of OTLC capillaries coated with thick porous layers, taking into account the effects on retention, column resistance, band broadening and mass loadability. Their calculations showed the need to develop coating procedures that produce porous films filling approximately 50-70 % of the total column diameter, offering very good reduced plate heights (h_min < 2 for k′ = 3). In the same study it was shown that by using elevated temperatures (90 °C) the allowable column diameter can be increased up to 9 μm (for lengths >0.8 m), achieving a large range of N values (100,000-880,000) and hence presenting an advantage over packed LC columns.
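To make the diameter dependence explicit, here is a minimal sketch of the classical Golay plate-height equation for an open tube, evaluated at its optimum velocity and neglecting the stationary-film contribution that a real porous layer would add. The protein diffusion coefficient is an assumed illustrative value, not one taken from the cited study.

```python
# Minimal sketch: Golay equation for an open tubular column (no film term),
# evaluated at the optimum linear velocity. Input values are illustrative.
import math

def golay_optimum(d_c, D_m, k, length):
    """Return (u_opt in m/s, reduced plate height h_min, plate number N)."""
    c_coeff = (1 + 6 * k + 11 * k**2) / (96.0 * (1 + k) ** 2)
    u_opt = (D_m / d_c) * math.sqrt(2.0 / c_coeff)   # velocity minimising H
    h_min = 2.0 * math.sqrt(2.0 * c_coeff)           # minimum reduced plate height
    return u_opt, h_min, length / (h_min * d_c)

D_m = 1.0e-10   # diffusion coefficient of a protein, m^2/s (assumed)
k = 3.0         # retention factor, as in the cited optimisation
length = 1.0    # column length, m

for d_c in (5e-6, 10e-6, 15e-6):          # inner diameters of 5, 10 and 15 um
    u_opt, h_min, N = golay_optimum(d_c, D_m, k, length)
    print(f"d_c = {d_c*1e6:4.0f} um: u_opt = {u_opt*1e3:.2f} mm/s, h_min = {h_min:.2f}, N = {N:,.0f}")
```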
The use of PLOT columns for LC gained renewed interest after their coupling to nanospray MS proved to be simple and efficient [159]. Their extremely small volumes require small injector and detector volumes. Such columns can be operated at low nanolitre flow rates and easily interfaced with ESI-MS, producing smaller droplets and thus minimising ion suppression and yielding improvements in sensitivity [156]. PLOT columns have been used for the analysis of intact proteins in a few studies [160,161]. Kazarian et al. [161] reported the use of photonic crystal fibre-based PLOT capillary columns wall-modified with a polystyrene-divinylbenzene (PS-DVB) porous layer for the analysis of cytochrome c under isocratic conditions, obtaining run-to-run retention time reproducibility below 1 %. The columns consisted of 126 internal parallel 4 μm channels, each containing a wall-bonded porous monolithic-type PS-DVB layer in PLOT column format. Rogeberg et al. [160] investigated the potential of PS-DVB PLOT columns (10 μm i.d. × 3 m) for the separation of three intact proteins (cytochrome c, myoglobin and carbonic anhydrase) by LC-nanospray MS under gradient elution (Fig. 5). The columns provided narrow peaks (approx. 0.2 min), very low carry-over (<1 %) and good repeatability (relative standard deviation (RSD) below 0.6 % and below 2.5 %, respectively). The effects of column length (1-3 m), gradient time (20-120 min) and column temperature (20-50 °C) were investigated. With shorter columns (1 and 2 m), peak widths were larger and increased more steeply with gradient time. Theoretical peak capacity (n_c) increased with column length. The n_c increased with t_G until a plateau was reached. The highest peak capacity achieved (n_c = 185) was obtained with a 3 m column, for which the plateau was reached at a gradient time (t_G) of 90 min. A decrease in retention and an increase in selectivity were observed with increasing temperature. Studies of retention in relation to temperature indicated that the stationary phase undergoes changes at high temperatures. Peak heights decreased as a function of temperature, which was attributed to poorer charged-droplet formation at elevated temperatures. The developed method was successfully applied to the analysis of intact proteins in skimmed milk (0.1 % fat). Figure 6 presents the selected ion monitoring (SIM) chromatogram of lower-abundance proteins in milk and the chromatogram of the highly abundant proteins obtained from a 30-fold diluted milk sample.
Emerging Chromatographic Technologies
To successfully address complex separation problems in proteomics, the development of novel separation technologies capable of achieving ultra-high peak capacities within a reasonable time, allowing the analysis of a multitude of samples, is crucial. There are a number of new chromatographic technologies that have shown promise for small molecules, peptides and, in some cases, intact proteins, but have yet to be widely applied within the field of proteomics. In the interest of directing the reader towards the latest technologies for intact protein separations, we will briefly review three techniques that have demonstrated promise but have not yet been widely accepted by the proteomics community.
Slip Flow Chromatography
Slip flow is a new variant of liquid chromatography. Essentially, the difference between slip flow chromatography and conventional liquid chromatography lies in the way solute bands migrate through the column. In conventional chromatography it is well established that solute bands migrate with a Hagen-Poiseuille flow profile. That is, solute bands can be thought of as migrating through the column with a profile somewhat resembling an empty bowl. This arises from many well-documented factors [162,163], one of which is that, due to friction, the velocity of the mobile phase at the wall approaches zero. In theory, the flow velocity at the wall is generally taken to be exactly zero because of strong interactions between the wall and the solvent molecules, even though this is not exactly the case in practice. Conversely, in slip flow chromatography the velocity of the mobile phase at the wall is not zero, because the flow does not follow Hagen-Poiseuille behaviour but slips past the column wall owing to weak interactions between the mobile phase molecules and the wall itself [164-169]. Figure 7 illustrates the difference between slip flow and Hagen-Poiseuille flow.
Fig. 7 (a) Illustration of a Hagen-Poiseuille flow profile compared to (b) an illustration of a slip flow profile. L corresponds to the slip length, an additive factor which imparts a non-zero velocity at the wall. Reproduced with permission from [167].
The benefit of slip flow is that the solute band is less distorted than under Poiseuille flow conditions due to a more homogeneous radial flow velocity profile. Consequently, the peaks eluting from the column in slip flow are more Gaussian compared to the Poiseuille flow case. Furthermore, the flow enhancement arising from slip flow allows sub-micrometre particles to be operated in columns without generating unreasonable pressure [164-169]. The use of smaller particles further increases chromatographic efficiency, as we know from the van Deemter equation. It is important to note that certain experimental conditions are required to generate slip flow in packed columns. Firstly, the fluid passing through the column must show weak interactions with the column wall. This has been demonstrated using a column packed with silica particles with C4 ligands and particle diameters ranging from 125 to 1300 nm [165,167]. The flow rate of toluene passing through this column was compared to the flow rate when water was passing through the column using the Kozeny-Carman equation, which describes the flow velocity of fluids through a packed bed (Eq. 1):
ν = (P · d_p² · ε³) / (180 · η · L · (1 − ε)²)   (1)
where P denotes the pressure, L is the column length, ν is the flow velocity, η is the viscosity of the fluid passing through the packed bed, d_p is the particle diameter and ε is the porosity. Accounting for the viscosity differences between toluene and water, the latter produced a flow rate that was higher than the velocity expected from Eq. 1 because water molecules had weaker interactions with the hydrocarbon column wall compared to toluene [165,167]. Another parameter important for slip flow is the particle diameter, which is related to the hydraulic radius, r_hyd, by Eq. 2.
It was found that as the particle diameter decreased from 1300 to 125 nm, the flow enhancement due to slip flow increased exponentially, with a flow enhancement as high as 20 for 125 nm particles [165,167]. However, simulations of slip flow in packed beds with colloidal silica particles predicted a much smaller flow enhancement than the experimental results described above [166,167]. While the theory underlying slip flow in open capillaries is understood, the theory behind slip flow in packed beds has only recently been investigated. As such, the discrepancy between the experimental and simulated results is not yet definitively understood. That said, simulations of flow moving through the packed bed indicated that regions of stagnant flow exist where the fluid stream makes contact with the particles, indicating that tortuosity may affect the flow velocity profile in slip flow chromatography, thereby reducing the flow enhancement [166,167].
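The comparison between observed and no-slip velocities can be reduced to a few lines of arithmetic. The sketch below evaluates the Kozeny-Carman prediction of Eq. 1 and expresses an observed velocity as a flow-enhancement factor; all numerical inputs are illustrative assumptions rather than values taken from the cited studies.

```python
# Minimal sketch: Kozeny-Carman velocity (Eq. 1) and slip-flow enhancement factor.
# Every numerical input here is an illustrative assumption, not data from [165,167].

def kozeny_carman_velocity(pressure, length, d_p, porosity, viscosity):
    """No-slip superficial velocity (m/s) predicted by Eq. 1 for a packed bed."""
    return (pressure * d_p**2 * porosity**3) / (
        180.0 * viscosity * length * (1.0 - porosity) ** 2)

def flow_enhancement(observed, predicted):
    """Ratio of measured velocity to the no-slip Kozeny-Carman prediction."""
    return observed / predicted

pressure = 4.0e7      # applied pressure, Pa (~400 bar), assumed
length = 0.021        # bed length, m (2.1 cm), assumed
d_p = 125e-9          # particle diameter, m (125 nm)
porosity = 0.36       # interstitial porosity of a random sphere packing, assumed
eta_water = 0.89e-3   # viscosity of water at 25 C, Pa s

v_no_slip = kozeny_carman_velocity(pressure, length, d_p, porosity, eta_water)
v_observed = 20.0 * v_no_slip   # a 20x enhancement, as reported for 125 nm particles
print(f"no-slip prediction: {v_no_slip:.3e} m/s")
print(f"flow enhancement:   {flow_enhancement(v_observed, v_no_slip):.1f}x")
```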
Even though the theory underpinning slip flow chromatography is still being formulated, it has proven effective for the separation of intact proteins. A preliminary investigation using a 2.1 cm column packed with C4-functionalized colloidal silica crystals and fluorescence detection showed extremely high efficiency for bovine serum albumin: higher than 1,000,000 theoretical plates [164]. The ability of slip flow chromatography to separate intact proteins was investigated in a follow-up study combining slip flow chromatography with LC-MS [168]. A 4 cm long column with a pulled tip, packed with 470 nm silica particles with C18 ligands, was used in conjunction with ESI-MS for the analysis of model proteins, namely ribonuclease A, trypsin inhibitor and carbonic anhydrase [168]. Despite the small particle diameter of the stationary phase, the separation was performed at a flow rate of 200 nL min−1, which is sufficiently high for LC-MS. The backpressure was 600 bar, which is surprisingly reasonable given the particle size; this is a result of the flow enhancement effect under slip flow conditions. The LC-MS separation showed high efficiency, with a peak capacity of 195 for a 10 min gradient [168]. This high resolving power facilitated the identification of four proteoforms for the ribonuclease A and carbonic anhydrase standards. Two proteoforms were identified for superoxide dismutase and trypsin inhibitor. The presence of proteoforms in commercial protein standards is known to increase the peak width in intact protein separations, as the proteoforms are very hard to separate and typically co-elute. The ability of slip flow chromatography to facilitate the identification of different proteoforms by MS illustrates its potential for proteomics, despite the fact that to date it has only been demonstrated for model protein separations. That said, the colloidal silica used in slip flow columns is non-porous and therefore shares the same limitation encountered when using columns packed with non-porous particles: the relatively low surface area reduces the retention of solutes and the mass that can be loaded onto the column. Attention should also be paid to the interactions between the capillary wall and the mobile phase. As discussed above, interactions between the wall and the mobile phase must be weak in order to generate slip flow. While this may not be an issue for RPLC, as the solvent is polar, it may become an issue for more nonpolar solvents, for example in HILIC or normal phase LC. This may be alleviated by coating the wall so that it becomes less hydrophobic, as has been reported in capillary electrophoresis [170-175].
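For a sense of what a peak capacity of 195 over a 10 min gradient implies about peak widths, a one-function sketch using the common approximation n_c ≈ 1 + t_G/w is given below; it is a back-of-the-envelope estimate, not a value reported in the cited work.

```python
# Minimal sketch: average peak width implied by a gradient peak capacity,
# using the usual approximation n_c ~ 1 + t_gradient / w_peak.

def implied_peak_width_s(gradient_min, peak_capacity):
    """Average peak width (s) implied by a peak capacity over a gradient."""
    return gradient_min * 60.0 / (peak_capacity - 1)

print(f"{implied_peak_width_s(10, 195):.1f} s per peak")  # ~3 s peaks
```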
Microfluidics
Recently many common analytical assays, including DNA separation, cell manipulation and protein analysis, have been reduced in size and manufactured in cm-scale devices as an alternative to the column-based approach [176]. These microfluidic devices are analytical tools in which fluids are driven (hydrodynamically or electrokinetically) through microstructured channels. They incorporate the different functions required to analyse a particular sample (e.g., injection loop, stationary phase, valve, detector, etc.) into a single platform, providing a high level of process automation. This enables the construction of miniaturized analytical tools often called "lab on a chip" devices.
The main advantages of microfluidic devices over conventional column LC for separations are the low dead-volume connections that minimize band-broadening effects, the ability to analyse ultra-low sample volumes, and reduced solvent consumption and operating costs. Furthermore, short analysis times can be achieved due to the reduced length scales without sacrificing efficiency, producing high peak capacities. In addition, they can integrate multiple sample preparation steps into one device. The types of stationary phase morphologies that can be used for liquid chromatography microfluidic platforms are (1) open-tubular channels, (2) microfabricated pillar-array columns, (3) packed particles and (4) polymer monoliths. Liu et al. [177] reported the application of a microfluidic device with an integrated solid phase extraction segment coupled to a 15 cm polymer monolith column for the separation of labelled intact proteins within 15 min, which is fast considering that the flow rate was only 200 μL min−1. Taniguchi et al. [178] reported a polydimethylsiloxane microfluidic chip-based approach for the quantitation of the E. coli proteome with single-molecule sensitivity. This level of sensitivity is important in facilitating the study of gene expression and regulation of low-abundance proteins. Recently, Desmet et al. [179] reported the development of a micro-fabricated column filled with radially elongated pillars (Fig. 8) showing the same performance as non-packed open tubular columns. As previously mentioned, these offer a much higher separation speed and efficiency than conventional packed-bed columns. Efficiency in terms of N was as high as 70,000 plates for retained coumarin dyes. However, it should be noted that these dyes are small molecules and as such their separation efficiency is primarily limited by eddy dispersion, which is virtually non-existent in radially elongated pillar columns due to the high degree of order within the separation bed. As discussed in "Particle-Packed Columns", unlike small molecules, the ability to separate proteins with high efficiency is predominantly limited by their slow rate of diffusion, which increases the time taken for mass transfer. The radially elongated pillar columns should also compensate for slow mass transfer as they are non-porous; therefore, we expect the efficiency observed for intact protein separations using non-porous columns ("Particle-Packed Columns") to be comparable to the potential power of pillar columns.
Although microfluidic devices showed several promising applications in the proteomics field, there are still various challenges that need to be addressed before they gain wide acceptance for intact protein analysis. First, assays that show higher sensitivity must be developed. The most interesting proteins are present at very low concentrations and cannot be measured by antibody-based approaches and the majority of the studies were carried out using highly concentrated protein mixtures or model biological samples. Second, it is imperative to translate the proof-of-principle experiments into robust and easy to use methods that biologists and analytical chemists in the biomedical field would adopt. Moreover, pre-treatment and post-analysis elements are expected to be incorporated resulting in automatic and user-friendly systems.
Active Flow Technology
In an effort to increase sensitivity, capillaries packed with particles or containing monolithic beds are used to reduce the dilution of the sample. Capillaries typically range in size from 50 μm to as large as 1 mm i.d. Flow rates in the order of hundreds of nL min−1 are employed with these columns, which makes them compatible with MS coupling; low flow rates facilitate the removal of mobile phase during ionization using ESI. However, the need to use low flow rates increases the time required for analysis, which is undesirable, particularly when numerous degradable samples must be analyzed in a given day. Active flow technology (AFT) allows the use of wider i.d. columns at high flow rates (in the order of mL min−1) without compromising MS detection. Separations of amino acids in fruit and vegetable juices have been accomplished within 24 s using AFT with LC-MS [180]. The ability of AFT columns to enable fast separations with MS lies in the unique design of the end fitting used with these columns (Fig. 9).
The AFT end fitting consists of four ports. The central port is connected to the detector, or MS, while the flow from the remaining exits is typically directed to a waste reservoir. This end fitting was designed to minimize the effect of the heterogeneity of the stationary phase bed, which has long been known to cause band broadening because the mobile phase velocity in the radial centre of the bed is higher than that of the mobile phase travelling near the wall of the column. This radial velocity difference imparts a bowl-like flow profile to the solute band rather than a flat disc profile. The AFT fitting uses a novel frit design to convert the broader, bowl-like band to a flat disc profile using what could be equated to a cookie cutter: the inner frit, in essence, cuts the flat, disc-like part of the solute band, which exits via the central port and travels to the MS. The flow from the tailing wall-region of the solute band exits via the remaining ports, commonly referred to as the peripheral ports, and is diverted to waste. This produces a more Gaussian, narrow peak, which translates to higher efficiency in terms of N [181-186]. Gains in N as high as 70 % relative to conventional columns have been reported for separations of small molecules [184]. However, a recent study comparing AFT separations of small molecules with those of a larger molecule, namely insulin, has shown that the improved separation power reported for small molecules does not occur for large molecules [187]. As previously discussed, the efficiency of protein separations is primarily limited by slow mass transfer. AFT does not have any effect on the mass transfer properties of the stationary phase: AFT columns are packed with the same procedure and materials as conventional chromatography columns. The benefit of AFT for small molecules arises from the reduction of long-range eddy dispersion [186,187]. The value of AFT for proteins lies primarily in its ability to operate at fast flow rates, thereby shortening the required analysis time. This is accomplished by controlling the flow rate from each port by applying different amounts of backpressure using tubing connected to the peripheral ports. The ratio of flow from the central port relative to the total flow rate is often referred to as the segmentation ratio, which is a useful means for tuning the performance of the AFT column depending on the specific application. For detailed information on how the segmentation ratio affects the performance of the column, readers are directed to the following references [184,188]. To achieve fast separations within 24 s for amino acids in fruit and vegetable juice [180], the total flow rate was 4.5 mL min−1. Of this total flow rate, 21 % of the flow was sent through the central port to the MS. This equated to a flow rate of 0.9 mL min−1 to the MS. The column (50 mm × 2.1 mm i.d.) was packed with 5 μm, fully porous particles with C18 selectivity. The performance of the AFT column was compared to that of a conventional column (30 mm × 2.1 mm i.d.) with the same selectivity but packed with 1.9 μm fully porous particles. This column more closely represents the current state-of-the-art for fast separations: short columns with small particles operated with UHPLC instrumentation to enable high flow rates despite the high backpressure.
Fig. 9 Illustration of the AFT column end fitting, consisting of an annular frit and a multi-port end cap. Flow is split through the annular frit, which is expanded to illustrate its design; once the flow has been segmented via the frit it is diverted out of the various ports. The radial section of the solute band exits the central port whilst the wall-region portion of the band exits from the peripheral ports. The ratio of flow exiting the ports can be altered by adjusting the backpressure using tubing typically connected to the peripheral ports. Reproduced with permission from [183].
As expected, the column with 1.9 μm particles was more efficient at its optimal flow rate than the AFT column packed with 5 μm particles at its optimal flow rate. However, when the 1.9 μm particle-packed column was operated at a flow rate (~0.9 mL min−1) near the maximum possible flow rate, the analysis time required for the same separation was ~70 % longer than that using the AFT column at a flow rate that gave the same value for N. Recently, separations as fast as 12 s have been reported for monitoring the degradation of amino acids by nitric acid using AFT with LC-MS [189]. While flow splitting using a t-piece connection can be used as a means of operating fast separations while reducing the flow rate sent to the MS, a comparison of this approach with AFT has shown that AFT flow splitting is more efficient [190]. This is because the frit used in the AFT end fitting allows the central portion of the solute band to be separated from the wall region of the bowl-like solute band, preventing dilution. This is not possible with a t-piece connection, as such a connection includes the entire bowl-like solute band, including the mobile phase within the hollow centre of the band, prior to splitting the flow.
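The flow-splitting arithmetic described above is simple enough to capture in a few lines; the sketch below reproduces the 21 % segmentation of a 4.5 mL min−1 total flow and can be reused for other ratios. No instrument-specific behaviour is implied.

```python
# Minimal sketch: flow directed to the MS from an AFT column, given the total
# flow rate and the segmentation ratio (central-port flow / total flow).

def central_port_flow(total_flow_ml_min, segmentation_ratio):
    """Flow rate (mL/min) exiting the central port toward the detector/MS."""
    return total_flow_ml_min * segmentation_ratio

total_flow = 4.5   # mL/min pumped through the AFT column
ratio = 0.21       # 21 % of the flow exits via the central port
to_ms = central_port_flow(total_flow, ratio)
print(f"to MS: {to_ms:.2f} mL/min, to waste: {total_flow - to_ms:.2f} mL/min")
```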
To date, almost all studies using AFT have focused on small molecules rather than large molecules. While the benefits of operating at high flow rates to reduce analysis time should also be evident for proteins, it is possible that the slow mass transfer of proteins, which exerts a more dominant effect on N at higher flow rates, may reduce the benefit of operating at such flow rates. One way to compensate for this may be to combine the benefits of non-porous particles with AFT.
Conclusion
The study of the human proteome is key to a better understanding of the various biological processes that take place within our bodies. Potentially this may lead to the identification of biomarkers for a variety of diseases. However, this requires powerful separation techniques to combat the complexity and variation in abundance of various proteins. LC-MS is arguably the most well-used tool in the field of proteomics, combining the separation powers of LC and MS to enable identification and characterization of proteins. In conjunction with top-down proteomics, where proteins are kept intact, the complexity and dynamic range of the proteome can be reduced further enabling more effective characterisation using MS.
To date, RPLC using C18, C8 or C4 fully porous particle-packed columns remains the workhorse in proteomics. Developments in column technology have not been readily adopted into the field. That said, core-shell particles, non-porous particles, monolithic columns and, in particular, slip flow chromatography have demonstrated potential for the analysis of intact proteins. Emerging technologies such as microfluidics and active flow technology may also prove valuable for reducing analysis time and increasing sensitivity. Relatively new retention mechanisms such as HILIC show promise as they are able to retain polar proteins, and the high organic modifier content of the mobile phase improves electrospray ionization. A closer link between chromatographers and biologists in the future may help introduce these developments into proteomics, where they can be of great use in improving our understanding of disease and our biology.
Decoding histone ubiquitylation
Histone ubiquitylation is a critical part of both active and repressed transcriptional states, and lies at the heart of DNA damage repair signaling. The histone residues targeted for ubiquitylation are often highly conserved through evolution, and extensive functional studies of the enzymes that catalyze the ubiquitylation and de-ubiquitylation of histones have revealed key roles linked to cell growth and division, development, and disease in model systems ranging from yeast to human cells. Nonetheless, the downstream consequences of these modifications have only recently begun to be appreciated on a molecular level. Here we review the structure and function of proteins that act as effectors or “readers” of histone ubiquitylation. We highlight lessons learned about how ubiquitin recognition lends specificity and function to intermolecular interactions in the context of transcription and DNA repair, as well as what this might mean for how we think about histone modifications more broadly.
Introduction
The genomic DNA of eukaryotes is organized into chromatin, a polymer whose fundamental structural unit is the nucleosome. The composition of the nucleosome core is essentially invariant across eukaryotes: an octamer composed of two copies of each of the core histone proteins H2A, H2B, H3, H4, and 146 base pairs of DNA wrapped around it. Linker DNA between neighboring nucleosomes varies in length from~10 to 50 base pairs depending on species and chromatin context and is bound by the linker histone H1 (Fyodorov et al., 2018). The packaging of genomic DNA into arrays of nucleosomes limits its access by cellular machineries that carry out transcription, replication, and repair (Kornberg and Lorch, 2020). Thus, mechanisms that allow access to DNA in the context of chromatin are integral to all of these processes. One mechanism that allows for dynamic organization of chromatin structure during these processes is histone post-translational modifications (PTMs). Histone PTMs are a key cellular mechanism for regulating chromatin structure and function. Generally, they are thought to comprise chromatin-based signaling networks in which modified histones form binding sites for chromosomal proteins that execute downstream functions (Strahl and Allis, 2000;Rothbart and Strahl, 2014). At the heart of these networks is the interface between modified histones and their cognate recognition proteins, often called histone modification "readers." Many protein domains with dedicated reader functions have been identified, and numerous high-resolution structural views of reader-modification interactions are available. Reader domains that recognize distinct modifications have unique structural features. For example, reader domains for histone methylation can be distinguished based on the specific methylated site that they recognize: Plant homeodomain (PHD) fingers for histone H3 lysine 4 (K4) methylation, chromodomains for lysine 9 (K9) or lysine 27 (K27) methylation (sub-families of chromodomains are specific for each site), and proline-tryptophan-tryptophanproline (PWWP) domains for lysine 36 (K36) methylation (Musselman et al., 2012). These domains allow histone modifications to guide biochemical activities on chromatin in multiple biological contexts.
In contrast, relatively few readers for histone ubiquitylation (Mattiroli and Penengo, 2021;Vaughan et al., 2021) have emerged from the detailed studies of this modification, and structural information illuminating ubiquitylated histone recognition has only recently become available, largely through cryo-EM studies (summarized in Table 1). Here we review the current state of knowledge of ubiquitylated histone readers, with a focus on their structure and function. This is a field that spans many aspects of chromatin biology and that illuminates novel functions for ubiquitin. Unlike other classes of histone modification readers, ubiquitylated histone readers do not share a particular domain or structural motif. As such, we discuss interactions between readers and ubiquitylated histones with a view toward identifying common themes and functional consequences.
Overview of protein ubiquitylation
Ubiquitin is a 76 amino acid protein that can be posttranslationally attached to other proteins. Ubiquitin attachment to a target protein (ubiquitylation; also referred to as ubiquitination) occurs in three enzymatic steps and results in an isopeptide bond linking the carboxy terminus of the ubiquitin polypeptide to the terminal amino group of a lysine side chain in the target (Komander and Rape, 2012).
Table 1 Readers of histone ubiquitylation (column headers): Modification | Reader | Experimental methods
Protein ubiquitylation is most well known as the first step in the ubiquitin-proteasome system for protein degradation, the primary mechanism for programmed protein turnover and protein quality control in eukaryotic cells. Protein recognition by the proteasome requires polyubiquitylation, in which chains of ubiquitin are formed through isopeptide linkages at lysines within ubiquitin itself. All seven lysines in the ubiquitin polypeptide can in fact be targeted for chain formation, although the canonical type of chain recognized by the proteasome is linked at lysine 48. Non-proteolytic signaling functions of ubiquitin regulate protein activity and protein-protein interactions in a range of biological contexts. These can involve polyubiquitylation, particularly lysine 63-linked chains, but also frequently depend on monoubiquitylation of specific lysines in target proteins (Komander and Rape, 2012; Salas-Lloret and González-Prieto, 2022). Since histone ubiquitylation comprises mainly monoubiquitylation events, these will be the focus of our discussion.
Studies of monoubiquitylation signaling outside the realm of histones have defined recurrent ubiquitin receptor motifs. One group of motifs consists of bundles of alpha helices, or in some cases a single alpha helix. These are commonly found in proteins involved in endocytosis and include UBA (ubiquitin-associated), CUE (coupling of ubiquitin to ER degradation), UIM (ubiquitininteracting motif), and MIU (motif interacting with ubiquitin) domains (Husnjak and Dikic, 2012). A second group has a zinc finger as the central structural element. This group is exemplified by the UBZ (ubiquitin-binding zinc finger) domain of DNA polymerase eta that recognizes monoubiquitylated PCNA to facilitate post-replication DNA repair; orthologous UBZ domains are found in other repair proteins as well (Husnjak and Dikic, 2012). Most characterized ubiquitin receptor motifs contact a hydrophobic patch on the ubiquitin surface surrounding Ile44. Study of histone modification readers has revealed additional examples of canonical ubiquitin interaction modes, and has also identified several novel types of ubiquitinbinding motifs that have important chromatin regulatory functions.
Histone H2A ubiquitylation
Monoubiquitylation occurs on several residues in the N-and C-terminal tails of histone H2A. Ubiquitylation on a C-terminal lysine corresponding to K119 in mammals was the first proteinubiquitin conjugate to be characterized (Goldknopf and Busch, 1977). Identification of the cognate E2 and E3 enzymes as Polycomb group regulators linked this modification to transcriptional repression (Wang et al., 2004). More recent work has identified additional sites of monoubiquitylation on both the H2A N-terminal and C-terminal tails with important roles in coordinating the DNA damage response (Mattiroli et al., 2012;Kalb et al., 2014b) (Table 1).
H2AK119ub1 readers
The H2AK119-specific E3 ligase Ring1B is a member of the Polycomb family of transcriptional repressors. The Polycomb family proteins, first discovered in the fruit fly D. melanogaster, coordinate formation of facultative heterochromatin in metazoans (Blackledge and Klose, 2021). They are essential for the maintenance of gene expression programs during embryonic development and in adult tissues. As such, Polycomb regulators are among the most frequently mutated proteins in human cancers. The mechanism of Polycomb repression remains a topic of intense study, and is known to involve the formation of a repressive chromatin structure that is epigenetically inherited through cell division. Polycomb proteins reside in one of two types of Polycomb repressive complexes in cells, termed PRC1 and PRC2; Ring1 enzymes are core components of PRC1. PRC2 complexes also have histone-modifying activity conferred by the Ezh2 histone H3 lysine 27 methyltransferase. A key de-ubiquitylase (DUB) for H2AK119ub1 is BAP1, a tumor suppressor that restrains Polycomb silencing genome-wide (Campagne et al., 2019; Fursova et al., 2021). The PRC1 complex mediates compaction of nucleosome arrays in vitro and formation of intranuclear Polycomb clusters in vivo (Francis et al., 2004; Grau et al., 2011; Isono et al., 2013). These functions correlate with liquid-liquid phase separation properties of PRC1 (Plys et al., 2019). Chromatin compaction by PRC1 in vitro is independent of any histone-modifying activities and is insensitive to removal of histone tails, suggesting that Polycomb-associated histone modifications are important for targeting silencing to the appropriate genomic regions (Francis et al., 2004). PRC1 and PRC2 are functionally linked on chromatin through self-reinforcing regulatory loops involving histone reader interactions (Blackledge and Klose, 2021). Recruitment of PRC1 by PRC2 activity is well understood and is mediated by a chromodomain protein (one of the CBX family proteins in mammals) that reads methylated H3K27 (H3K27me) and that is a component of canonical PRC1 complexes. However, the reverse relationship was only more recently described, and involves H2AK119ub1 reader proteins that associate with the PRC2 complex. Intriguingly, variant PRC1 complexes, which lack a chromodomain subunit, can still recognize and propagate H2AK119ub1, suggesting that these complexes can function through a PRC2-independent pathway to generate a silent state. H2AK119ub1 also exerts transcriptional regulatory functions through reader proteins that are entirely independent of the PRCs (Table 1).
PRC2
A functional link between H2AK119ub1 and the PRC2 complex was first demonstrated by proteomic analysis that identified factors enriched on recombinant nucleosome arrays modified with H2AK119ub1 (Kalb et al., 2014a). PRC2 components were identified among the most robust interactors using extracts derived from fly embryos or mammalian ES cells, and the AEBP2 and Jarid2 subunits showed the highest enrichment. Moreover, a reconstituted PRC2 complex containing both AEBP2 and Jarid2 was shown to methylate H3K27 more efficiently on H2AK119ub1-modified nucleosome substrates than on unmodified ones. Subsequent studies uncovered direct interaction of Jarid2 with H2AK119ub1, confirming this factor as a direct reader of this modification (Cooper et al., 2016). A consensus UIM (ubiquitin interaction motif) in Jarid2 was shown to interact with the Ile44 hydrophobic patch of ubiquitin on H2A. Furthermore, mutation of UIM residues impaired the PRC2 recruitment function of Jarid2 in cells.
The cryo-EM-derived structure of PRC2 in complex with a H2AK119ub1-modified nucleosome offers a more complete picture of H2AK119ub1 recognition by PRC2 (Kasinath et al., 2021). The structure reveals direct contact between both Jarid2 and AEBP2 components with ubiquitin, suggesting a multi-valent H2Aub reader function for PRC2 ( Figures 1A,B). Remarkably, both components engage H2AK119ub1 independently on opposite sides of the nucleosome, essentially forming a sandwich with the nucleosome in between them. On one side, the Jarid2 UIM forms an alpha helix that is wedged between the ubiquitin on H2A and nucleosomal DNA; this is stabilized by an adjacent Jarid2 segment bound to the "acidic patch" on the nucleosome surface ( Figure 1B). The acidic patch is a cluster of acidic residues in H2A and H2B that forms a hotspot for factor engagement with the nucleosome (McGinty and Tan, 2021). On the opposite side of the nucleosome, two tandem zinc fingers in AEBP2 contact the ubiquitin attached to the other H2A subunit in the complex through the Ile44 hydrophobic patch. Additional contacts are also seen with H2A/H2B residues on this surface of the nucleosome. This structure explains the requirement for both Jarid2 and AEBP2 for PRC2 to respond to H2AK119ub1 in vitro and provides a general framework for understanding how PRC1 enzymatic activity regulates PRC2 in the context of Polycomb silencing.
Proteomic analysis of PRC2 has complicated this regulatory picture by demonstrating the existence of multiple PRC2 complexes composed of a group of shared core subunits (Ezh2, Suz12, Rbbp4, Rbbp7) and a set of auxiliary subunits. Jarid2 and AEBP2 are only present in one species of PRC2, termed PRC2.2, whereas PRC2.1 is endowed with Polycomb-like (PCL) subunits, DNA-binding proteins with preference for unmethylated CpG islands (Hauri et al., 2016). Thus, a regulatory role for H2AK119ub1 seems to only apply to a subset of PRC2 complexes in vivo, as confirmed by analysis of catalytic-dead Ring1B point mutants in mammalian ES cells (Blackledge et al., 2020; Tamburri et al., 2020). Interestingly, these point mutants also show complete loss of Polycomb silencing function even at PRC2.1 target genes, presumably because PRC1 functions are generally compromised (as is the case in a complete Ring1B knockout).
Figure 1 (A) Cartoon illustration of the cryo-EM structure of PRC2 bound to a H2AK119ub1 nucleosome (based on Kasinath et al., 2021). The H2AK119ub1 reader subunits Jarid2 and AEBP2 are highlighted; dashed lines denote presumed mobile segments of these proteins not visible in the structure. The Jarid2 UIM is depicted on the top surface by a thick line between ubiquitin and the acidic patch region. The AEBP2 zinc finger domain is shown on the bottom surface. The tail of histone H3 is shown positioned in the Ezh2 catalytic site. (B) Pymol rendering of the cryo-EM structure (PDB code 6WKR) showing the nucleosome and H2AK119ub1-binding modules. The H2A/H2B acidic patch on the nucleosome surface is also indicated. See text for details. Created with BioRender.com.
PRC1
As is the case for PRC2, PRC1 does not refer to a single entity but a family of related complexes (Blackledge and Klose, 2021). All PRC1 complexes contain Ring1B and one of six PCGF paralogs. In canonical PRC1 complexes, PCGF2 or PCGF4 are bound to a chromobox (CBX) subunit. CBX proteins are readers for methylated H3K27, and so canonical PRC1 complexes strongly depend on PRC2 enzymatic activity for their genomic localization and function. In contrast, variant PRC1 complexes contain PCGF1, 3, 5, or 6 bound to RYBP/YAF2. These complexes are independent of H3K27 methylation, and the H2AK119-specific ubiquitin ligase activity of these complexes renders overall levels of H2AK119ub1 essentially PRC2independent in cells (Tavares et al., 2012;Blackledge et al., 2014). What drives recruitment of these complexes to chromatin if not PRC2? RYBP is a ubiquitin-binding protein: this interaction is mediated by its zinc-finger domain, which is similar in sequence to NZF zinc-fingers known to bind ubiquitin (Arrigoni et al., 2006). Ubiquitin binding by RYBP is important for PRC2-independent PRC1 recruitment in the context of X-chromosome inactivation in female ES cells (Almeida et al., 2017). Recently, RYBP was shown to interact directly with H2AK119ub1 nucleosomes in vitro; binding to H2Bub1 or unmodified nucleosomes in the same assays was comparatively weak. Moreover, RYBP was found to stimulate catalytic activity of PRC1 specifically in the presence of H2AK119ub1 nucleosomes (Zhao et al., 2020). In vivo, H2AK119ub1 and RYBP are interdependent for their chromatin association in ChIP-seq experiments. Taken altogether, these data support a model in which variant PRC1 propagates H2AK119ub1 through positive feedback. This model also emphasizes the central role H2AK119ub1 has in formation of Polycomb silencing assemblies in mammalian cells.
Other H2AK119ub1 readers involved in transcriptional repression
The repressive function of H2AK119ub1 extends beyond its direct effects on PRC1 and PRC2. The ATP-dependent chromatin remodeling factor RSF (remodeling and spacing factor) was identified as a H2AK119ub1 reader in experiments that purified native chromatin highly enriched in H2AK119ub1 (Zhang Z. et al., 2017). RSF is a heterodimer consisting of a catalytic subunit (SNF2h) and a targeting subunit called RSF1; it is involved in multiple nuclear processes including transcription, DNA repair, and centromere function. RSF1 interacts specifically with H2AK119ub1 nucleosomes through its ubiquitylated-H2Abinding (UAB) domain. The UAB domain is highly conserved among RSF1 orthologs but lacks obvious sequence similarity to other ubiquitin-binding domains; structural analysis of this domain in complex with H2AK119ub1 would be revealing. ChIP-seq and RNA-seq analyses in cell lines demonstrated that RSF1 and Ring1B regulate overlapping cohorts of genes, suggesting that the RSF1-H2AK119ub1 interaction is physiologically relevant. In vitro transcription experiments on chromatin templates suggested that RSF directly represses transcription through H2AK119ub1. Further investigation is needed to determine how general the requirement for Ring1B or H2AK119ub1 is for RSF function, and, conversely, how important RSF might be for Ring1B functions in Polycomb silencing.
H2AK119ub1 was also recently shown to bridge the Polycomb system with de novo DNA methylation. DNA methylation at CpG dinucleotides is a key component of constitutive heterochromatin in mammals and is necessary for silencing of retrotransposons and repetitive elements (Edwards et al., 2017). Polycomb silencing generally operates independently of DNA methylation: variant PRC1 complexes directly bind to CpG islands at target gene promoters, but only when they are not methylated (Blackledge and Klose, 2021). However, the de novo DNA methyltransferase DNMT3A has a latent ability to target CpG islands occupied by H2AK119ub1 (Weinberg et al., 2021). This was revealed through analysis of mutant forms of DNMT3A in which the PWWP domain is impaired; such mutations are frequently found in patients with paragangliomas and microcephalic dwarfism (Weinberg et al., 2021). The DNMT3A PWWP domain is a reader for methylated H3K36, a histone mark that is depleted from promoter CpG islands. PWWP mutations cause DNMT3A redistribution and aberrant de novo DNA methylation at PRC1-regulated CpG island promoters. This effect is dependent on H2AK119ub1, as it is abolished by Ring1B removal, and is due to a direct interaction between DNMT3A and H2AK119ub1 nucleosomes that is mediated by a ubiquitin-dependent recruitment region (UDR; with no sequence homology to known ubiquitin-binding domains) in DNMT3A. Wild-type DNMT3A is preferentially localized by H3K36me through its PWWP domain. However, latent H2AK119ub1 recognition may be relevant in specific cell types or at certain times in development. For example, de novo methylation of CpG islands by DNMT3A has been reported in hematopoietic stem cells (Spencer et al., 2017). Additionally, during neuronal differentiation, many Polycomb-regulated gene promoters acquire DNA methylation (Mohn et al., 2008). Further studies are necessary to determine how the specificity of the DNMT3A UDR for H2AK119ub1 is conferred, to what degree H2A monoubiquitylation contributes to DNMT3A localization in physiological settings, and how the balance between physiological and pathological targeting is achieved.
ZRF1, a H2AK119ub1-reader interaction involved in transcriptional activation
Immunoaffinity purification of H2AK119ub1-containing chromatin identified zuotin-related factor 1 (ZRF1) (Richly et al., 2010). ZRF1 orthologs contain a DnaJ domain and two SANT domains; ubiquitin binding was mapped functionally to a different region of the protein that lacks sequence similarity to known ubiquitin-binding domains. ZRF1 occupies a subset of Ring1B/H2AK119ub1-occupied promoters, but seems to antagonize PRC1 binding to these targets, as suggested by a decrease in PRC1 occupancy upon ZRF1 overexpression. Moreover, upon differentiation of NT2 cells with retinoic acid, ZRF1 binding to its targets is enhanced, in concert with reversal of Polycomb-mediated repression and transcriptional activation. This suggests the intriguing model that H2AK119ub1 has a dual role in the Polycomb system, both in the establishment of the transcriptionally repressed state and in the activation of transcription at Polycomb-regulated genes following stimuli or differentiation. There are a number of outstanding mechanistic questions surrounding this model that have yet to be addressed. How does competition between ZRF1 and PRC1 operate? Can ZRF1 displace RYBP/YAF2 binding to H2AK119ub1 nucleosomes? How does this mechanism relate to H2AK119ub1 removal by the de-ubiquitylase BAP1, also proposed to be important for PRC1 antagonism during gene activation (Campagne et al., 2019)? As the derepression of Polycomb targets remains poorly understood, further insight into a role for H2AK119ub1 in this process would be of great interest.
H2Aub1 in DNA repair
Monoubiquitylation of H2A is a key component of the cellular response to DNA double strand breaks (DSBs) (Mattiroli and Penengo, 2021). These pathways do not directly involve H2AK119ub1 and instead lead to modification of distinct lysines on the N-and C-terminal H2A tails. Ubiquitylation of the N-terminal tail on lysine 13 or 15 (H2AK13ub1 or H2AK15ub1) occurs in the immediate vicinity of DSBs downstream of signaling cascades initiated by the damage checkpoint kinase ataxia telangiectasia mutated (ATM) and is catalyzed by the E3 ligase RNF168. (H2AK13ub1 and H2AK15ub1 are functionally interchangeable, so we will confine our discussion to the latter.) Readers for this modification regulate DSB repair pathway utilization and participate in both major DNA repair pathways in eukaryotic cells: non-homologous end joining (NHEJ) (53BP1), and homologous recombination (HR) (BARD1, RAD18, and RNF169). This suggests a general function for H2AK15ub1 in repair that is modulated by additional pathway-specific signals. These signals often work by resolving competition between 53BP1 and HR factors for binding to H2AK15ub1. They include the methylation state of the tail of histone H4, another chromatin feature sensed by repair factors that helps to dictate repair pathway choice. Ubiquitylation of H2A is also important for HR downstream of this decision point. BARD1, as part of a heterodimeric E3 ligase complex with BRCA1, catalyzes H2A C-terminal tail ubiquitylation at K125, K127 or K129, thereby promoting HR through the readers SMARCAD1 and USP48 (Table 1).
53BP1
53BP1 engagement at chromatin surrounding a DSB is a major signal promoting NHEJ as it blocks resection at the broken ends, a necessary step for strand invasion leading to HR (Panier and Boulton, 2014). 53BP1 selectively binds to H2AK15ub1 and forms a scaffold, allowing other core NHEJ response proteins to assemble near the DSB end (Fradet-Turcotte et al., 2013). 53BP1 binding to the nucleosome involves recognition of two histone modifications: H2AK15ub1 and mono- or di-methylated H4K20 (H4K20me1/2) (Botuyan et al., 2006). The latter modification is not induced by DNA damage but is cell cycle regulated, such that high levels of methylation are present genome-wide only in cells in G1 (Saredi et al., 2016; Pellegrino et al., 2017). This helps to restrict 53BP1 binding and error-prone NHEJ from occurring when homologous chromosomes are present and HR is favorable. 53BP1 binds to H4K20me1/2 through its tandem Tudor domain (TTD) and to H2AK15ub1 through a ubiquitin-dependent recruitment (UDR) motif located just C-terminal to the TTD (Wilson et al., 2016). The UDR was defined experimentally in domain-swap experiments that conferred RNF168-dependent localization of a yeast 53BP1 ortholog to DNA damage foci in mammalian cells. Yeast lack RNF168 and H2AK15ub1, arguing that H2AK15ub1 affinity resides in this motif (Fradet-Turcotte et al., 2013). Although the UDR is not obviously similar to other ubiquitin-binding motifs, it is highly conserved among metazoan 53BP1 orthologs and is required for 53BP1 function in cells. Insight into how the 53BP1 UDR recognizes H2AK15ub1 came from a cryo-EM structure of a 53BP1 fragment containing the TTD and UDR bound to a nucleosome core particle modified with both H4K20me2 and H2AK15ub1 (Wilson et al., 2016) (Figure 2A). The ubiquitin is poorly resolved in the cryo-EM structure of the modified nucleosome alone, but association of the TTD-UDR fragment imparts structural rigidity, allowing clear inference of the ubiquitin conformation. The UDR forms an extended coil that is sandwiched between the nucleosome surface and the ubiquitin moiety, contacting both the ubiquitin Ile44 patch and a solvent-exposed cleft between histones H2B and H4. The C-terminus of the UDR forms a predicted alpha helix that contacts the H2A/H2B acidic patch. The structure also revealed roles for the nucleosome itself in positioning the H2AK15-linked ubiquitin. First, direct contacts were observed between ubiquitin and the H2B C-terminal alpha helix (Figure 2A, right). Second, ordering of the H2A N-terminal tail residues surrounding K15 conferred further structural rigidity and facilitated the UDR interaction; this involved interaction of the H2A R11 and R17 side chains with DNA. Thus, not only does the 53BP1 UDR engage in multivalent interactions with both ubiquitin and the nucleosome surface; it also potentiates ubiquitin interactions with other components of the nucleosome.
Figure 2 (A) Left: Cartoon illustration of the cryo-EM structure of the 53BP1 TTD-UDR fragment bound to a H2AK15ub1/H4K20me2 nucleosome (based on Wilson et al., 2016). H2AK15ub1 is shown connected to the H2A N-terminal tail and projecting over the nucleosome surface. The UDR is represented by a thick blue line contacting the H2B/H4 cleft, H2AK15ub1, and the acidic patch. The TTD bound to H4K20me2 is separated from the UDR by an unstructured region (dashed line). Right: Pymol rendering of the cryo-EM structure (PDB code 5KGF) showing the 53BP1 UDR bound to a H2AK15ub1 nucleosome. The TTD is not shown in this view. The H2B C-terminal helix is indicated. (B) Left: Cartoon illustration of the cryo-EM structure of the BARD1 ARD-BRCT region bound to a H2AK15ub1 nucleosome (based on Dai et al., 2021; Hu et al., 2021). Right: Pymol rendering of the cryo-EM structure (PDB code 7LYC) with the BARD1 ARD-BRCT region uniformly coloured black. The H2B C-terminal helix is indicated. See text for details. Created with BioRender.com.
RNF169 and RAD18
RNF169 is a paralog of RNF168, but its RING finger is not required for its function in DNA repair. Instead, DNA repair function requires a consensus MIU motif that is part of a ubiquitin-dependent recruitment module (UDM). The UDM directs RNF169 to double-strand breaks in a RNF168-dependent manner and binds specifically to H2AK15ub1 in a nucleosome context (Panier et al., 2012). Interestingly, RNF168 harbors a similar UDM, which presumably leads to amplification of the H2AK15ub1 signal at damaged sites. Unlike RNF169, RNF168 has a second UDM that is important for initial RNF168 recruitment to damaged chromatin through recognition of RNF8-dependent K63-linked polyubiquitylation on histone H1 (an event that is immediately downstream of the ATM kinase) (Doil et al., 2009;Stewart et al., 2009). RAD18 is similar to RNF169 in many respects: it is an E3 ubiquitin ligase harboring a UDM, its localization to DSBs is RNF168-dependent, and its positive role in HR requires ubiquitin binding but not E3 ligase activity (Panier et al., 2012).
The UDMs in RNF169 and RAD18 (and in RNF168) are bipartite in nature and are composed of a consensus ubiquitin-recognition motif and an adjacent nucleosome-binding motif termed the "LR" motif (LR refers to a conserved dipeptide within the motif). Both motifs are necessary for targeting to DSB foci in cells, and the LR motif can in fact be transferred to confer a similar localization on unrelated ubiquitin-binding proteins (Panier et al., 2012). The methyl-TROSY NMR structure of the RNF169 UDM bound to a H2AK15ub1 nucleosome elegantly validates this bipartite organization. The RNF169 MIU motif contacts the Ile44 patch of H2AK15-linked ubiquitin, and the interaction interface is oriented away from the nucleosome. The LR motif, extending from the MIU alpha helix, contacts the H2A/H2B acidic patch (Hu et al., 2017;Kitevski-LeBlanc et al., 2017). A similar division of labor applies to the RAD18 UDM. In this case, the ubiquitin-binding motif is a consensus UBZ motif; this contacts ubiquitin with the Ile44 patch facing the nucleosome, such that the UBZ helix is sandwiched between ubiquitin and the H2A/H2B acidic patch. The RAD18 LR motif then makes additional stabilizing contacts with the H2A/H2B acidic patch.
These structures, along with complementary biochemical experiments, suggest that RNF169 and RAD18 promote HR by competing with 53BP1 for binding to nucleosomes proximal to DSBs. There is striking overlap in binding sites for these factors on the nucleosome acidic patch. Moreover, RNF169 and RAD18 UDMs bind H2AK15ub1 nucleosomes (with or without H4K20me) with affinities that are two orders of magnitude greater than that of the 53BP1 TTD-UDR segment. Competitive binding assays in vitro showed that RNF169 or RAD18 displace 53BP1 from H2AK15ub1/H4K20me nucleosomes (Hu et al., 2017). This is consistent with experiments in which these factors displaced 53BP1 from DSB foci when overexpressed in cells (Poulsen et al., 2012;Helchowski et al., 2013;An et al., 2018;Nambiar et al., 2019).
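The displacement behaviour follows directly from the measured affinity difference. As a back-of-the-envelope illustration (not a model of the published assays), the sketch below computes single-site occupancy for two competitors present at equal concentrations, using purely hypothetical Kd values chosen only to reflect the roughly 100-fold affinity gap, and assuming the competitors are in excess so that free and total concentrations are interchangeable.

```python
# Back-of-the-envelope illustration of competition for a single binding site.
# Kd values are hypothetical placeholders chosen only to reflect the roughly
# 100-fold affinity difference described in the text; both competitors are
# assumed to be in excess over nucleosomes, so free and total concentrations
# are treated as equal.

def occupancy(conc_a, kd_a, conc_b, kd_b):
    """Fraction of sites bound by competitor A when A and B compete for one site."""
    a, b = conc_a / kd_a, conc_b / kd_b
    return a / (1.0 + a + b)

kd_53bp1 = 1.0    # arbitrary concentration units
kd_rnf169 = 0.01  # ~100-fold tighter binding

for conc in (0.1, 1.0, 10.0):  # equal concentrations of both competitors
    f_53bp1 = occupancy(conc, kd_53bp1, conc, kd_rnf169)
    f_rnf169 = occupancy(conc, kd_rnf169, conc, kd_53bp1)
    print(f"[competitor] = {conc:5.1f}: 53BP1 bound {f_53bp1:.3f}, "
          f"RNF169 bound {f_rnf169:.3f}")
```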
BARD1
Investigation of the recruitment of the BRCA1/BARD1 complex to sites of DNA damage revealed that the HR pathway utilizes a multivalent nucleosome recognition mechanism that parallels that of 53BP1. BRCA1 is a well-known tumour suppressor; its roles in promoting HR and opposing 53BP1 function have been studied extensively (Zhu et al., 2011;Witus et al., 2021). The BRCA1/BARD1 complex is a heterodimer of E3 RING-finger ubiquitin ligases; RING fingers in both proteins are required for ubiquitylation of the complex's target sites on the C-terminus of histone H2A (see below). BARD1 is essential for the complex to bind to chromatin (Becker et al., 2021;Dai et al., 2021;Hu et al., 2021). Chromatin binding is conferred by the ankyrin repeat domain (ARD) and two tandem BRCA1 C-terminal domain (BRCT) repeats, two ubiquitous and versatile domains with a variety of interaction partners. In BARD1, the ARD binds to the unmethylated histone H4 tail (H4K20me0), and the BRCT domain binds to H2AK15ub1 (Nakamura et al., 2019;Becker et al., 2021). The interaction of the ARD with H4K20me0 stabilizes BRCA1/BARD1 at DSBs in the S and G2 phases of the cell cycle, times at which unmethylated H4K20 is abundant and HR is favored. This contrasts with the binding of 53BP1 to H4K20me in G1/M. A cryo-EM structure has been determined for the complex of the ARD-BRCT segment of BARD1 bound to a nucleosome ubiquitylated at H2AK15 and at the functionally related site H2AK13 (Dai et al., 2021;Hu et al., 2021) (Figure 2B). In the structure, the ARD makes extensive interactions with H4K20me0 by forming an acidic cavity around it; this may prevent methylation of H4K20. The ARD and the second (C-terminal) BRCT repeat are folded together in a V-shaped conformation that sits on the nucleosome surface; the BRCT interacts with the single ubiquitin moiety visible in the structure. This may correspond to H2AK13ub1 or H2AK15ub1, but binding seems to be restricted to one histone-linked ubiquitin. Interestingly, the structure reveals a unique ubiquitin-binding interface that is shared between the BRCT and the H2B C-terminal helix, with the Ile44 patch primarily contacted by the latter (Figure 2B, right). As in other H2AK15ub1 nucleosome structures, the H2A/H2B acidic patch is a critical interaction surface, making extensive contact with the second BRCT. BARD1 interaction with ubiquitin includes a direct contact with ubiquitin residue K63, implying that the complex would prevent K63-linked polyubiquitin chain formation. This may be relevant to how the BRCA1/BARD1 complex suppresses NHEJ in favor of HR, although further studies are needed to confirm this.
How is this mode of nucleosome recognition by BRCA1/BARD1 coupled to its enzymatic activity toward the H2A C-terminal tail? Remarkably, independent cryo-EM structures of the RING fingers of BRCA1 and BARD1 in complex with an unmodified nucleosome indicate binding to the same nucleosome surface as the ARD-BRCT regions (Hu et al., 2021). This may indicate that binding of the ARD-BRCT regions to one face of the nucleosome promotes binding of the RING fingers (and catalysis) on the opposite surface of the same nucleosome. An alternative model is that ARD-BRCT and the RING fingers create a bridge between neighboring nucleosomes, an intriguing possibility that remains to be investigated.
SMARCAD1 and USP48
BRCA1/BARD1 is an E3 ubiquitin ligase that monoubiquitylates H2AK125/127/129 (Kalb et al., 2014b). The physiological relevance of this activity for BRCA1 function in vivo is controversial, although there is compelling evidence for some role for H2A monoubiquitylation downstream of BRCA1 in the context of DNA damage repair and heterochromatin formation (Zhu et al., 2011;Densham et al., 2016). There is also evidence pointing to two DNA repair factors as candidate readers for these H2A ubiquitylation sites. One is the SWI/SNF-related ATP-dependent chromatin remodeler SMARCAD1 (Densham et al., 2016). SMARCAD1 and BRCA1/BARD1 function in the same pathway in cells to regulate DNA resection at DSBs and promote HR. This seems to occur through nucleosome eviction and the redistribution of 53BP1 to sites distal from the DSB (Uckelmann et al., 2018). SMARCAD1 is an attractive reader candidate as it contains two ubiquitin-binding CUE domains and can bind to nucleosomes assembled in vitro with a H2A-ubiquitin fusion protein. Mutations in the CUE domains also compromise its function in cells (Densham et al., 2016).
A second repair factor that acts downstream of BRCA1/BARD1 activity is USP48, a DUB that removes H2AK125/127/129ub1 (Uckelmann et al., 2018). USP48's recruitment to damage-proximal nucleosomes is markedly reduced in cells with depleted BRCA1/BARD1. Importantly, the removal of H2AK125/127/129ub1 by USP48 prevents the recruitment of SMARCAD1 and subsequently reduces chromatin remodeling around the damage site. 53BP1 also remains present around the site, which antagonizes DNA end resection and HR. In cells where either USP48 or 53BP1 is depleted, unregulated DNA end resection results in single-strand annealing, which is mutagenic for the cell. Interestingly, USP48 DUB activity toward H2AK125/127/129ub1 requires the presence of an auxiliary ubiquitin somewhere on the nucleosome (i.e., the preferred substrate is a nucleosome that is multi-monoubiquitylated) (Uckelmann et al., 2018). The specificity of the auxiliary site has not been defined, but this suggests that USP48 may be a DUB for certain histone ubiquitylation sites and a reader for others. The requirement for an auxiliary ubiquitin may add an additional layer of regulation to the DNA repair response by allowing crosstalk between multiple ubiquitylated sites.
H2Aub1 summary
Detailed study of H2Aub1 readers has solidified the roles of these modifications as chromatin binding determinants, enhancing the affinity of a variety of factors (chiefly facultative heterochromatin proteins or DNA repair proteins) for specific genomic sites. For H2AK119ub1, a key issue moving forward is the extent to which readers outside of the Polycomb group of regulators contribute to its functions. Further illumination of the roles of the more recently discovered H2Aub1 modifications in the DNA damage response is also of great interest.
Histone H2B ubiquitylation
Histone H2B monoubiquitylation (H2Bub1) marks transcribed genes in all eukaryotes, suggesting that it is a fundamental feature of RNAPII transcription (Nickel et al., 1989;Tanny, 2014). The strong evolutionary conservation contrasts with ubiquitylation of H2A, which is absent in unicellular eukaryotes. The predominant form of H2Bub1 is modified on a conserved lysine corresponding to K120 in human H2B; this residue is in the H2B C-terminal helix, which, in the context of the nucleosome, is positioned on the nucleosome surface adjacent to the H2A/H2B acidic patch. H2Bub1 is catalyzed by the E2 ubiquitin-conjugating enzyme RAD6 and a dimeric E3 ligase complex composed of orthologs of yeast BRE1 (the RNF20/40 heterodimer in humans) (Robzyk et al., 2000;Hwang et al., 2003). These enzymes deposit H2Bub1 during the elongation phase of RNAPII transcription. The precise mechanisms that underlie this co-transcriptional process are still being elucidated, but it is clear that H2Bub1 is deposited through action of the core RNAPII transcription elongation machinery. The key molecular events are thought to be the following: phosphorylation of Spt5 by positive transcription elongation factor b (P-TEFb), binding of phosphorylated Spt5 by the elongation factor Rtf1, and stimulation of H2Bub1 catalysis through interactions between Rtf1, Rad6, and the H2A/H2B acidic patch (Tanny, 2014;Van Oss et al., 2016;Cucinotta et al., 2019). The Polymerase Associated Factor (PAF) complex is important for stabilizing the Rtf1 interaction with elongating RNAPII (Mbogning et al., 2013;Cao et al., 2015;Van Oss et al., 2016). H2Bub1 is rapidly turned over during transcription and various DUBs have been implicated in this, most notably the DUB module of the SAGA co-activator complex (Morgan et al., 2016).
Although the functions of H2Bub1 during transcription are not fully understood, it clearly plays important gene regulatory roles. Ablation of H2Bub1 impairs embryonic development in the mouse and prevents stem cell differentiation (Fuchs et al., 2012;Karpiuk et al., 2012). Furthermore, altered H2Bub1 levels are associated with various cancers (Tarcic et al., 2016, 2017;Marsh and Dickson, 2019). Some of these effects are likely attributable to the direct link between H2Bub1 and histone methyltransferases specific for lysines 4 and 79 on histone H3 (H3K4 and K79 methylation) (Chandrasekharan et al., 2010). Like H2Bub1, these methylations are near-universal features of transcribed chromatin with key roles in embryonic development and cell growth (Shilatifard, 2012;Tanny, 2014;Krivtsov et al., 2017). H2Bub1 also acts independently of histone H3 methylation; relevant readers for these functions have not been identified (Tanny et al., 2007;Fleming et al., 2008;Minsky et al., 2008;Chandrasekharan et al., 2010). As there is significant evidence pointing to a role for H2Bub1 in regulating nucleosome structural transitions that accompany transcription, we highlight connections to ATP-dependent chromatin remodeling factors and chromatin assembly factors that may help to mediate these functions (Table 1).
Dot1L
The requirement of H2Bub1 for H3K4 and K79 methylation was the first example of regulatory crosstalk between modifications on different histone tails (Chandrasekharan et al., 2010). It was first established genetically in budding yeast; in this system, methyltransferase activity for H3K4 and H3K79 resides in one enzyme for each site (Set1 and Dot1, respectively) (Wu et al., 2013;Xue et al., 2019;Kwon et al., 2020). Generally, MLL family methyltransferases exhibit smaller effect sizes than either Dot1 or Set1 enzymes.
H3K79 methylation is distributed in transcribed regions of genes in a pattern similar to that for H2Bub1 (Vlaming and van Leeuwen, 2016). H3K79 is located within the globular domain of histone H3 and is positioned on the nucleosome surface, suggesting physical proximity with H2Bub1 that would lend itself to H2Bub1-H3K79me crosstalk. Cryo-EM-derived structures show that the C-terminal portion of bound Dot1L (the human Dot1 ortholog) engages ubiquitin and the H2A/H2B acidic patch (Anderson et al., 2019;Jang et al., 2019;Valencia-Sánchez et al., 2019). Ubiquitin binding involves evolutionarily conserved helix and loop regions in Dot1L; these define a novel ubiquitin-binding motif that contacts a non-canonical hydrophobic patch on the ubiquitin moiety centered on Ile36. An invariant arginine adjacent to the ubiquitin-binding region is inserted into the acidic patch. Comparison with a structure of Dot1L bound to an unmodified nucleosome shows that the acidic patch interaction is maintained, and biochemical analyses have shown that Dot1L binds unmodified and H2Bub1 nucleosomes with similar affinities (McGinty et al., 2008;Valencia-Sánchez et al., 2019).
However, bound Dot1L exhibits greater conformational flexibility on unmodified nucleosomes than on H2Bub1 nucleosomes, as indicated by cryo-EM and by site-specific crosslinking studies (Zhou et al., 2016;Valencia-Sánchez et al., 2019). H2Bub1 restricts this flexibility, allowing formation of a Dot1L-nucleosome complex that is compatible with activity. Interestingly, H2Bub1 is necessary, but not sufficient, for Dot1L activation. Transition to the active state also requires positioning of the Dot1L catalytic site through interaction with the N-terminal tail of histone H4, as well as a conformational change in histone H3 that positions the H3K79 side chain for catalysis. Although binding to H2Bub1 does not contribute to Dot1L affinity for the nucleosome, the binding energy derived from this interaction may pay for the conformational change that is necessary for activity.
COMPASS
H2Bub1 is also required for H3K4 di- and tri-methylation (H3K4me2/3), marks that are near-universal features of eukaryotic promoters. COMPASS family H3K4 methyltransferase complexes contain a catalytic subunit related to yeast Set1 and several auxiliary subunits, all of which are conserved from yeast to humans. Cryo-EM structures of yeast COMPASS bound to an H2Bub1 nucleosome were determined using a catalytically competent version of COMPASS which recapitulates H2Bub1 dependence in vitro (but lacking the N-terminal half of Set1 and two auxiliary subunits). COMPASS engages one surface of the nucleosome, with Set1 and Swd1 (ortholog of mammalian RBBP5) subunits making extensive contact with ubiquitin (Worden et al., 2020) (Figure 3A). The tail of histone H3 loops between the gyres of the DNA superhelix to position H3K4 in the Set1 active site. Set1 interacts with ubiquitin via an alpha helix that includes the arginine-rich motif (ARM); this region of the protein is immediately adjacent to the catalytic SET domain and has been implicated in H2Bub1-dependent activity in biochemical assays (Kim et al., 2013). The hydrophobic C-terminal end of the ARM helix engages the Ile36 patch of ubiquitin, whereas the N-terminal portion of the helix extends over the nucleosome surface and makes electrostatic contact with the H2A/H2B acidic patch (Figure 3A, right). Swd1 is a key organizing component of COMPASS, making contacts with almost all the other subunits. In the nucleosome-bound complex, the central β-propeller domain of Swd1 interacts with all four core histones and with DNA, whereas the N- and C-terminal extensions contact the Ile44 hydrophobic patch of ubiquitin.
As is the case for Dot1L, H2Bub1 does not alter COMPASS binding affinity for nucleosomes, suggesting that H2Bub1 affects COMPASS catalytic activity (Worden et al., 2020). A comparison to the structure of COMPASS bound to an unmodified nucleosome reveals that although the overall structures are very similar, H2Bub1 induces folding of the C-terminal half of the ARM helix. This is associated with stabilization of the N-terminal portion of the SET domain and the H3 N-terminus in the active site, likely facilitating catalysis.
MLL complexes
MLL complexes are unique to metazoans and are organized around an MLL H3K4 methyltransferase subunit, so named for the involvement of these factors in mixed lineage leukemia. Catalytic subunit aside, MLL complexes have a similar composition to COMPASS, including several shared auxiliary subunits (Krivtsov et al., 2017). These conserved subunits (WDR5, RBBP5, ASH2L, and DPY30), along with the MLL1 SET domain, constitute a fully active form of the MLL1 complex that was analyzed by cryo-EM. Structures of the complex bound to an H2Bub1 nucleosome have revealed a key role for RBBP5 in nucleosome and ubiquitin recognition (Figure 3B). This is similar to its role in COMPASS complexes, albeit with a distinct mode of ubiquitin binding (Xue et al., 2019). In the MLL1 structure, a helical insertion within the β-propeller domain packs against the Ile44 hydrophobic patch of ubiquitin, an interaction that is stabilized by adjacent electrostatic contacts (Figure 3B, right). There is considerable plasticity in the RBBP5-H2Bub1 interaction, as alternate β-propeller surfaces contact ubiquitin in some cryo-EM images. The RBBP5 β-propeller domain contacts the nucleosome surface in the H2Bub1 nucleosome complex at the H2B-H4 cleft, with secondary contacts proximal to H3K79 and the H2B C-terminal helix. The catalytic SET domain of the MLL1 subunit is also closely engaged with the nucleosome surface through contacts with the C-terminal helix of histone H2A, with H3K4 looping between the DNA gyres into the active site (as observed for COMPASS). In this structure, there is no close contact between the catalytic domain of the complex and H2Bub1 (Figure 3B, right). Interestingly, this catalytically engaged structural arrangement is also observed in some cryo-EM images of MLL1 complex bound to an unmodified nucleosome. However, complexes with an unmodified nucleosome also adopt an alternate conformation (not observed with H2Bub1 nucleosomes) in which RBBP5 and the MLL1 SET domain are not in close contact with the nucleosome surface. In this conformation, RBBP5 is pushed toward the periphery of the nucleosome and primarily contacts DNA (Xue et al., 2019). Thus, these structures point to another example of H2Bub1 favoring an active conformation of an enzyme complex.
FACT
It is clear that H2Bub1 also functions independently of H3K4me and H3K79me to regulate gene expression and chromatin structure, but the relevant mechanisms remain uncertain. Connections between H2Bub1 and the Facilitates chromatin transcription (FACT) complex have been documented in several model systems. FACT is a histone chaperone complex composed of Spt16 and SSRP1/Pob3 subunits. Its primary function in vivo is to maintain nucleosome structure during RNAPII transcription and DNA replication (Formosa and Winston, 2020). In vitro and in vivo experiments have demonstrated that RNAPII elongation through chromatin is associated with removal and re-deposition of H2A/H2B dimers in transcribed nucleosomes (Belotserkovskaya et al., 2003;Ramachandran et al., 2017;Yaakov et al., 2021). Subnucleosomal particles in which one of the two H2A/H2B dimers is missing (called hexasomes) are intermediates of this exchange process and are enriched in transcribed genes. RNAPII elongation is preferentially associated with loss of the promoter-distal H2A/H2B dimer (Ramachandran et al., 2017). FACT is the primary histone chaperone implicated in ensuring this exchange occurs in a manner that preserves genic nucleosome structure (Ramachandran et al., 2017). Evidence linking H2Bub1 to FACT includes the following: H2Bub1 occupancy is highly correlated with that of elongating RNAPII in vivo; loss of H2Bub1 and FACT both result in disruption of chromatin structure in gene coding regions (Fleming et al., 2008;Chandrasekharan et al., 2009;Batta et al., 2011;Murawska et al., 2020); H2Bub1 stimulates FACT-dependent transcription through nucleosomes in vitro (Pavri et al., 2006); H2Bub1 influences FACT-dependent nucleosome assembly in vitro (Murawska et al., 2020). FACT can also stimulate deubiquitylation of H2Bub1 by the DUB Ubp10 (Nune et al., 2019). It remains unclear to what extent FACT acts as a bona fide H2Bub1 "reader." The unstructured C-terminal domains of both Spt16 and SSRP1/Pob3 bind free H2A-H2B dimers in the context of a partially unfolded nucleosome, an interaction that is thought to represent a disassembly/reassembly intermediate (Kemble et al., 2015;Liu et al., 2020;Farnung et al., 2021). These interactions involve key DNA-binding residues on H2A and H2B, and likely shield the dimer from DNA interactions that could interfere with co-transcriptional nucleosome disassembly/reassembly. How H2Bub1 may directly impinge on these interactions is unclear and will require further structural analysis. In vivo studies in other systems suggest that the functional similarities of H2Bub1 and FACT could involve effects on other regulators (Sanso et al., 2012, 2020).
ATP-dependent chromatin remodelers
H2Bub1 has been implicated in the function of various ATP-dependent chromatin remodelers. A proteomic approach to isolate human proteins that bound preferentially to nucleosome arrays harboring H2Bub1 identified the SWI/SNF complex (Shema-Yaacoby et al., 2013). SWI/SNF and RNF20/40 were shown to jointly promote transcription of a subset of genes, but the mechanistic basis for this, and whether SWI/SNF interaction with the nucleosome is directly impacted by H2Bub1, have yet to be established. In vitro, H2Bub1 nucleosomes have been shown to be refractory to remodeling by several ATP-dependent remodelers, including ISWI and, interestingly, SWI/SNF (Dann et al., 2017;Mashtalir et al., 2021). Follow-up on these studies is needed to determine the in vivo significance of these effects.
The ATP-dependent remodelers most closely tied to H2Bub1 function are those related to budding yeast Chd1. Chd1 and related orthologs are involved in nucleosome organization within genes, and have regulatory links to H2Bub1 (Hennig et al., 2012;Lee et al., 2012;Pointner et al., 2012;Smolle et al., 2012;de Dieuleveult et al., 2016). Chd1 also physically interacts with FACT, and is required for FACT distribution along transcribed genes (Farnung et al., 2021;Jeronimo et al., 2021). In vitro assays have demonstrated that Chd1 remodeling activity is stimulated 2-3-fold by installation of H2Bub1 on at least one of the two H2A/H2B dimers in the nucleosome (Levendosky et al., 2016). This suggests that H2Bub1 may stimulate the nucleosome spacing and positioning function of Chd1 in vivo. However, cryo-EM structural analysis has shown that this effect is not due to a H2Bub1 "reader" function of Chd1. Chd1 (trapped in a nucleotide-bound state using ADP-BeF) engages the nucleosome primarily through contacts with DNA, and unwraps two helical turns of DNA from one end. DNA unwrapping depends on bound nucleotide and is thus associated with the active state of the enzyme. The ATPase domain contacts the tail of histone H4 (as is observed for other remodelers) and helix 1 of histone H3, but there is no direct contact with H2A/H2B dimers or with ubiquitin (Sundaramoorthy et al., 2018). The basis for the effect of H2Bub1 on Chd1 activity remains unclear, but may be related to interactions between the ubiquitin and the unwrapped DNA. These interactions involve the K48 and R54 residues of ubiquitin and lead to repositioning of ubiquitin closer to the nucleosome periphery on the side on which DNA is unwrapped. Thus, H2Bub1 may enhance Chd1 activity by stabilizing the unwrapped state. This raises the possibility that H2Bub1 may affect chromatin remodeling activity independently of dedicated reader proteins, by influencing the intrinsic stability of nucleosome remodeling intermediates.
H2Bub1 summary
H2Bub1 has emerged as an allosteric regulator of multiple euchromatic histone methyltransferases. Its non-methyltransferase readers remain poorly defined. Further investigation of these mechanisms will likely lead to important insights into co-transcriptional regulation of gene expression.
Histone H3 ubiquitylation
Histone H3 ubiquitylation was first identified in vivo in elongating spermatids of rat testes (Chen et al., 1998), and its low abundance in somatic cells precluded detailed functional studies until recently. As is the case for H2Aub1, there is no single predominant site associated with H3 monoubiquitylation, and it is catalyzed by multiple E3 ligases. One key emerging function for H3ub1 is the regulation of heterochromatin formation. Whereas H2Aub1 regulates PRC-dependent facultative heterochromatin, H3ub1 has been implicated in constitutive heterochromatin that requires the DNA methyltransferase Dnmt1 and H3K9 methylation. This is exemplified in mammalian cells by the E3 ubiquitin ligase Uhrf1, which ubiquitylates lysines 14, 18 and 23 on H3 to promote recruitment of Dnmt1. H3K14ub1 also supports heterochromatin formation by activating H3K9 methyltransferases. In contrast, modification of H3K23, K36, and K37 by the E3 ubiquitin ligase Nedd4 may target the histone acetyltransferase GCN5 to stimulate gene expression (Table 1).
DNMT1
The DNA methyltransferase DNMT1 is a "maintenance" methyltransferase: its preferred substrate is unmethylated cytosine that is paired with a methylated cytosine on the opposite DNA strand. These "hemi-methylated" substrates arise during S phase of the cell cycle, immediately after replication of DNA regions containing methylated cytosines. This is consistent with the role of DNMT1 in maintaining DNA methylation through cell division and with its localization to replication foci in S phase (Leonhardt et al., 1992;Edwards et al., 2017).
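To make the maintenance logic concrete, the toy sketch below (a deliberately simplified illustration, not a description of any published assay) tracks a short run of CpG sites through one round of replication: the nascent strand starts unmethylated, hemi-methylated sites appear wherever the parental strand was methylated, and an activity restricted to hemi-methylated sites, as described for DNMT1, restores the parental pattern.

```python
# Toy model of maintenance methylation (a simplified sketch). Each parental
# CpG is methylated (1) or unmethylated (0); replication pairs it with an
# unmethylated CpG on the nascent strand, creating hemi-methylated sites
# wherever the parent was methylated. An activity restricted to
# hemi-methylated sites restores the parental pattern on the nascent strand.
parental = [1, 0, 1, 1, 0, 0, 1]        # methylation states of parental CpGs
nascent = [0] * len(parental)           # nascent strand starts unmethylated

hemi_sites = [i for i, m in enumerate(parental) if m == 1 and nascent[i] == 0]
for i in hemi_sites:                    # maintenance: act only on hemi-methylated CpGs
    nascent[i] = 1

print("parental:", parental)
print("nascent :", nascent)
assert nascent == parental              # the pattern is faithfully propagated
```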
Uhrf1 is the key regulatory factor mediating the maintenance methylation function of DNMT1 (Nishiyama et al., 2016). Uhrf1 is a multi-domain protein that includes a ubiquitin-like domain (UBL), a SET- and RING-associated (SRA) domain and an E3 RING ubiquitin ligase domain. Each of these domains promotes DNMT1 function through distinct but inter-related mechanisms. The UBL domain directly binds the replication foci targeting sequence (RFTS) domain of DNMT1. The SRA domain binds to hemi-methylated DNA; this binding, in conjunction with the RFTS interaction, restricts DNMT1 activity to hemi-methylated substrates and enhances its methylation maintenance function (Avvakumov et al., 2008;Li et al., 2018). Finally, the E3 RING domain activates and recruits DNMT1 by ubiquitylating histone H3, expanding the catalog of enzyme regulatory functions for histone ubiquitylation (Nishiyama et al., 2013;Qin et al., 2015).
Uhrf1 catalyzes monoubiquitylation of histone H3 on lysines 14, 18, and 23; appearance of these modified forms is dependent on ongoing DNA replication in a cell-free system derived from Xenopus oocytes (Nishiyama et al., 2013). Dnmt1 is the reader for these modifications, and specific binding of Dnmt1 to ubiquitylated H3 occurs through a consensus UIM found within the RFTS domain (Nishiyama et al., 2013;Qin et al., 2015). The preferred binding target of this UIM is a doubly monoubiquitylated form of the histone H3 tail, indicating that it engages two ubiquitin moieties simultaneously (Ishiyama et al., 2017). X-ray crystal structures of the human Dnmt1 RFTS domain bound to the histone H3 tail monoubiquitylated at lysines 18 and 23 reveal that the UIM takes the form of an extended loop (the ubiquitin recognition loop or URL) sandwiched between the two ubiquitins (Ishiyama et al., 2017;Li et al., 2018) (Figure 4). Hydrophobic residues on one side of the URL contact both the Ile44 patch and the Ile36 patch of H3K18ub1. The opposite side of the URL contacts an atypical interaction surface on H3K23ub1 consisting of Lys48 and Gln49, whereas the Ile44 patch of this ubiquitin is contacted by RFTS hydrophobic residues adjacent to the UIM. Interestingly, the RFTS domain also interacts with other parts of the H3 tail, as does H3K23ub1. The H3 tail is well-resolved in the structure and adopts a kinked conformation with a sharp (>90°) turn at Gly12 and Gly13. H3 residues N-terminal to this wind through a cleft between the RFTS (a region C-terminal to the URL) and H3K23ub1 (Ishiyama et al., 2017) (Figure 4). Thus, the DNMT1 ubiquitylated histone reader interaction involves a network of contacts between the RFTS domain, two adjacent ubiquitin moieties, and the histone H3 tail. RFTS binding to H3ub not only targets Dnmt1 to S-phase chromatin, but also alleviates autoinhibitory binding of RFTS to the Dnmt1 catalytic pocket. An independent interaction of RFTS with the UBL domain of Uhrf1 has a similar dual role, arguing that Uhrf1 employs multiple mechanisms to ensure Dnmt1 is specifically active at sites of DNA replication.
H3K9 methyltransferases
Uhrf1 also restricts Dnmt1 action to heterochromatic regions of the genome. This occurs through interaction between the Uhrf1 tandem Tudor domain (TTD) and methylated H3K9 (H3K9me), the major histone modification associated with constitutive heterochromatin (Rothbart et al., 2012, 2013). Proteins related to Heterochromatin protein 1 (HP1) bind to H3K9me through their chromodomains and create a condensed chromatin structure through a mechanism that involves liquid-liquid phase separation (Larson et al., 2017;Strom et al., 2017;Sanulli et al., 2019). In mammalian cells, there are 5 different SET-domain-containing methyltransferases that deposit H3K9me (SETDB1, SUV39H1, SUV39H2, GLP, and G9a). However, in fission yeast, there is only a single H3K9 methyltransferase, Clr4. Interestingly, Clr4 is found in a complex (the Clr4 methyltransferase complex or CLRC) that also contains a Cullin-RING ubiquitin ligase, the substrate of which was recently identified as lysine 14 of histone H3 (Oya et al., 2019).
H3K14ub1 greatly stimulates the methyltransferase activity of Clr4 toward H3K9 on the same histone tail in vitro, and Clr4 preferentially methylates H3K14ub1-containing nucleosomes (Oya et al., 2019;Stirpe et al., 2021). Biochemical studies indicate that a region of Clr4 adjacent to the SET domain (termed the ubiquitin-binding region or UBR), as well as a region distant from the catalytic domain near the N-terminus, are both involved in ubiquitin recognition; clarification awaits more comprehensive structural studies. Notably, a similar effect of H3K14ub1 is apparent on the activity of the mammalian ortholog of Clr4, SUV39H1 (at least in vitro), suggesting that ubiquitin regulation of H3K9me is a conserved feature of heterochromatin formation (Stirpe et al., 2021).
There are a number of outstanding questions that need to be resolved to fully appreciate the significance of this mechanism. In vivo studies in fission yeast show that H3K14ub1 is important at some, but not all, sites of heterochromatin, and leave open the possibility that Clr4 could be regulated by other ubiquitylated substrates (Oya et al., 2019). The effect of H3K14ub1 is selective in vitro as well: SUV39H1, which has a UBR that is homologous to that in Clr4, is stimulated by H3K14ub1, but G9a, which lacks this region, is not (Stirpe et al., 2021). The identity of the relevant H3K14 ubiquitin ligase in mammalian cells also remains unclear (although Uhrf1 would seem a promising candidate).
FIGURE 4
The DNMT1 RFTS domain reads a multi-monoubiquitylated histone H3 tail. Top: Cartoon illustration of the X-ray crystal structure of the DNMT1 RFTS domain bound to H3K18/K23ub1 (based on Ishiyama et al., 2017). The ubiquitin-binding URL is depicted projecting from the RFTS between the two ubiquitin moieties. The C-terminal end of the H3 tail was not visible in the structure and is shown as a dashed line. Close contact between the RFTS, the H3 tail, and H3K23ub1 is shown. Bottom: Pymol rendering of the X-ray crystal structure (PDB code 5WVO). See text for details. Created with BioRender.com.
Gcn5
Nedd4-catalyzed H3 ubiquitylation has been associated with promoting transcriptional activation. Upon glucose stimulation, Nedd4 catalyzes monoubiquitylation at multiple sites on H3 including lysines 23, 36, and 37. Loss of this activity is associated with decreased expression of glucose-regulated genes and decreased levels of H3 acetylation, specifically at lysines 9 and 14 (K9/K14). This was shown to reduce tumour formation in cellular models. Crosstalk between H3 ubiquitylation and H3 acetylation was specifically linked to the histone acetyltransferase (HAT) Gcn5, which is known to preferentially acetylate H3K9/K14. Loss of Gcn5 phenocopies the loss of Nedd4 and of H3 ubiquitylation. Co-immunoprecipitation and in vitro assays showed that Gcn5 preferentially interacts with monoubiquitylated H3. The mechanistic basis for H3ub-dependent stimulation of Gcn5 has yet to be determined and awaits more detailed biochemical analyses.
H3ub1 summary
By virtue of its interaction with Dnmt1, H3ub1 has emerged as an important regulator of constitutive heterochromatin formation in metazoans. Further investigation of H3ub1 regulation of histone H3 lysine 9 methyltransferases may reveal an even broader heterochromatin function. It will also be of interest to determine the extent to which other H3ub1 modifications could have roles in gene activation.
General conclusion and perspectives
We have highlighted diverse modes of ubiquitylated histone recognition by structurally distinct reader proteins. Three common themes have emerged from the in-depth biochemical and structural analyses that will be useful in guiding future studies of this important class of regulators.
Ubiquitylated histone readers also recognize other sites on the nucleosome
Binding specificity of histone methylation or acetylation readers is dictated by the presence of the modification and the amino acids surrounding the modified site. However, the affinity of reader domains for isolated histone tails is often weak: Kd values for these interactions commonly reach the millimolar range. Investigation of reader interactions with modified histones in the context of the nucleosome has demonstrated that several reader proteins interact with nucleosomal DNA cooperatively with the modified histone tail, suggesting that singular interaction of a reader with a modified histone is not sufficient to drive physiological interactions on chromatin (Musselman et al., 2014;Morrison et al., 2017). Readers of histone ubiquitylation exhibit similar cooperativity, engaging ubiquitin together with the nucleosome while exhibiting little affinity for free ubiquitin. The most common cooperative interaction of this type involves ubiquitylation of histone H2A or H2B and the H2A/H2B acidic patch on the nucleosome surface. This has been most clearly defined for readers of H2AK15ub1, which have a UDM consisting of separable but linked motifs recognizing either ubiquitin or the acidic patch (53BP1, RNF169, RAD18) (Panier et al., 2012;Mattiroli and Penengo, 2021). The acidic patch also serves as a critical partner in ubiquitylated histone recognition by a more loosely associated group of readers, each with unique structural determinants for ubiquitin and nucleosome recognition. These include H2Aub1 readers BARD1 and Jarid2, as well as H2Bub1 readers Dot1L and Set1 (Cooper et al., 2016;Hu et al., 2021). The strong association between H2Aub1/H2Bub1 recognition and the acidic patch suggests that these modifications may have more general roles in modulating the function of factors that engage this feature of the nucleosome surface. Expanded investigation of such factors, which continue to be identified through biochemical and proteomic studies, could point to new readers for histone ubiquitylation (Skrajna et al., 2020).
Studies thus far indicate that H3ub1 recognition only involves the H3 tail. The RFTS domain of Dnmt1 binds to free ubiquitin, as well as to the ubiquitylated H3 tail, consistent with the idea that it relies on binding determinants centered on the ubiquitylated site itself (Qin et al., 2015;Ishiyama et al., 2017). Ubiquitin also influences the activity of Clr4 in the context of the isolated H3 tail (Stirpe et al., 2021). It remains to be seen whether further study of these readers in a nucleosome context will capture additional important interactions.
Histones and DNA can participate in ubiquitin recognition
An intriguing feature of some of the complexes with ubiquitylated histones is that the ubiquitin moiety itself contacts histones or nucleosomal DNA. H2AK15-linked ubiquitin contacts the H2B C-terminal helix when bound to the 53BP1 UDR or BARD1 BRCT domains (Wilson et al., 2016). H2B-linked ubiquitin contacts DNA unwrapped from the nucleosome by Chd1, whereas H3K23ub1 interacts with the H3 N-terminal tail when bound to the DNMT1 RFTS domain (Ishiyama et al., 2017;Sundaramoorthy et al., 2018). These interactions result from the specific biochemical context of the relevant complexes, but hint at the possibility that, under certain circumstances, ubiquitylated histones can alter chromatin structure on their own, independently of downstream readers. There is support for this idea from studies examining the effect of H2Bub1 on the biophysical properties of nucleosomes and nucleosome arrays in vitro. H2Bub1 decompacts nucleosome arrays, as assessed by analytical ultracentrifugation analysis; this effect was not observed with Hub1, a ubiquitin-like modifier protein that is not conjugated to histones in cells (Fierz et al., 2011). Decompaction is mediated by an acidic surface on ubiquitin (Glu16 and Glu18) that drives ubiquitin-ubiquitin electrostatic interaction (Debelouchina et al., 2017). Moreover, the intrinsic stability of nucleosomes has been shown to be enhanced by H2AK119ub1 and reduced by H2Bub1, although conflicting data from various studies preclude drawing a definitive conclusion (Fierz et al., 2012;Krajewski et al., 2018;Xiao et al., 2020). Nonetheless, these results demonstrate that direct interaction of ubiquitin with chromatin has potential functional consequences. Going forward, it will be important to evaluate more thoroughly their significance in vivo.
Biochemical outcomes of reader interactions are different for different ubiquitylated histones
H2Aub1 enhances the affinity of its cognate reader proteins for the nucleosome. This has been studied extensively for readers of H2AK15ub1, establishing this modification and its readers as focal points in bringing mediators of the DNA damage response to sites of damage in vivo (Mattiroli and Penengo, 2021). Reader interactions with H2AK119ub1 are less well defined, but the evidence clearly points to enhanced affinity of Jarid2 for the nucleosome as important for the biological effects of H2AK119ub1 (Kalb et al., 2014a;Cooper et al., 2016). From this perspective, H2Aub1 readers align with previously defined readers of H3 and H4 tail modifications such as acetylation and methylation, which serve as binding sites for discrete protein modules. H3ub1 enhances affinity of the interaction of the DNMT1 RFTS domain with the H3 tail, suggesting that H3ub1 readers may be similarly classified (Ishiyama et al., 2017). In contrast, H2Bub1 does not enhance the affinity of interaction of COMPASS, MLL complexes, or Dot1L with the nucleosome, and instead acts allosterically, stabilizing active enzyme conformations. This sets H2Bub1 readers apart in a unique class, not only among readers of ubiquitylated histones but among histone modification readers in general. The notion that histone modifications act as allosteric modulators of reader proteins suggests that approaches aimed at identification and characterization of reader proteins need to encompass nuanced, in-depth analyses that go beyond interaction affinity.
Histone ubiquitylation has a clear connection to human disease, as demonstrated by the gain of H2Aub1 and loss of H2Bub1 in various malignancies (Marsh and Dickson, 2019). As reader interactions with histone modifications emerge as druggable targets in human disease (Arrowsmith and Schapira, 2019), further study of histone ubiquitylation readers and their regulatory mechanisms is likely to result in clinically translatable insights.
Provider–patient communication and hospital ratings: perceived gaps and forward thinking about the effects of COVID-19
Abstract
Objectives: To highlight clinical and operational issues, identify factors that shape patient responses in Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) and test the correlations between composite measures and overall hospital ratings.
Design: Responses to HCAHPS surveys were used in a partial correlation analysis to ascertain those HCAHPS composite measures that most relate to overall hospital ratings. The linear mean scores for the composite measures and individual and global items were analyzed with descriptive analysis and correlation analysis via JMP and SPSS statistical software.
Setting: HCAHPS is a patient satisfaction survey required by the Centers for Medicare and Medicaid Services for hospitals in the USA. The survey is for adult inpatients, excluding psychiatric patients.
Participants: 3382 US hospitals.
Intervention: None.
Main Outcome Measure: Pearson correlation coefficients for the six composite measures and overall hospital rating.
Results: The partial correlations for overall hospital rating and three composite measures are positive and moderately strong for care transition (0.445) and nurse communication (0.369) and weak for doctor communication (0.066).
Conclusions: From a health policy standpoint, it is imperative that hospital administrators stress open and clear communication between providers and patients to avoid problems ranging from misdiagnosis to incorrect treatment. Additional research is needed to determine how the coronavirus disease of 2019 (COVID-19) pandemic influences patients' perceptions of quality and willingness to recommend hospitals at a time when nurses and physicians show symptoms of burnout due to heavy workloads and inadequate personal protective equipment.
Introduction
On any given day in the USA, 600 000 people seek medical care in hospitals. Since 2008, the Centers for Medicare and Medicaid Services (CMS) has surveyed patients via the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey to assess patients' perspectives of care to enable objective and meaningful comparisons of hospitals on topics that are important to consumers. However, most of the US population was unaware that on 20 January 2020, the first case of a novel coronavirus of 2019 (later named COVID-19) would significantly change the way health care would be accessed and delivered for many months to come; and as of this date, the effects are yet to be fully understood [1]. Few studies to date have used HCAHPS to explore the relationship between patients who were isolated due to an infectious disease and the effects on patient satisfaction. Those studies reported that patients who were in isolation provided lower scores for questions related to staff responsiveness, physician communication, respect, receiving assistance and cleanliness compared with patients who were not in isolation [2,3]. Furthermore, nurse understaffing and physician burnout, so prevalent prior to COVID-19 [4], have intensified safety incidents, poor quality of care and reduced patient satisfaction [5].
The purpose of this study was to identify factors that affect patient responses in HCAHPS and the relationship these factors have in determining overall hospital ratings. While our study suggests an avenue to gauge the effect of improved patient-provider relationships on patient-reported hospital ratings prior to the COVID-19 pandemic, it also provides initial insights into how disasters and pandemics influence patient perceptions of hospital delivery of care.
Our theoretical approach is consistent with current conceptual frameworks, useful for understanding complex human behaviors and for highlighting important factors that influence the quality of provider-patient communication [6]. One such framework encompasses four major features: patient and provider needs, with particular focus on those susceptible to change; the communication process; the goals of the communication encounter; and the context for the encounter [7]. The patient's needs may include vital information about prognosis, treatment options, adverse effects and financial hardship, as well as emotional support, respect and autonomy. The provider's needs may be clinical, logistical or resource-based due to shortages. Often, the needs of the patient and provider are congruent, but may also be in conflict. Moreover, needs are often complex and vary considerably among patients.
Since the exchange is iterative, it requires clinicians to elicit the patients' current beliefs as well as communicate meaningful information to successfully achieve the goals of the encounter. Likewise, patients need to understand their role in the shared decision-making process to determine the best treatment plan. The value of the conceptual framework is in keeping the different components in mind to reduce the potential for communication failures, especially since patients often leave without a full understanding of their medical situation [7]. Fittingly, developing interventions aimed at improving providers' communication skills is of utmost importance [6].
In this paper, we highlight clinical and operational issues, identify factors that shape patient responses in HCAHPS surveys and test the relationships between composite measures and overall hospital ratings. We focus on two research questions: (i) What is the degree of correlation associated with each HCAHPS composite measure and the overall hospital rating? (ii) What are the partial correlations between doctor and nurse communication and other patient communication composite measures?
The paper begins with a description of the HCAHPS surveys and 'Hospital Compare', which depicts a star quality rating system that informs patient choices. We then discuss the significance of hospital ratings and factors related to provider-patient communication that affect the overall ratings of hospitals and patients' willingness to recommend hospitals. Next, we examine the degree of correlation associated with HCAHPS composite measures and hospital ratings, provide the results of our analysis and discuss the relevance of our study to COVID-19, which challenges hospital resources and the communication between healthcare providers and patients. Conclusions and directions for future research are also included.
The HCAHPS survey
Daily, >30 000 patients are surveyed by CMS about their recent hospital experience and >8400 patients complete it [8]. The goal of the survey is to promote consumer choice, public accountability and greater transparency in health care. The basic sampling procedure for HCAHPS is the drawing of a random sample of eligible discharges on a monthly basis. Data are collected from patients throughout each month of the 12-month reporting period and then aggregated quarterly to create a rolling four-quarter data file for each hospital. The most recent four quarters of data are used in public reporting. To ensure comparability, hospitals may not switch type of sampling, mode of survey administration or survey vendor within a calendar quarter.
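For readers who want to mirror this sampling and reporting cadence on their own data, the sketch below is a rough illustration in Python/pandas; the input file, column names and per-month sample size are hypothetical placeholders, and the actual CMS protocol has additional eligibility, mode and vendor rules that are not reproduced here.

```python
# Rough illustration of the sampling and reporting cadence described above:
# a random sample of eligible discharges is drawn each month, and public
# reporting aggregates a rolling four-quarter window. Column names
# ('hospital', 'discharge_month', 'eligible') and the sample size are
# hypothetical placeholders.
import pandas as pd

discharges = pd.read_csv("discharges.csv", parse_dates=["discharge_month"])
eligible = discharges[discharges["eligible"]]

monthly_sample = (
    eligible.groupby(["hospital", pd.Grouper(key="discharge_month", freq="M")],
                     group_keys=False)
    .apply(lambda g: g.sample(n=min(len(g), 300), random_state=0))
)

# Keep the most recent four quarters (12 months) for the public-reporting file
latest = monthly_sample["discharge_month"].max()
rolling = monthly_sample[monthly_sample["discharge_month"] > latest - pd.DateOffset(months=12)]
print(rolling.groupby("hospital").size().head())
```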
The HCAHPS survey contains 29 questions with 19 core questions about critical aspects of patients' hospital experiences (communication with nurses and doctors, the responsiveness of hospital staff, the cleanliness and quietness of the hospital environment, communication about medicines, discharge information, overall rating of hospital and willingness to recommend). The survey also includes three items to direct discharged patients to relevant questions, five items to adjust for the mix of patients across hospitals and two items that support congressionally mandated reports. The 'hospital experience' questions offer the patient Likert-scale response choices. There are four different modes of survey administration: mail only, telephone only, mail with telephone follow-up and interactive voice response. To be eligible to participate in the survey, patients must be over the age of 18, have had at least one overnight inpatient hospital stay, have a non-psychiatric principal diagnosis at discharge and be alive at discharge. Patients selected to participate receive the survey between 48 h and 6 weeks after their discharge.
The survey response rate and the number of completed surveys are publicly reported on the 'Hospital Compare' website. The site allows consumers to select multiple hospitals and directly compare performance measures spanning seven different performance areas with predetermined weights: mortality (22%), patient safety (22%), readmission rates (22%), patient experience (22%), effectiveness of care (4%), timeliness of care (4%) and efficient use of medical imaging (4%). Some hospitals submit more data points than others, although only hospitals that have at least three measures within at least three measure groups or categories, including one outcome group (mortality, safety or readmission), are eligible for an overall hospital rating. Significant associations were identified between the overall hospital rating and HCAHPS measures, all with P values <0.0001 and Spearman correlation coefficients ranging from weak to moderate [9].
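To illustrate how these predetermined weights combine the seven performance areas, the sketch below computes a weighted summary score for a single hospital; the group scores and the 0-100 scale are hypothetical placeholders, and the real CMS star-rating methodology standardizes individual measures and assigns stars by clustering, which is not reproduced here.

```python
# Illustrative weighted summary of the seven 'Hospital Compare' performance
# areas using the published weights. The group scores (0-100 scale) are
# hypothetical placeholders for a single hospital.
weights = {
    "mortality": 0.22,
    "patient_safety": 0.22,
    "readmission": 0.22,
    "patient_experience": 0.22,
    "effectiveness_of_care": 0.04,
    "timeliness_of_care": 0.04,
    "efficient_use_of_medical_imaging": 0.04,
}

hypothetical_scores = {
    "mortality": 78,
    "patient_safety": 82,
    "readmission": 70,
    "patient_experience": 88,
    "effectiveness_of_care": 90,
    "timeliness_of_care": 65,
    "efficient_use_of_medical_imaging": 75,
}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights sum to 100%
summary = sum(weights[g] * hypothetical_scores[g] for g in weights)
print(f"Weighted summary score: {summary:.1f}")
```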
'Hospital Compare' depicts a star quality rating system that aggregates the patient experience. The overall hospital rating ranges from 1 to 5 stars and shows how well each hospital performs, on average, compared with other US hospitals. In the year that coincides with the timeframe for the current study (2017-18), the most common overall hospital rating was 3 stars (Table 1). The number of hospitals with 4- and 5-star ratings declined slightly; this was attributed to issues with the execution of the CMS methodology, with hospitals reporting hard-to-explain shifts in their performance that could not be captured in underlying measure performance. However, CMS has made only very modest changes to the methodology for the first quarter of 2019.
On 29 April 2020, CMS announced that if the COVID-19 outbreak prevents it from validating data or creates systemic data integrity issues for the 2021 star ratings, it will use data from the 2020 star ratings (based on care delivered in 2018) for the 2021 star ratings. The measurement period and data for all other measures, where there was not a health and safety risk from the COVID-19 outbreak in collecting the data, will not change from what was finalized in the April 2018 Budget Act. CMS will treat newer contracts (where 2021 would be the first year that they would receive a star rating) as 'new' for an additional year since it would not have enough data to assign a rating [10].
Significance of hospital ratings
The star ratings drive systematic improvements in care and safety as hospitals strive to sustain high ratings to leverage competition, lower costs, and improve care quality. Consumers and patient advocates point to 'Hospital Compare' and the most recent star ratings as important resources they rely on to make informed choices. Many hospitals also rely on these ratings to identify areas for improvement. In fact, the percentage of consumers using hospital websites has increased from 22% in 2012 to 32% in 2016, with 41.5% of consumers who have visited a hospital website indicating that patient ratings and reviews of doctors are the most important information they sought [11].
Cross-domain analyses have shown that hospitals in the top HCAHPS quartile with better patient experience also have better records in safety, technical quality, length of stay and readmission rates. Further, the data revealed a compounding effect of improvements in both experience and engagement on key global HCAHPS measures. Health systems with higher overall patient experience performance on the HCAHPS 'likelihood to recommend' and 'overall rating' showed higher net margins, had lower spending in the first 30 days post-discharge and received higher reimbursement per beneficiary during the episode of care than those in the bottom quartile of patient experience performance [12].
Research shows that while consumers tend to select hospitals with high clinical quality scores, satisfaction with a prior hospital admission has a larger impact on future hospital choice and the willingness to share experience with others [13]. However, gaps between observed and best possible 'Hospital Compare' scores in US hospitals appear to indicate that hospitals are not performing at their best possible level given their resources, or are underperforming due to organizational-level factors that affect providers' time, commitment and incentives, which in turn affect patients' perceived satisfaction [14].
Provider-patient communication and perceived gap
The success of doctor-patient communication and its relationship to overall ratings of a hospital have been identified as critical drivers of patient satisfaction [15]. When patient satisfaction is high, it triggers multiple benefits. These benefits may include an increase in compliance with treatment and care directives and an enhanced tendency to follow up on instructions from doctors [16]. Other benefits include a decrease in the inclination to initiate medical malpractice lawsuits against healthcare providers. However, gaps in communication may also occur because of insufficient interactions. In a study of 2756 hospitals, no patients reported that physicians 'sometimes or never' communicated well in the best-performing hospitals, whereas 21% of patients in the worst-performing hospitals reported that physicians 'sometimes or never' communicated well [17].
Often physicians tend to misjudge the success of their communication skills in interpersonal exchanges by considering the communication suitable while their patients think otherwise. Clinicians were found to elicit the patient's agenda in just 36% of encounters, and when they did, they interrupted the patient's discourse in 67% of the encounters [18]. Researchers observed that communication skills tend to deteriorate as medical students transition through their education, and over time doctors-in-training tend to shift their focus away from holistic patient care [19]. An earlier study of primary care physicians and surgeons, using audiotapes of informed decision-making, found that exchanges about alternatives occurred in 5.5-29.5% of the interactions, of pros and cons in 2.3-26.3% and of uncertainties associated with the decision in 1.1-16.6%. Moreover, physicians hardly explored whether patients understood the decision (0.9-6.9%). Others reported that 75% of the orthopedic surgeons surveyed in their sample believed that they communicated reasonably well with their patients; however, only 21% of the patients reported satisfactory communication [20].
By recognizing and addressing potential gaps, physicians can develop better relationships with patients including paying close attention to personal attitudes and their effects on patients' perceived fairness of treatment. Patients reporting that their doctors listened to them carefully were also 32% less likely to be readmitted [21]. Understanding the context of patient safety and social environment through effective partnership and physician consultative style is empowering and correlates with positive hospital outcomes. Thus, a critical factor in the effectiveness of healthcare delivery is sustaining patient centeredness through meaningful provider-patient communication [6].
Methods
We set out to examine the degree of correlation associated with each HCAHPS composite measure and the overall hospital rating. Further, we examine the partial correlations between doctor and nurse communication and other HCAHPS patient communication composite measures.
Publicly available hospital-level HCAHPS data were used to assess the relationship between the composite measures and the individual and global items. The data set contained HCAHPS results for 3522 hospitals for the period from 1 October 2017 to 30 September 2018. Data were excluded from analysis for 140 hospitals that reported discrepancies in the data collection process or if the results were based on a shorter period than required. The linear mean scores for the composite measures and individual and global items were analyzed with descriptive analysis and correlation analysis [22] via JMP and SPSS statistical software.
The publicly available HCAHPS data contain hospital-level information on two global items (overall hospital rating and hospital recommendation), two individual items (cleanliness and quietness) and six composite measures (nurse communication, doctor communication, staff responsiveness, communication about medicines, care transition and discharge information). The composite measures are based on combinations of individual questions where each of the patient ratings is scored. The composite measure represents the mean score for all patients responding for the associated group of questions. Adjustments are then made for survey delivery mode and patient mix. These composite measures are treated as continuous levels of measurement. Table 2 shows the survey questions associated with each of the HCAHPS items. Table 3 shows the Pearson correlation coefficients for the six composite measures and overall hospital rating. All correlations are positive, relatively strong and statistically significant (P < 0.0001). Given the relatively high correlation between the composite measures, partial correlations can quantify the linear relationship between overall hospital rating and a single composite measure, while controlling for the other composite measures. The linearity assumption for significance tests was satisfied for all correlations based on F-tests. Normal Q-Q plots showed no serious departures from Normality. Residual analysis confirmed homoscedasticity and revealed one extreme outlier associated with a financially troubled, privately owned hospital that closed in 2019. Given the large sample size, the presence of this outlier had little effect on the correlations. Table 4 shows that the partial correlations for overall hospital rating and care transition, nurse communication and doctor communications are positive and highly significant. Among these three composite measures, the partial correlations with overall hospital rating are all positive and moderately strong for care transition (0.445) and nurse communication (0.369) and weak for doctor communication (0.066).
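The following sketch illustrates how such zero-order and partial correlations can be computed from hospital-level linear mean scores. The file name and column names are hypothetical placeholders, and the residual-based partial correlation shown here is one standard way to control for the remaining composite measures; it is not the authors' actual code.

```python
import pandas as pd
import numpy as np
from scipy import stats

# Hypothetical column names for the HCAHPS linear mean scores.
COMPOSITES = ["nurse_comm", "doctor_comm", "staff_resp",
              "med_comm", "care_transition", "discharge_info"]
OUTCOME = "overall_rating"

def partial_corr(df, x, y, controls):
    """Partial correlation of x and y controlling for `controls`,
    computed by correlating the residuals of two least-squares fits."""
    Z = np.column_stack([np.ones(len(df))] + [df[c].to_numpy() for c in controls])
    rx = df[x].to_numpy() - Z @ np.linalg.lstsq(Z, df[x].to_numpy(), rcond=None)[0]
    ry = df[y].to_numpy() - Z @ np.linalg.lstsq(Z, df[y].to_numpy(), rcond=None)[0]
    return stats.pearsonr(rx, ry)

df = pd.read_csv("hcahps_hospital_level.csv").dropna(subset=COMPOSITES + [OUTCOME])

# Zero-order Pearson correlations with the overall rating (cf. Table 3).
print(df[COMPOSITES + [OUTCOME]].corr()[OUTCOME])

# Partial correlation of each composite with the overall rating,
# controlling for the other five composites (cf. Table 4).
for c in COMPOSITES:
    others = [o for o in COMPOSITES if o != c]
    r, p = partial_corr(df, c, OUTCOME, others)
    print(f"{c}: partial r = {r:.3f}, p = {p:.2e}")
```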
The care transition questions (see Table 2) are related to communication with doctors, nurses and other clinicians such as physical therapists and social workers. To ascertain the relationships between the care transition measure and doctor communication and care transition and nurse communication, partial correlations were calculated by controlling for the other four composite measures. Table 4 shows that both doctor and nurse communications have highly significant correlations and have moderate, positive correlations with care transition. Similarly, the two questions related to the composite measure of communication about medicine (see Table 2) are related to communication with clinicians. Partial correlations between communication about medicine and nurse communication and communication about medicine and doctor communication were calculated controlling for the other four composite measures. Table 4 shows that both doctor and nurse communications have positive, weak correlations with communication about medicine.
Discussion
Effective communication practices increase patients' willingness to disclose information along with their motivation to adhere to medical treatment plans. This leads to an approximate 50% reduction in diagnostic tests and referrals [23], a shorter length of stay, fewer complications, better recovery and improved emotional health long after discharge. Positive communication is associated with better patient outcomes, safer work environments, decreased preventable errors, decreased transfer delays, lower readmission rates and lower mortality rates [24]. Positive communication also aligns with the findings from new cross-domain analyses indicating that safety, quality and experience of care are highly interrelated with one another and with global measures including financial outcomes [12]. The macro-level analysis in this paper corroborates these research findings and highlights a strong positive correlation between the six composite HCAHPS measures and the overall hospital rating. Both doctor-patient communication and nurse-patient communication are highly correlated with overall hospital ratings, trailing only care transition as the factor with the strongest relationship. These three measures are also correlated with each other, hence the need for partial correlations to control for other composite measures. As one may suspect, given that patients spend more time communicating with their nurses than their doctors, nurse communication has a much stronger relationship than doctor communication with respect to hospital ratings, with a partial correlation that is five times greater for nurse-patient communication than for doctor-patient communication. These correlations may also be explained by patients' inability to recall distinctions in the communications they received while hospitalized and therefore report a more aggregated perception for each of the individual HCAHPS questions. Interestingly, when controlling for other composite measures, both nurse-patient communication and doctor-patient communication were weakly correlated with communication about medicines. A possible explanation for this is that patients rarely discuss prescriptions with their care providers, often opting to speak with their pharmacists [25]. This is particularly true at times of outbreaks such as COVID-19, when community pharmacists, healthcare professionals with a high public availability, are likely to be patients' first option for health information [26].
The COVID-19 pandemic provides a new context from which to analyze this study. A recent report in Medscape found that 44% of physicians reported at least one symptom of burnout in 2020 [27], corroborating earlier studies about increased risk of patient safety incidents [28]. Others reinforced this finding and suggested that physician burnout months before the COVID-19 posed a significant threat to public health [4]. During the COVID-19 pandemic, physician and nurse burnout may have been exacerbated due to shortages in ventilators, personal protective equipment (PPE), face shields and testing kits [29].
Much of the emerging COVID-19 literature has focused on the healthcare providers' needs and the healthcare systems' responses to managing patient care. The rapid adoption of telehealth as an effective response to stay at home orders and self-quarantine measures has led to the transition of many patients to telemedicine solutions [30]. Patient perceptions of quality associated with what had been routine care have probably changed, which may also affect the way future HCAHPS surveys will be filled out and used. Indeed, understanding patients' perceptions during COVID-19 and their effects on patient satisfaction requires a complete re-imagination of how the patient experience can be improved in an age of digital acceleration triggered by the COVID-19 pandemic [31].
Limitations
The HCAHPS is designed to increase patient engagement and responsibility and encourage patients to reflect on their role and contribution to the process of the medical team's work. Public reporting also serves to enhance public accountability in health care by increasing transparency and by offering incentives for hospitals to improve patient communication and quality of care. However, the HCAHPS survey potentially increases healthcare providers' workload through the need to brief patients on the process as well as to provide ongoing guidance on performing self-evaluation. It carries a risk of being perceived as a process that produces inflated grades and is unreliable. Administering it can require additional time and education. Patients may feel ill-equipped to undertake the assessment, and their differing perceptions can result in conflicts. Indeed, healthcare providers must consider not only contextual issues and the explicit exchange with patients but also the substance of the communication and intrinsic characteristics of the patients they treat [6].
Further, conclusions drawn from our analysis were based only on HCAHPS responses without considering hospital-specific characteristics such as the hospitals' operational structure, size, diversity and level of services provided. Additionally, patient-specific characteristics are also left out of HCAHPS such as duration of the medical condition, chronic comorbidities and patient demographics. The analysis considered only HCAHPS responses from one time period. Since patients complete the surveys voluntarily, the sample is non-random and may be subject to self-selection bias. Other findings raised concerns that HCAHPS measures may not meet the standards for reliability and validity with mixed results in terms of the impact of HCAHPS dimensions on overall hospital ratings [32]. Finally, the results of sensitivity analysis reveal that the relationship among the communication-related composite measures could load onto a latent, single-factor, interpersonal care experience that patients evaluate [33].
Conclusions and directions
This study identified several factors that help shape the patient responses in HCAHPS and the relationship these factors have in determining overall hospital quality in the pre-COVID-19 era. The study corroborates the general agreement that higher quality of provider-patient (be it doctor-patient or nurse-patient) communication coincides with higher reported hospital quality measures. Of note is the opposite effect that may occur when patients are in isolation due to a pandemic such as COVID-19, where provider-patient communication may be limited, resulting in lower quality measures [3]. Further research can add to this study by identifying whether the correlations examined in this study can be attributed more specifically to certain types of hospitals and/or patient mix or even methods of engagement, including telemedicine [34]. In addition, examining HCAHPS results over longer time intervals may provide additional insights. In particular, future research could compare the results in this study to HCAHPS survey results taken during the COVID-19 era and the post-COVID-19 era to discern the trend impact of the pandemic on patient experiences.
While pandemic crises overwhelm healthcare resources and existing standards of care [35], patients expect that their quality of care will continue. When staffing shortages and lack of vital medical equipment strain hospitals' resources and bed capacity, hospitals may be blamed for service disruptions, which could skew future HCAHPS results. This might also lower the hospital ratings, decrease the willingness of patients to recommend and potentially affect hospitals' reputation. Hospitals should seek answers to how their delivery systems performed during the COVID-19 pandemic as a means to create quality improvement processes for future disasters and crises. Fittingly, the patient feedback mechanisms need to be recalibrated to reflect potential disruptions in health care. Surveys may include not only the inpatient settings but also virtual and non-clinical settings. Importantly, multiple caregivers could also be included in future HCAHPS surveys with rapid dissemination of results in real time.
Data availability statement
The data used in this article are publicly available but can be shared upon reasonable request to the corresponding author.
Polyaniline Supported Ag-Doped ZnO Nanocomposite: Synthesis, Characterization, and Kinetics Study for Photocatalytic Degradation of Malachite Green
Ag-ZnO/PANI nanocomposite was prepared via the sol-gel technique following in situ oxidative polymerization of polyaniline (PANI). XRD, UV-Vis, and FT-IR spectroscopy were employed to study the crystal size, bandgap energy, and bond structure of the as-synthesized nanocomposites. The mean crystallite size of the nanocomposite determined from XRD was 35.68 nm. Photocatalytic degradation of malachite green (MG) dye using the as-synthesized photocatalysts was studied under visible light irradiation. The highest degradation efficiency was recorded for the Ag-ZnO/PANI nanocomposite (98.58%) compared with the Ag-ZnO nanoparticles (88.23%) in 120 min. The kinetics of photocatalytic degradation of MG follows a pseudo-first-order reaction with a rate constant of 1.16 × 10^-2 min^-1. Moreover, the photocatalytic activity of the Ag-ZnO/PANI nanocomposite was evaluated and compared with Ce-Cd oxide, electrospun P(3HB)-TiO2, and other catalysts in the literature. The optimal conditions for photocatalytic degradation are as follows: the concentration of malachite green (0.2 g/l), pH (8), and the concentration of catalyst load (0.2 g/l) under visible light with an irradiation time of 120 min.
Introduction
Malachite green (MG) is an organic compound that has arisen as a dubious agent in aquaculture, and it is utilized as a dyestuff for coloring materials such as cotton, silk, paper, wool, and leather [1]. It is highly soluble in water and ethanol, forming blue-green solutions. Dye-contaminated wastewaters mostly enter the environment as discharges, and their release into the environment in processing wastewater poses a serious risk to both human health and the ecosystem [2][3][4][5][6][7][8]. MG has strong effects on the immune and reproductive systems and exhibits potential carcinogenic and genotoxic effects [9]. There is a range of methodologies, such as physical (adsorption), biodegradation, chemical, and electrochemical techniques, that have been developed to eliminate these pollutants from wastewater. However, they are nondestructive, since they just transfer organic compounds from water to another phase, thus causing secondary pollution.

Heterogeneous semiconductor photocatalysis is the most extensively studied technique for the degradation and decolorization of numerous wastes in aqueous media under UV-Visible light [10,11]. It removes contaminants rather than merely transferring them to another phase, and it does so without the use of potentially hazardous oxidants [12]. Photocatalysis is initiated by photons from UV light, which excite electrons at the surface of the photocatalyst in the valence band; these electrons move up into the conduction band and leave behind positive holes [13]. The produced electron/hole pairs initiate a complex series of reactions that can bring about the total degradation of organic contaminants, for example, a dye adsorbed on the semiconductor surface [14,15].
ZnO is a significant semiconductor material with a wide bandgap (3.37 eV), a large exciton binding energy (60 meV), effective nonlinear resistance, and good thermal conductivity [19,20]. ZnO has a few limitations, including the quick recombination rate of photogenerated electron-hole pairs, low quantum yield in the photocatalytic reaction in aqueous solution, and photocorrosion, which impede the commercialization of the photocatalytic degradation process.
Even if various modification techniques are reported in the literature, neither metal nor nonmetal doping alone can solve the above problems; there is still a dearth of knowledge on metal-nonmetal codoping. Reports related to Cr-N codoped ZnO, Cr-doped ZnO, or N-doped ZnO are well documented [21,22].
Conducting polymers (CPs) are widely used as adsorbents for the removal of heavy metal ions or dyes from wastewater and have attracted great attention due to their facile synthesis, electrical conductivity, porosity, low fabrication cost, and environmental stability. Among the conductive polymers (CPs), polyaniline (PANI) was picked as one of the promising conductive polymers to tune composites' optical, electrical, and photocatalytic properties [23]. However, it possesses a poor life span owing to its fragile backbone chain.

Therefore, synthesizing PANI/ZnO composite photocatalysts has the benefits of preventing the corrosion dissolution of ZnO during photocatalysis as well as enabling photocatalysis under solar irradiation due to a decreased bandgap [24].

There are earlier reports on the improvement of the electronic properties and catalytic potential of ZnO by introducing noble metals such as silver, gold, platinum, and palladium [25,26]. Among these, silver merits special consideration due to its stability, conductivity, nontoxic nature, and comparatively low cost. The incorporation of Ag-doped ZnO nanoparticles in the polymer matrix upgrades the mechanical properties and conductivity of the polymer composite [27,28]. The doping level is another parameter for amplifying the conducting properties of the inorganic filler. Accordingly, in the present study, Ag-ZnO/PANI composites were synthesized via the chemical oxidative in situ polymerization process of metal nanoparticles with the monomer unit.

To the best of our knowledge, there is no study in the literature that investigates the reaction kinetics of photocatalytic degradation of malachite green using a polyaniline/Ag-ZnO nanocomposite to improve optical and photocatalytic performance. Thus, this research was designed to synthesize the photocatalysts by incorporating Ag and PANI comodified ZnO nanoparticles and to investigate the optical properties and kinetics of the photocatalytic activity under visible light irradiation for the degradation and decolorization of MG dye.
Synthesis of Ag-Doped ZnO Nanoparticle.
Silver-doped zinc oxide (Ag-doped ZnO) nanoparticles were prepared by a sol-gel method [29]. About 50 mmol of Zn(II) acetate dihydrate was dissolved in ethanol (100 ml), and the solution was stirred for 30 min.

Oxalic acid dihydrate (2.51 g) was dissolved in ethanol (40 ml), and the solution was added slowly with constant stirring to the above Zn(II) acetate dihydrate solution. After the addition of oxalic acid, a white sol was formed, and the stirring was continued for three hours. To this, 2 wt% of silver nitrate (AgNO3) was added and the mixture was stirred for a further three hours. The sol was dried on a water bath to form a xerogel. The xerogel was then calcined at 500°C in a muffle furnace at a heating rate of 5°C/min and held at this temperature for 120 min. Then, it was ground using a mortar and pestle. The powder was kept in a desiccator at room temperature.
Characterization of As-Synthesized Photocatalysts.
X-ray diffraction patterns were obtained using a BRUKER D8 (Germany) equipped with Cu Kα radiation (λ = 1.5405 Å) at room temperature in the scan range 2θ between 10° and 90°. The accelerating voltage and the applied current were 40 kV and 30 mA, respectively. The absorbance of the photocatalysts was recorded with a Sanyo UV-Vis spectrophotometer (model SP65, UK). 0.2 g of the photocatalyst was dispersed in 100 ml of deionized water, and the absorbance was measured using a quartz tube over a scanning range of 400-800 nm. Fourier transform infrared (FT-IR) spectroscopy was used in the region between 4000 and 400 cm^-1 to determine the functional groups and surface structure of the samples using a Shimadzu 8400S (Germany). About 5-10 mg of photocatalyst powder was mixed with a drop of paraffin and sandwiched between two KBr plates for the measurements.
Photocatalytic Activity.
Catalytic activities of the synthesized photocatalysts were studied for the degradation of malachite green (MG) under dark and visible light. 0.02 g of MG was dissolved in 500 ml of deionized water (at pH = 8 and T = 25°C) to prepare the MG solution. 0.2 g of the Ag-doped ZnO nanoparticles and of the Ag-ZnO/PANI nanocomposite samples was dispersed in 100 ml of MG solution separately and stirred for 30 min in the dark to establish the adsorption-desorption equilibrium of the dye with the catalyst. NaOH was used to adjust the pH value of the solution. Then, the reaction was carried out for 120 min because no further degradation was observed after that. The photocatalytic degradation was assessed by recording the absorbance values at definite time intervals. Percentage degradation of MG dye was calculated using the following formula: Degradation (%) = ((Co - Ct)/Co) × 100, where Ct is the concentration of dye at time t and Co is the concentration of dye at the initial stage.
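As a tiny worked example of this formula (with hypothetical concentration values, not the measured data):

```python
def degradation_percent(c0, ct):
    """Percentage degradation = (C0 - Ct) / C0 * 100."""
    return (c0 - ct) / c0 * 100.0

# Hypothetical initial and residual dye concentrations (g/L).
print(f"{degradation_percent(0.20, 0.0028):.1f} %")  # ~98.6 % degradation
```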
Initial Concentration of Malachite Green and pH.
The effect of the initial concentration of MG on its degradation was observed by varying the initial dye concentration from 0.1 to 0.5 g/L while keeping the other parameters constant. The effect of pH was also investigated over the pH range from 6 to 10, keeping the other parameters constant (photocatalyst load = 0.2 g, pH = 8).
Catalyst Load.
The effect of catalyst load was observed by varying the amount of Ag-ZnO and PANI-Ag-ZnO nanocomposites (0.1-0.4 g) at a constant dye concentration (0.2 g/L) and a constant pH of 8. The relation between the percent degradation of MG and the irradiation time was studied over a reaction time of 20-120 min, using a fixed concentration of dye (50 mg/L), catalyst load (0.2 g/L), and pH 8.
Kinetic Studies of Photocatalytic Degradation of MG.
The kinetics of the photocatalytic degradation of MG solutions was investigated using the optimized photocatalyst load, initial dye concentration, and pH under ultraviolet and visible irradiation.
X-Ray Diffraction Analysis.
The average crystallite size (d) was calculated from the XRD pattern according to the Scherrer equation [31]: d = kλ/(β cos θ), where k is a constant (about 0.9), λ is the wavelength of the X-rays, β is the full width at half maximum of the diffraction peak, and θ is the Bragg angle. It was observed that PANI has a broad amorphous nature, and its diffraction peak is located at 2θ = 25.42°. The average crystal grain size of the composite material was calculated from the peaks at 2θ = 38.06, 44.23, and 64.36°, which were chosen to determine the average diameter; the average size obtained is 31.52 nm. The change in the intensity or broadness of the nanocomposite peaks is due to the strong interfacial interaction between the nanoparticles and the polymer matrix.
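A quick illustration of the Scherrer calculation is sketched below; the FWHM values are hypothetical placeholders that would normally come from fitting the measured XRD peaks.

```python
import numpy as np

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_angstrom=1.5405, k=0.9):
    """Crystallite size d = k*lambda / (beta*cos(theta)), returned in nm."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)          # FWHM must be in radians
    return k * wavelength_angstrom / (beta * np.cos(theta)) / 10.0  # Angstrom -> nm

# Hypothetical FWHM values (deg) for the peaks at 2theta = 38.06, 44.23 and 64.36 deg.
peaks = [(38.06, 0.26), (44.23, 0.27), (64.36, 0.29)]
sizes = [scherrer_size_nm(tt, fwhm) for tt, fwhm in peaks]
print("crystallite sizes (nm):", np.round(sizes, 1))
print("mean size (nm):", round(float(np.mean(sizes)), 1))
```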
UV-Vis Absorption Spectra.
UV/Vis absorption spectra of the as-synthesized photocatalysts Ag-ZnO, PANI, and Ag-ZnO/PANI are shown in Figure 3. The bandgap of the catalyst directly influences the photocatalytic activity, in that the direct absorption of bandgap photons leads to the generation of electron-hole pairs within the catalysts; subsequently, the charge carriers start to diffuse to the surface of the catalysts. This could be due to the bandgap narrowing resulting from the creation of dopant energy levels below the conduction band [33]. The wavelengths of the absorption edges in the UV-Vis spectra were determined by plotting a vertical line from the apex of the curve, as given in Table 1, and the band-gap energies were calculated from the absorption edge using the relation Eg = hc/λ. The delocalized metal electrons of the Ag 3d state account for the narrowing of the bandgap energy of the Ag-ZnO nanoparticles from 3.2 eV to 2.87 eV. Ag-doped ZnO also showed a red shift compared with the bare ZnO nanoparticles, by transferring electrons from the conduction band of ZnO to the conduction band of the metal 4d states. Additional codoping with PANI further shifts the absorption edge to a longer wavelength of 474 nm and narrows the bandgap of the Ag-ZnO/PANI photocatalyst to 2.61 eV.
However, some holes are also created in the VB of PANI due to electron transfer from VB of PANI to VB of Ag/ZnO, as the VB of PANI (3.25 eV) is at higher energy than the VB of Ag/ZnO (2.87 eV).
The transferred electrons will neutralize some holes in the VB of Ag/ZnO. In this way, a certain fraction of the holes in the VB and electrons in the CB of PANI get separated, reducing the chance of recombination and thereby enhancing the chance of photocatalytic activity.

This is the advantage of using a coupled system [33,34]. Further reduction in the bandgap of the Ag-ZnO/PANI might be due to the synergetic effect of the two dopants Ag and PANI that control the crystal size and enhance the photo harvesting of the nanocomposites.
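A small worked check of the band-gap values quoted above, using the standard relation between the absorption-edge wavelength and the band-gap energy (Eg is roughly 1240 eV·nm divided by the edge wavelength); the edge wavelengths are those reported in the text.

```python
# Band gap (eV) from the absorption-edge wavelength (nm): E_g = hc/lambda ~ 1239.84/lambda.
def band_gap_ev(edge_wavelength_nm):
    return 1239.84 / edge_wavelength_nm

for name, edge_nm in [("Ag-ZnO", 432), ("Ag-ZnO/PANI", 474)]:
    print(f"{name}: edge {edge_nm} nm -> E_g = {band_gap_ev(edge_nm):.2f} eV")
# Gives roughly 2.87 eV for Ag-ZnO and 2.62 eV for Ag-ZnO/PANI, in line with
# the 2.87 eV and 2.61 eV values quoted in the text.
```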
FT-IR Analysis
Ag-doped ZnO shows a peak at wavenumbers from 3400 to 3452 cm^-1. The peak at 3442 cm^-1 is attributable to the O-H stretching vibration of water and of hydroxyl groups on the surface of the photocatalyst. The strong absorption peak centered at 508 cm^-1 is the metallic stretch assigned to the Ag-doped ZnO. The symmetric and asymmetric bending modes of C-O bonds appear at 1636 cm^-1.

There were some bands that originated from the presence of water moisture and carbon dioxide in the air during the process of making the pellet (Figure 4(a)). The FT-IR spectrum of Ag-ZnO/PANI shows a strong peak of Ag-doped ZnO at 609 cm^-1. The absorption band of PANI occurs at 1126 cm^-1. The peaks associated with the N-H, C-H, and C-N stretching vibrations are located at 3442, 2932, and 1470 cm^-1, respectively. The peaks at 1293 and 801 cm^-1 correspond to the C-N in-plane deformation and the =C-H in-plane vibration of PANI (Figure 4(b)). Thus, from the FT-IR spectra, it is confirmed that the nanoparticles are well inserted into the macromolecular chain of PANI, and the aniline monomers were successfully polymerized on the surface of the Ag-doped ZnO nanoparticles.
Photodegradation of Malachite Green under Visible Irradiation.
The photodegradation of malachite green dye was performed for a total of 2 hours under visible light irradiation, as shown in Figure 5. The absorption band intensities of the dye decrease, indicating that the dye has been degraded by the photocatalysts. The Ag-ZnO/PANI nanocomposite showed the highest photoactivity (98.58%) compared with Ag-ZnO (88.23%). The pronounced enhancement of the photocatalytic activity of the Ag-ZnO/PANI nanocomposites may be attributed to their having more than one path to form electron-hole pairs because of the existence of different interfaces, with electron-hole recombination prevented to the maximum extent in such a system. The experiments were also carried out under dark conditions to understand the effect of the light source when the catalyst material was added to the dye. As a result, no significant changes were observed in the absorption spectrum of the dye, whereas remarkable degradation was observed under visible light.

Effect of Operational Parameters on Activities of the Photocatalysts.
The effect of the initial dye concentration was examined by varying it from 0.1 g/L to 0.5 g/L while fixing the other parameters constant (photocatalyst load 1 mg/L, pH = 8) (Table 2). The degradation efficiency of MG increased with an increase in dye concentration up to 0.2 g/L (Figure 6). Increasing the dye concentration further leads to covering of the active sites of the photocatalysts by the dye, and the path length of the photons entering the solution is decreased, so that only a few photons reach the catalyst surface.
Hence, the production of hydroxyl radicals is reduced, and therefore the degradation efficiency is reduced [35]. Up to 0.2 g/L, more dye molecules are adsorbed on the active sites of the photocatalysts. The decrease in degradation above 0.2 g/L may also be due to competition for adsorption between dye molecules and dissolved O2 on the catalyst surface.
Catalyst Load.
The effect of the photocatalyst loading on the decolorization rate of the dye was examined by varying the photocatalyst concentration from 0.1 g/L to 0.4 g/L of the dye solution, as shown in Figure 7, at constant dye concentration and constant pH 8. The degradation of MG initially increases with an increase in photocatalyst load from 0.1 g/L to 0.2 g/L. However, a further increase of the catalyst load from 0.2 g/L to 0.4 g/L results in a decrease in the degradation of the dye. The increase in percent degradation at 0.2 g/L is due to the increase in the number of active sites of the photocatalysts. The decrease can be explained by the excess photocatalyst particles creating a light-screening effect that reduces the surface area exposed to light illumination and hence the photocatalytic efficiency [36].

pH.
pH affects the surface charge properties, the size of the photocatalyst aggregates, and the position of the conduction band. The effect of pH on the photocatalytic degradation of MG was investigated over the pH range from 6 to 10, keeping the other parameters constant. The photocatalyst exhibited a maximum rate of degradation (98.58%) at pH = 8 in 120 min (Figure 8). At alkaline pH, the number of hydroxyl groups on the photocatalyst is increased, which facilitates the adsorption of MG. The probable reason for the difference with pH is that the adsorption of MG onto the catalyst surface depends on its surface area [37].
Effect of Irradiation Time.
The relation between the percent degradation of MG and the irradiation time was studied over a reaction time from 20 to 120 min, using a fixed concentration of dye (50 mg/L), catalyst load (1 mg/L), and pH of 8.
It was observed that at 120 min the dye was completely degraded and became colorless. This is due to the fast adsorption rate before the equilibrium is reached, which may be explained by an increased availability of active binding sites on the photocatalyst surface. At the equilibrium stage, the adsorption is likely an attachment-controlled process due to fewer available sorption sites (Figure 9).
Kinetics of Photocatalytic Degradation of Malachite Green.
The kinetics of the degradation of MG was determined by using different initial concentrations of MG from 0.1 to 0.5 g/L. The photocatalytic activity of the synthesized nanocomposites under visible light can be evaluated by comparing the apparent rate constants [17,38] using the following Langmuir-Hinshelwood equation:
ln(c0/c) = Kapp × t,
where c0 and c are the initial and final absorbance of MG, and Kapp is the apparent rate constant. It can be seen that the photocatalytic activity of the Ag-ZnO/PANI nanocomposite under visible light irradiation is higher than that of the Ag-doped ZnO nanoparticles. Figure 10 shows the relationship between time and the degradation rate (ln(c0/c)) of MG under visible light illumination. The regression correlation coefficient (R2) is found to be 0.9972 for the Ag-doped ZnO nanoparticles and 0.9991 for the Ag-ZnO/PANI nanocomposite. From the plot, the Kapp values for the Ag-ZnO nanoparticles and the Ag/ZnO/PANI nanocomposite are 1.56 × 10^-2 min^-1 and 1.16 × 10^-2 min^-1, respectively. The kinetics is pseudo-first-order with respect to both the Ag-doped ZnO and Ag/ZnO/PANI photocatalysts. The proposed mechanism is presented schematically in Figure 11 according to Pham et al., 2020, and is described as follows [18]. The MG dye is adsorbed onto the photocatalyst, reactive species generated under illumination then attack the adsorbed dye, and these steps ultimately convert the malachite green into nontoxic degradation products (such as water and carbon dioxide).
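A brief sketch of this pseudo-first-order fit: ln(C0/C) is regressed against irradiation time and the slope gives the apparent rate constant Kapp. The concentration-versus-time values below are hypothetical placeholders, not the study's measured absorbance data.

```python
import numpy as np

# Hypothetical C/C0 values at 20-min intervals (would come from measured absorbances).
t_min = np.array([0, 20, 40, 60, 80, 100, 120])
c_over_c0 = np.array([1.00, 0.79, 0.63, 0.50, 0.40, 0.32, 0.25])

y = np.log(1.0 / c_over_c0)            # ln(C0/C)
k_app, intercept = np.polyfit(t_min, y, 1)

# Coefficient of determination for the linear fit.
y_hat = k_app * t_min + intercept
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

print(f"K_app = {k_app:.3e} min^-1, R^2 = {r2:.4f}")   # ~1.2e-02 min^-1 here
```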
The role of PANI is clearly observed in charge-carrier transport and in the reduction of carrier recombination, owing to its linking of the doped ZnO particles and the consequent reduction of the surface resistivity of the entire photocatalyst during the photodegradation of MG. Table 3 provides a short comparison of MG degradation efficiencies reported by different authors using different catalysts. It can be observed that the degradation efficiency of PANI/Ag-ZnO under simulated solar light irradiation by a xenon lamp was much higher than those of Fe3+/H2O2, electrospun P(3HB)-TiO2, Ce-Cd oxide, TiO2, ZnO, and CA/TiO2 bio-nanocomposites, and the degradation rate of MG on the PANI/Ag-ZnO nanocomposite was also faster. Therefore, the nanocomposite is likely an effective and environmentally friendly catalyst for the removal of MG from contaminated (polluted) water.
Figure 11: Diagrammatic illustration of the mechanism of MG dye degradation using the optimized photocatalyst composite obtained by attaching PANI to Ag-doped ZnO nanoparticles.
Conclusions
The PANI/Ag-ZnO nanocomposite was successfully synthesized by in situ chemical oxidative polymerization.

The FT-IR results proved the strong interfacial interaction between the metal oxide nanoparticles and the polar segments of the PANI chain. The XRD results showed that the amorphous nature of PANI was reduced with an increase in the content of metal oxide nanoparticles. The crystallite size of the nanoparticles is 35.68 nm, whereas that of the nanocomposite is 31.52 nm. Ag-ZnO has a bandgap of 2.87 eV (432 nm) and the PANI/Ag-ZnO nanocomposite has a bandgap of 2.61 eV (474 nm). The photocatalytic activity toward MG was investigated, and a photocatalytic oxidation mechanism was proposed. Ag-ZnO and PANI/Ag-ZnO degrade the MG dye by 88.23% and 98.58%, respectively. The higher photocatalytic activity of the Ag-ZnO/PANI nanocomposite over the ZnO photocatalysts was attributed to the dopants, the low rate of recombination of the photogenerated electron-hole pairs, and its lower crystal size and bandgap energy.

Adding PANI to the Ag-doped ZnO leads to extra benefits in both surface texture, with a high surface area, and a simple electron transfer process. The optimal conditions for photocatalytic degradation of malachite green are as follows: a malachite green concentration of 0.2 g/l, a pH of 8, and a catalyst load of 0.2 g/l under visible light illumination for 120 min. The kinetics of photocatalysis over PANI/Ag-ZnO is pseudo-first-order, with a positive slope and a rate constant of 1.16 × 10^-2 min^-1. Our PANI-supported Ag-ZnO is expected to be a promising candidate for environmental applications. Comparable performance for the photodegradation of various dyes, together with recycling ability at a similar level of performance, is anticipated.
Data Availability
No data were used to support this study.
Divergences among ESG rating systems: Evidence from financial indexes
This paper specifically underscores the disparities among various ESG rating systems in China, highlighting their varied interpretations and emphasis on corporate financial factors. Analyzing data on Chinese listed firms from 2009-2022, we observe that while company size and leverage ratio uniformly correlate with ESG scores across rating agencies such as Bloomberg, Huazheng, Wind, and Hexun, the influence of factors like return on assets, cash flow, company age, and Tobin's Q is markedly inconsistent among these agencies. For instance, while operational cash flow and company age are positively associated with ESG ratings from Bloomberg, Huazheng, and Wind, they hold an inverse relationship with Hexun's ratings. This divergence underscores the unique data collection, weighting, and evaluation methodologies employed by each rating system. The study emphasizes the criticality of comprehending the nuances of each rating agency's approach when interpreting ESG scores and crafting ESG strategies. Moreover, it advocates for integrating insights from multiple rating systems to cater to the diverse expectations of stakeholders.
Introduction
In recent years, the significance of Environmental, Social, and Governance (ESG) factors in gauging corporate performance has surged globally (Zhai et al., 2022). Societal focus on environmental conservation, social responsibility, and robust governance has prompted corporations to look beyond mere financial metrics as comprehensive indicators of their standing among stakeholders (Li and Pang, 2023). However, a critical observation arises from the varied ESG scores given by different rating agencies. Notably, the annual changes in scores from Bloomberg, Huazheng, Wind, and Hexun, depicted in Figure 1, underscore this variability. In Figure 1, it is evident that there are significant differences in the ESG scores provided by various ESG rating
Data Sources and Sample Processing
This study leverages data from Chinese publicly listed companies on the Shanghai and Shenzhen A-share markets from 2009 to 2022. Huazheng ESG data and Wind ESG data are sourced from the Wind Financial Terminal, Bloomberg ESG data originates from Bloomberg's Environmental, Social, and Corporate Governance database, and Hexun ESG data is drawn from the Hexun website. All other data is extracted from the CSMAR database. To ensure data accuracy, the selection adheres to the following criteria: (1) Exclusion of companies from the financial and real estate sectors. (2) Omission of firms that have been listed for less than a year and have either delisted or been suspended, the exclusion of companies listed on the Beijing Exchange, and the removal of ST categorized firms. (3) Removal of companies with negative revenues and total assets. (4) Exclusion of observations missing independent and dependent variables. (5) For variables with outliers, a trimming process is applied, narrowing down to the top and bottom 1% of values.
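A minimal sketch of how these sample screens and the 1% tail treatment could be applied in practice is given below; the file name and column names are hypothetical, and the tail treatment is implemented here as winsorization (clipping), which is one common reading of the trimming described above.

```python
import pandas as pd

# Hypothetical firm-year data with the fields needed for the screens.
df = pd.read_csv("raw_firm_years.csv")

# (1)-(3) Sample screens described in the text (column names are assumed).
df = df[~df["industry"].isin(["Finance", "Real Estate"])]
df = df[(df["listing_age_years"] >= 1) & (~df["is_st"]) & (~df["beijing_exchange"])]
df = df[(df["revenue"] > 0) & (df["total_assets"] > 0)]

# (4) Drop observations missing the dependent or independent variables.
df = df.dropna(subset=["huazheng_esg", "Size", "Lev", "ROA", "Growth",
                       "PPE", "CFO", "Age", "Top1", "TobinQ", "STAFF"])

# (5) Treat outliers in the continuous variables at the 1st and 99th percentiles.
for col in ["Size", "Lev", "ROA", "Growth", "PPE", "CFO", "TobinQ"]:
    lo, hi = df[col].quantile([0.01, 0.99])
    df[col] = df[col].clip(lower=lo, upper=hi)
```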
Model Specification and Variable Definition
To investigate the influence of corporate financial variables on ESG scores assigned by various agencies, the following model is established:

ESG_{i,t+1} = α0 + α1 × X_{i,t} + Individual + Year + ε_{i,t},

where the subscripts i and t represent the sample entity and year, respectively. (1) Dependent Variable (ESG_{i,t+1}). ESG_{i,t+1} denotes the ESG score for company i in year t+1. This encompasses scores from: Bloomberg ESG, Huazheng ESG, Wind ESG, and Hexun ESG. (2) Independent Variable (X_{i,t}). X_{i,t} includes a series of financial factors that might influence a company's ESG rating. Specifically, for company i in year t, these comprise: Size (Size), Leverage ratio (Lev), Return on Assets (ROA), Sales Growth Rate (Growth), Ratio of Long-Term Assets (PPE), Net Operating Cash Flow Rate (CFO), Company Age (Age), Largest Shareholder's Ownership Ratio (Top1), Tobin's Q (TobinQ), and Staff Scale (STAFF). (3) Other Control Variables. This study accounts for individual (Individual) and yearly (Year) fixed effects, with ε representing the random error term. Table 1 delineates the primary variables' definitions and calculations.
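A hedged sketch of how this two-way fixed-effects specification could be estimated follows; the data file, column names, and use of the statsmodels formula API with firm and year dummies are illustrative assumptions, not the authors' actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel containing one ESG score and the financial variables.
panel = pd.read_csv("esg_panel.csv").sort_values(["firm_id", "year"])

# Lead the dependent variable by one year within each firm: ESG_{i,t+1}.
panel["esg_lead"] = panel.groupby("firm_id")["huazheng_esg"].shift(-1)

cols = ["esg_lead", "Size", "Lev", "ROA", "Growth", "PPE", "CFO",
        "Age", "Top1", "TobinQ", "STAFF", "firm_id", "year"]
df = panel[cols].dropna()

# Firm and year fixed effects implemented as dummy variables; with thousands of
# firms a dedicated panel estimator would be faster, but this shows the idea.
formula = ("esg_lead ~ Size + Lev + ROA + Growth + PPE + CFO + Age + Top1 "
           "+ TobinQ + STAFF + C(firm_id) + C(year)")
result = smf.ols(formula, data=df).fit()
print(result.summary())
```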
Empirical Findings and Discussion
Table 2 presents the benchmark regression results detailing the effects of corporate financial variables on the ESG scores from various institutions. Firstly, examining the correlation between institutional ESG scores and financial metrics, we find that six financial variables are significantly correlated with Bloomberg's ESG scores; likewise, six with Huazheng's, four with Wind's, and eight with Hexun's. Notably, Hexun's ESG scores show the most significant correlations with financial variables, whereas Wind's scores exhibit the least. Furthermore, when observing the correlation from the perspective of financial metrics, the coefficients for both company size and leverage ratio remain consistently significant across the ESG scores from all institutions. Return on assets, operational net cash rate, company age, and Tobin's Q all significantly correlate with three institutional scores. However, it is pivotal to acknowledge the divergent signs in coefficients for operational net cash rate and company age: Huazheng's and Wind's scores negatively correlate with operational net cash rate, while Bloomberg's and Hexun's show a positive correlation. Meanwhile, company age positively correlates with scores from Bloomberg, Huazheng, and Wind, but negatively with Hexun's. The company's employee size, interestingly, does not show a significant correlation with any institutional ESG score. Lastly, in terms of R2, Bloomberg's ESG score has the strongest explanatory power at 69.57%, while Wind's is the weakest at merely 4.29%.
In summary, corporate financial variables undeniably influence the ESG scores across different institutions, affirming that such financial metrics play a pivotal role in determining ESG ratings. Both company size and leverage ratio consistently show significant correlation with all institutional ESG scores, reinforcing their undeniable impact on ESG ratings. Discrepancies in coefficient signs for operational net cash rate and company age among different institutions' ESG scores suggest divergent views on their correlation with ESG ratings. The vast variation in explanatory power among institutions further indicates significant disparities in their ESG scoring methods. This study emphasizes the impact of company size and leverage ratio on ESG ratings. Current literature on the influence of company size on ESG ratings is abundant and aligns with our proposition that company size correlates with varying institutional ESG scores. Stakeholder theory posits that firms maintain their "license to operate" by disclosing information to stakeholders (Gangi & D'Angelo, 2016). In this context, larger companies face heightened public scrutiny (Udayasankar, 2008), leveraging ESG reports as a testament to their broader commitment. Numerous factors dictate the influence of company size on ESG ratings, ranging from disclosure quantity, where larger entities disclose more information (Adams et al., 1998), to the tools they employ for ethical and sustainable behavior analysis (Graafland et al., 2003). Smaller firms grapple with higher competitive and cost pressures, making sustainable data provision costlier relative to their larger counterparts. Larger firms, with their abundant human and financial resources, possess greater knowledge of sustainability management tools, such as environmental management systems or sustainability balanced scorecards (Hörisch et al., 2015). They also tend to have more formalized reporting structures, with smaller firms often resorting to informal communication related to CSR activities (Hörisch et al., 2015). As company size increases, the production of intricate sustainability reports, aligned
Figure 1. ESG Ratings by Different Institutions from 2010 to 2021.
agencies. For example, from 2014 to 2016, Bloomberg ESG Score (BESG) and Wind ESG Score showed a continuous upward trend, while Huazheng ESG Score (HZESG) consistently declined. Similarly, from 2018 to 2020, Bloomberg ESG Score and Wind ESG Score continued to rise, but Hexun ESG Score (HXESG) exhibited a continuous decline. These distinct trends in ESG scores from different agencies indicate that these organizations employ different criteria or methodologies when assessing companies' environmental, social, and governance performance. Therefore, this article aims to briefly analyze the relationship between fluctuations in ESG scores assigned by different systems and financial variables amidst the evolving business landscape.
Table 1. Primary Variables and Definitions.

HXESG, Hexun ESG Score: ESG score provided by Hexun for the respective year.
Size, Company Size: Natural logarithm of the book value of assets at the end of the year.
Lev, Company Leverage Ratio: Ratio of total liabilities to total assets at the end of the year.
ROA, Company Return on Assets (ROA): After-tax net profit at year-end divided by total assets.
Table 2. Regression of the Impact of Corporate Financial Variables on ESG Ratings Across Different Institutions.
Transplantation of mesenchymal stem cells ameliorates secondary osteoporosis through interleukin-17-impaired functions of recipient bone marrow mesenchymal stem cells in MRL/lpr mice
Introduction Secondary osteoporosis is common in systemic lupus erythematosus and leads to a reduction in quality of life due to fragility fractures, even in patients with improvement of the primary disorder. Systemic transplantation of mesenchymal stem cells could ameliorate bone loss and autoimmune disorders in a MRL/lpr mouse systemic lupus erythematosus model, but the detailed therapeutic mechanism of bone regeneration is not fully understood. In this study, we transplanted human bone marrow mesenchymal stem cells (BMMSCs) and stem cells from exfoliated deciduous teeth (SHED) into MRL/lpr mice and explored their therapeutic mechanisms in secondary osteoporotic disorders of the systemic lupus erythematosus model mice. Methods The effects of systemic human mesenchymal stem cell transplantation on bone loss of MRL/lpr mice were analyzed in vivo and ex vivo. After systemic human mesenchymal stem cell transplantation, recipient BMMSC functions of MRL/lpr mice were assessed for aspects of stemness, osteogenesis and osteoclastogenesis, and a series of co-culture experiments under osteogenic or osteoclastogenic inductions were performed to examine the efficacy of interleukin (IL)-17-impaired recipient BMMSCs in the bone marrow of MRL/lpr mice. Results Systemic transplantation of human BMMSCs and SHED recovered the reduction in bone density and structure in MRL/lpr mice. To explore the mechanism, we found that impaired recipient BMMSCs mediated the negative bone metabolic turnover by enhanced osteoclastogenesis and suppressed osteoblastogenesis in secondary osteoporosis of MRL/lpr mice. Moreover, IL-17-dependent hyperimmune conditions in the recipient bone marrow of MRL/lpr mice damaged recipient BMMSCs to suppress osteoblast capacity and accelerate osteoclast induction. To overcome the abnormal bone metabolism, systemic transplantation of human BMMSCs and SHED into MRL/lpr mice improved the functionally impaired recipient BMMSCs through IL-17 suppression in the recipient bone marrow and then maintained a regular positive bone metabolism via the balance of osteoblasts and osteoclasts. Conclusions These findings indicate that IL-17 and recipient BMMSCs might be a therapeutic target for secondary osteoporosis in systemic lupus erythematosus. Electronic supplementary material The online version of this article (doi:10.1186/s13287-015-0091-4) contains supplementary material, which is available to authorized users.
Introduction
Osteoporosis is defined as a reduction in bone strength and is the most common bone disease [1]. The bone loss is primarily related to age and/or menopause and secondarily affected by underlying risk factors such as nutritional deficiencies, diseases, or drugs [2]. Systemic lupus erythematosus (SLE) is a refractory and chronic multiorgan autoimmune disease. Because recent medical advances have successfully increased the lifespan of patients with SLE, many clinical researchers have focused on the organ damage associated with the systemic chronic inflammation and/or long-term medications relating to quality of life [3]. Secondary osteoporosis frequently occurs in SLE patients, which causes fragility fractures [4]. Currently, there are no safe or efficient treatments for SLE-associated osteoporosis.
Mesenchymal stem cells (MSCs) are a typical type of adult stem cell with the capabilities of self-renewal and multilineage differentiation [5]. Recent studies show that MSCs have immunomodulatory effects on immune cells [6,7], and MSC-based cell therapy has been greatly focused on the treatment of various immune diseases such as acute graft-versus-host disease [8] and inflammatory bowel disease [9]. Previous allogeneic transplantation of human bone marrow MSCs (hBMMSCs) and human umbilical cord-derived MSCs (hUCMSCs) has shown successful therapeutic efficacy in refractory SLE patients [10][11][12]. However, it is unclear whether MSC transplantation is an effective treatment for skeletal disorders in SLE patients.
MRL/lpr mice are a well-known model of human SLE-like disorders with clinical manifestations including a short lifespan, abundant autoantibodies, glomerulonephritis, and a breakdown of self-tolerance [13]. Furthermore, MRL/lpr mice exhibit a severe reduction of the trabecular bone, which is associated with excessive osteoclastic bone resorption and limited osteoblastic bone formation [10]. Recent studies show that systemic transplantation of human MSCs, including hBMMSCs, hUCMSCs, stem cells from human exfoliated deciduous teeth (SHED), and human supernumerary tooth-derived stem cells, improves primary autoimmune disorders in MRL/lpr mice, such as elevated autoimmune antibodies, renal dysfunction, and abnormal immunity [14][15][16][17][18]. In addition, hBMMSC and SHED transplantation markedly recovers the bone loss in MRL/lpr mice [16,17]. These results indicate that MSC transplantation might be a therapeutic approach for SLE patients who suffer from secondary osteoporosis. However, little is known about the human MSC-mediated therapeutic mechanism in the skeletal disorder of MRL/lpr mice.
Osteoporosis is characterized by a disruption of the balance between the formation and resorption of bone, which is associated with abnormal development of osteoclasts and osteoblasts. Increasing evidence has shown that BMMSCs from SLE patients and SLE model MRL/lpr mice exhibit a reduction in their bone-forming capacity both in vitro and in vivo [10,19]. Therefore, the osteogenic deficiency of recipient BMMSCs might explain the origin of osteoporosis in SLE. Accordingly, the impaired BMMSCs might be a therapeutic target for osteoporosis. However, little is known about the processes through which recipient BMMSCs are damaged functionally or the underlying mechanism of human MSC transplantation in restoration of the reduced bone formation via recipient BMMSCs in the bone marrow under SLE conditions.
In this study, we used MRL/lpr mice to examine the therapeutic efficacy and mechanisms of systemically transplanted hBMMSCs and SHED in the secondary osteoporotic disorders of SLE. Moreover, we focused on the pathological and clinical contributions of recipient BMMSCs to the dysregulation of bone metabolism through osteoblasts and osteoclasts in the inflammatory bone disorder of SLE.
Human subjects
Human exfoliated deciduous teeth were obtained as clinically discarded biological samples from five patients (5-7 years old) at the Department of Pediatric Dentistry of Kyushu University Hospital under the approval of the Kyushu University Institutional Review Board for Human Genome/Gene Research (protocol number: 393-01). Written informed consent was obtained from all parents on behalf of the participants.
Mice
C57BL/6J-lpr/lpr mice (female, 8 weeks old), and pregnant and young adult C57BL/6J mice (female, 8 weeks old) were purchased from Japan SLC (Shizuoka, Japan) and CLEA Japan (Tokyo, Japan), respectively. All animal experiments were approved by the Institutional Animal Care and Use Committee of Kyushu University (protocol number: A21-044-1).
To identify the isolated cells as MSCs, passage 3 (P3) cells (1×10 5 /100 μl) were stained with specific antibodies against stem cell markers including CD11b, CD14, CD35, CD45, CD73, CD90, CD105 and CD146 (1 μg/ml each; eBioscience, San Diego, CA, USA) and then analyzed using a flow cytometer (FACSVerse, BD Biosciences). The percentages of positive cells were determined by comparison with the corresponding control cells stained with an isotype-matched antibody, in which a false-positive rate of less than 1 % was acceptable. The isolated cells were positive for CD73, CD90, CD105, and CD146, and negative for CD11b, CD14, CD35, and CD45 (data not shown). Furthermore, P3 cells were cultured under osteogenic, chondrogenic, or adipogenic conditions [17,18], and showed differentiation capacities for osteoblasts (odontoblasts in the case of SHED), chondrocytes, and adipocytes (data not shown). These findings showed that our isolated cells were MSCs based on the standard MSC criteria [22].
Systemic MSC transplantation into MRL/lpr mice

P3 hBMMSCs and SHED were collected and washed with PBS three times. The donor cells were diluted in PBS, and intravenously infused at 1×10^5 per 10 g body weight into 16-week-old MRL/lpr mice via the right cervical vein according to a previously published method [16]. The mice were analyzed at 20 weeks of age. Age-matched MRL/lpr mice that received PBS were used as controls.
Histological bone analysis
Tibias were fixed with 4 % paraformaldehyde in PBS and decalcified with 10 % ethylenediaminetetraacetic acid. Paraffin sections were prepared at a thickness of 6 μm and stained for tartrate-resistant acid phosphatase (TRAP) [21]. The number of TRAP-positive cells per total bone area in the bone metaphysis was analyzed in five representative images using ImageJ software (National Institutes of Health, Bethesda, MD, USA).
Isolation and culture of mouse BMMSCs
Mouse BMMSCs were isolated based on the CFU-F method [23]. BMCs were seeded at 1-2×10^7 cells per 100-mm culture dish. After 3 h, the cells were washed with PBS twice to eliminate non-adherent cells, and the attached cells were cultured for 14-16 days. Attached colonies consisting of spindle-shaped cells were observed under a microscope. The colony-forming attached cells were passaged once. The cells were cultured in αMEM supplemented with 20 % FBS, 2 mM L-glutamine, 55 μM 2-mercaptoethanol, 100 U/ml penicillin, and 100 μg/ml streptomycin. Based on the MSC criteria [22], the colony-forming cells were characterized as described previously [24]: 1) flow cytometry demonstrated immunophenotypes of CD73, CD105, CD146, Sca-1, and SSEA-4, and negativity for CD14, CD34, and CD45; 2) mouse BMMSCs were evaluated for differentiation into osteoblasts, chondrocytes, and adipocytes under the corresponding specific culture conditions.
Western blotting
All the samples were lysed in M-PER mammalian protein extraction reagent (Thermo, Rockford, IL, USA) containing proteinase inhibitor cocktail (Nacalai Tesque). They were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred to Immobilon-P membranes (Millipore, Billerica, MA, USA). The membranes were blocked with 5 % skimmed milk in Tris-buffered saline (150 mM NaCl and 20 mM Tris-HCl, pH 7.2) for 1 h at room temperature and then incubated with anti-mouse IL-17 antibody overnight at 4°C. They were then treated with horseradish peroxidase-conjugated donkey anti-rabbit or anti-mouse IgG antibody (1:1000; Santa Cruz Biotechnology) for 1 h at room temperature. The bound antibodies were visualized using SuperSignal West Pico (Thermo).
Statistical analysis
Data were analyzed by the one-way analysis of variance F-test. P-values of less than 0.05 were considered to be significant.
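As a hedged illustration of the stated analysis, the sketch below runs a one-way ANOVA F-test on three hypothetical groups (for example, non-, hBMMSC-, and SHED-transplanted mice); the numbers are placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical measurements (e.g., a bone-volume readout) for three groups of mice.
control      = [12.1, 11.4, 10.9, 12.7, 11.8]
hbmmsc_group = [16.3, 15.9, 17.1, 16.8, 15.5]
shed_group   = [15.8, 16.5, 17.0, 16.1, 15.9]

f_stat, p_value = stats.f_oneway(control, hbmmsc_group, shed_group)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
# P-values below 0.05 would be considered significant, as stated in the text.
```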
Supplementary materials and methods
Supplementary materials and methods are provided as Additional file 1.
Systemic human MSC transplantation ameliorates SLE disorders in MRL/lpr mice
According to a previous method [16], hBMMSCs and SHED (1×10^5 per 10 g body weight) were systemically transplanted through the right cervical vein into MRL/lpr mice at 16 weeks of age (Fig. S1a in Additional file 2), when they exhibited severe autoimmune disorders including abnormal autoantibody increments and severe renal nephritis (Fig. S1b and c in Additional file 2) [10]. The systemic transplantation of human MSCs was followed by evaluation of their immunotherapeutic efficacy against the SLE-like disorders, such as hyper-autoimmune antibody production and renal dysfunction, in MRL/lpr mice at 4 weeks post-transplantation (Fig. S1b and c in Additional file 2), as described previously [16,17].
Systemic human MSC transplantation improves secondary bone loss in MRL/lpr mice
Severe osteoporosis with progressive trabecular bone breakdown occurs secondarily in SLE model MRL/lpr mice at 16 weeks of age (Fig. 1) [10] and in SLE patients [4]. To investigate the effects of transplanted human MSCs on this secondary bone loss, we examined the bone phenotype of hBMMSC- and SHED-transplanted MRL/lpr mice (Fig. 1). These findings suggested that human MSC transplantation recovered the bone reduction in MRL/lpr mice via accelerated bone formation and suppressed bone resorption, but they did not elucidate the detailed therapeutic mechanisms at the cellular and molecular levels.
Systemic human MSC transplantation recovers dysregulation of osteoblast and osteoclast development via recipient BMMSCs in MRL/lpr mice
Dysregulation of osteoblast and osteoclast development in bone marrow leads to skeletal dysfunction. BMMSCs play a crucial role in the development of osteoblasts and osteoclasts in bone marrow [25–27]. Recent studies showed that recipient BMMSCs in SLE mice and patients have impaired stemness and osteogenic capacity [10,19]. Therefore, we hypothesized that recipient BMMSCs might participate in the osteoporosis of SLE and may be a therapeutic target. We isolated recipient BMMSCs from non-, hBMMSC-, and SHED-transplanted MRL/lpr mice, and designated them as MSC-MRL/lpr, MSC-hBMMSC, and MSC-SHED, respectively. We also isolated recipient BMMSCs from pre-transplant MRL/lpr mice at 16 weeks of age; these cells showed impaired osteogenic capacity relative to BMMSCs from wild-type C57BL/6 mice (data not shown), as shown previously [10]. We examined the immunoregulatory effect of hBMMSCs and SHED on IL-17 production using a co-culture system of human MSCs with human T cells in the presence of transforming growth factor-β1 and IL-6. This co-culture system demonstrated that hBMMSCs and SHED suppressed both Th17 cell differentiation and IL-17 secretion, assessed by flow cytometric analysis and ELISA, respectively (Fig. S5a and b in Additional file 2). To confirm the in vivo immunological effects of hBMMSCs and SHED on IL-17 secretion, we analyzed peripheral levels of Th17 cells and IL-17 in MRL/lpr mice. Immunological analyses showed that hBMMSCs and SHED suppressed systemic Th17 cells and IL-17 expression in MRL/lpr mice, compared with non-transplanted as well as pre-transplant MRL/lpr mice (Fig. S5c and d in Additional file 2). These findings indicated that hBMMSCs and SHED exerted suppressive regulation of Th17 cell differentiation in vitro and in vivo, as reported previously [10,16].
Carboxyfluorescein diacetate succinimidyl ester (CFSE)-labeled hBMMSCs and SHED were infused intravenously into MRL/lpr mice. These cells were subsequently found in the recipient bone marrow space at days 1 and 7 after infusion, although the number of positive cells was lower at day 7 than at day 1 (Fig. S6a in Additional file 2). IL-17 is significantly increased in the bone marrow of MRL/lpr mice [10]. We therefore analyzed IL-17 levels in bone marrow tissues of hBMMSC- and SHED-transplanted MRL/lpr mice in comparison with non-transplanted MRL/lpr mice. In immunofluorescence analysis, hBMMSC and SHED transplantation resulted in a marked decrease in IL-17-positive cells in the recipient bone marrow (Fig. 2e). BMC-hBMMSC and BMC-SHED also exhibited lower production of IL-17 than BMC-MRL/lpr and BMC-Pre-MRL/lpr (Fig. 2f), reflecting the recipient bone marrow IL-17 conditions. Taken together, these findings indicated that transplanted hBMMSCs and SHED might suppress the abnormal IL-17 production in the recipient bone marrow of MRL/lpr mice, but they did not establish whether the recipient IL-17 conditions affected the dysregulated osteoclastic bone resorption and osteoblastic bone formation in MRL/lpr mice.
To probe the mechanism underlying these transplantation effects, we also transplanted hBMMSCs and SHED (1×10^5 per 10 g body weight) into wild-type C57BL/6 mice at 16 weeks of age and analyzed the effects 4 weeks after the infusion (Fig. S1a in Additional file 2). Human MSC transplantation had no effect on bone metabolism or IL-17 levels in wild-type recipients (Fig. S7 in Additional file 2). These findings suggested that recipient inflammatory conditions might influence the transplants' ability to modify recipient bone metabolism. In preliminary experiments, treatment with IL-17 or with conditioned medium from BMC-MRL/lpr cultures (CM-MRL/lpr) suppressed the osteogenic capacity of MSC-WT, as demonstrated by Alizarin Red staining (Fig. S8a and b in Additional file 2). Anti-IL-17 antibody treatment reversed the suppressed osteogenic capacity of IL-17- and CM-MRL/lpr-treated MSC-WT, compared with control antibody treatment (Fig. S8a and b in Additional file 2). These preliminary data suggested that IL-17 conditions in the recipient bone marrow might modulate the osteoblastic differentiation of recipient BMMSCs in MRL/lpr mice. We then examined whether IL-17-dependent immune conditions in recipient bone marrow affected recipient BMMSC-mediated bone formation. Mouse BMMSCs were cultured under osteogenic conditions with or without CM collected from BMC-hBMMSC and BMC-SHED cultures (CM-hBMMSC and CM-SHED, respectively), as well as CM-MRL/lpr, in the presence or absence of anti-mouse IL-17 antibody (Fig. 3a). The control antibody for the anti-mouse IL-17 antibody was used as the control treatment. Alizarin Red staining showed that CM-MRL/lpr significantly suppressed the osteogenic capacity of MSC-WT (Fig. 3b, Fig. S8b in Additional file 2), whereas CM-hBMMSC and CM-SHED had little suppressive effect on the osteogenic capability of MSC-WT (Fig. 3b). Anti-IL-17 antibody treatment neutralized the inhibition of osteogenic capability by the individual CMs, especially in the CM-MRL/lpr-treated group (Fig. 3b, Fig. S8b in Additional file 2). These results suggested that hyperactive IL-17 in the recipient bone marrow of MRL/lpr mice might cause the severely defective bone formation mediated by recipient BMMSCs.
Next, we investigated the effect of abnormal IL-17 in the recipient bone marrow of MRL/lpr mice on recipient BMMSC-mediated bone formation. Recipient BMMSCs, including MSC-MRL/lpr, MSC-hBMMSC and MSC-SHED, were treated with CM-MRL/lpr under osteogenic conditions. CM-MRL/lpr treatment significantly suppressed the mineralized deposition induced by MSC-hBMMSC and MSC-SHED (Fig. 3c). The suppressed MSC-hBMMSC- and MSC-SHED-induced mineralization showed rates similar to MSC-MRL/lpr-induced bone formation (Fig. 3c). This CM-MRL/lpr-induced suppression was recovered by treatment with anti-IL-17 antibody (Fig. 3c). We then cultured the individual recipient BMMSCs under osteogenic conditions in the presence of IL-17. IL-17 significantly suppressed the mineralized deposition induced by MSC-hBMMSC and MSC-SHED, which showed deposition rates similar to that of MSC-MRL/lpr (Fig. 3d). Anti-IL-17 antibody treatment completely neutralized the IL-17-suppressed osteogenic capacities of MSC-hBMMSC and MSC-SHED (Fig. 3d). These findings indicated that abnormal IL-17 in the bone marrow of MRL/lpr mice impaired the osteogenic capacity of recipient BMMSCs, and suggested that hBMMSC and SHED transplantation recovered the osteogenic dysfunction of recipient BMMSCs by inhibiting hyperactivated IL-17 in the recipient bone marrow of MRL/lpr mice.
Systemic human MSC transplantation rescues impaired recipient BMMSC-mediated osteoclast induction through IL-17-dependent immune conditions in the recipient bone marrow of MRL/lpr mice
We also preliminarily examined the effects of IL-17 and CM-MRL/lpr on osteoclast differentiation in a co-culture system of BMC-WT and wild-type mouse-derived calvarial cells (Calvaria-WT). BMC-WT were co-cultured with Calvaria-WT under stimulation with vitamin D3 and prostaglandin E2 in the presence or absence of IL-17 and CM-MRL/lpr (Fig. 4a). The control antibody for the anti-mouse IL-17 antibody was also used as the control treatment. TRAP staining showed that both IL-17 and CM-MRL/lpr treatments induced TRAP-positive MNCs in the co-culture system (Fig. 4b). Anti-IL-17 antibody treatment reduced the number of TRAP-positive MNCs under stimulation with CM-hBMMSC and CM-SHED (Fig. 4b). Subsequently, we co-cultured Calvaria-WT with recipient BMCs. The co-culture systems with BMC-hBMMSC and BMC-SHED yielded fewer TRAP-positive MNCs than that with BMC-MRL/lpr (Fig. 4c and d). Treatment with CM-MRL/lpr and IL-17 stimulated the reduced osteoclast induction in the co-culture systems with BMC-hBMMSC and BMC-SHED (Fig. 4c and d). Anti-IL-17 antibody treatment neutralized the CM-MRL/lpr- and IL-17-enhanced induction of TRAP-positive MNCs in the co-culture systems with BMC-hBMMSC and BMC-SHED (Fig. 4c and d). These results suggested that IL-17-dependent immune conditions in the recipient bone marrow might modulate osteoclastic differentiation and bone resorption in MRL/lpr mice.
To examine whether IL-17-dependent immune conditions in recipient bone marrow affect recipient BMMSC-mediated osteoclast induction, we pretreated recipient BMMSCs with CM of recipient BMC cultures or with IL-17, in the presence or absence of anti-IL-17 antibody. In the co-culture system of MSC-MRL/lpr and BMC-MRL/lpr, TRAP staining demonstrated that the CM-MRL/lpr-enhanced induction of TRAP-positive MNCs was reduced when the cells were pretreated with CM-hBMMSC or CM-SHED (Fig. 4e). BMC-MRL/lpr were then co-cultured with the individual recipient MSCs. MSC-hBMMSC and MSC-SHED showed less capacity to induce TRAP-positive MNCs than MSC-MRL/lpr (Fig. 4f and g). CM-MRL/lpr-pretreated recipient BMMSCs significantly enhanced the induction of TRAP-positive MNCs from BMC-MRL/lpr in the respective co-culture systems, but the pretreatment effects on MSC-hBMMSC and MSC-SHED were smaller than those on MSC-MRL/lpr (Fig. 4f). Anti-IL-17 antibody co-pretreatment blocked the CM-MRL/lpr-enhanced induction of TRAP-positive MNCs in the individual co-culture systems (Fig. 4f). We then examined whether IL-17 treatment of recipient BMMSCs directly affected their osteoclast inductivity in the co-culture system with BMC-MRL/lpr. IL-17-pretreated MSC-hBMMSC and MSC-SHED, as well as MSC-MRL/lpr, significantly induced TRAP-positive MNCs, but the inductivity of IL-17-pretreated MSC-hBMMSC and MSC-SHED was lower than that of IL-17-pretreated MSC-MRL/lpr (Fig. 4g). This increased inductivity of TRAP-positive MNCs was inhibited when the recipient BMMSCs were co-pretreated with anti-IL-17 antibody (Fig. 4g). These findings suggested that IL-17-dependent hyperimmune conditions in the recipient bone marrow of MRL/lpr mice might impair recipient BMMSCs such that they accelerate abnormal osteoclast induction, while human MSC transplantation might target the impaired recipient BMMSCs to suppress the enhanced osteoclast inductivity.
Functional downregulation of IL-17 in BMCs of MRL/lpr mice inhibits recipient BMMSC-mediated osteoclast differentiation
We examined whether IL-17 levels in recipient BMCs of MRL/lpr mice play a significant role in osteoclast differentiation through recipient BMMSCs. BMC-MRL/lpr were treated with mouse IL-17 siRNA or the control siRNA, and were co-cultured with recipient BMMSCs under stimulation with 1α,25-(OH)2 vitamin D3 and prostaglandin E2 (Fig. 5a). Some co-cultures were incubated in the presence of anti-mouse IL-17 antibody or the control antibody (Fig. 5a). Western blotting confirmed that siRNA against mouse IL-17 significantly downregulated the expression of IL-17 in BMC-MRL/lpr (Fig. 5b). IL-17 siRNA-treated BMC-MRL/lpr inhibited recipient BMMSC-mediated osteoclast induction in the individual co-culture systems, whereas control siRNA-treated BMC-MRL/lpr did not (Fig. 5c). Anti-IL-17 antibody treatment also suppressed osteoclast differentiation in co-culture systems of intact BMC-MRL/lpr and the individual recipient BMMSCs (Fig. 5d). Furthermore, to determine whether IL-17 levels in recipient BMCs affect the direct capability of the BMCs for osteoclast differentiation, recipient BMCs were cultured in a RANKL–M-CSF culture system (Fig. 5e). Although IL-17 siRNA-treated BMCs efficiently formed TRAP-positive MNCs, the efficiency was similar to that of control cultures treated with control siRNA (Fig. 5f). Anti-IL-17 antibody treatment showed no effect on TRAP-positive MNC formation from the individual recipient BMCs in the RANKL–M-CSF culture system when compared with control antibody treatment (Fig. 5g). In addition, osteoclast formation in the RANKL–M-CSF system showed similar efficiency among the individual recipient BMCs (Fig. 5f and g). These findings suggested that IL-17 levels in recipient BMCs might affect recipient BMMSC-mediated osteoclast differentiation of recipient BMCs, but not direct osteoclastogenesis of recipient BMCs.
Systemic treatment of anti-IL-17 antibody improves secondary bone loss in MRL/lpr mice
To investigate the role of IL-17 in skeletal metabolism in MRL/lpr mice, we systemically infused anti-IL-17 antibody into MRL/lpr mice at 16 weeks of age for 4 weeks.
Discussion
The incidence of secondary osteoporosis in SLE patients ranges from 6.3 % to 28 % worldwide [30,31]. Primary osteoporosis is generally caused by disruption of bone remodeling related to menopause and aging, whereas secondary osteoporosis in SLE is multifactorial, involving excessive systemic inflammation and long-term anti-inflammatory/immunosuppressive medications such as glucocorticoids [25]. These risk factors lead to fragility fractures, mainly vertebral fractures, which can progress to new fractures, increase the mortality rate, and reduce quality of life [32,33]. Among the numerous anti-osteoporosis drugs used to prevent and treat bone loss in SLE patients, bisphosphonates are first-line anti-resorptive agents, but they show little effect on bone reconstruction and exhibit side effects such as adverse effects on fetal development, osteonecrosis of the jaw, and musculoskeletal pain. Other anti-osteoporotic agents, such as estrogen, calcitonin, and raloxifene, exhibit various limitations in terms of potency, population effects, and side effects [34]. From the viewpoint of quality-of-life-based medicine, novel therapeutics to recover the bone loss in SLE patients have been strongly desired. In this study, following recent findings on human MSC-based therapy [14–18], we systemically transplanted hBMMSCs and SHED into SLE model MRL/lpr mice with a severe osteoporotic phenotype and assessed their therapeutic effects on bone loss. Our results demonstrated that human MSC-based therapy ameliorated bone reduction in MRL/lpr mice.

When the bone remodeling balance maintained by osteoblasts and osteoclasts is dysregulated, the skeletal system enters a pathological condition. In bone marrow, BMMSCs not only serve as a source of osteoblasts, but also support osteoclast differentiation [25–27]. Increasing evidence has shown impaired functions of BMMSCs derived from SLE patients [10,19,35,36] and SLE model mice [10], suggesting that a recipient BMMSC deficiency might participate in the pathology of secondary osteoporosis in SLE. Recent studies have demonstrated that systemic MSC transplantation into MRL/lpr mice improves the bone loss, although no study has focused on recipient BMMSCs in the bone regeneration process following systemic transplantation of MSCs [10,16,17]. Therefore, we hypothesized that the therapeutic effects of systemically transplanted MSCs on bone regeneration in SLE are mediated by recovery of the deficient host BMMSCs. Interestingly, in the present study, the exogenous hBMMSCs and SHED recovered the impaired bone-forming capability of recipient BMMSCs and reduced abnormal osteoclast induction via recipient BMMSCs in the bone marrow of MRL/lpr mice. These findings suggest that recovery of the impaired recipient BMMSCs might be a critical step in regenerating the skeletal loss in MRL/lpr mice after human MSC transplantation. However, further studies should explore the cellular and molecular mechanisms through which exogenous MSCs improve the deficiency of recipient BMMSCs.
[Fig. 5 caption (displaced in extraction): Functional downregulation of IL-17 in bone marrow cells of MRL/lpr mice suppressed recipient BMMSC-mediated osteoclastogenesis. a–d Osteoclast induction in co-cultures of BMC-MRL/lpr (pretreated with mouse IL-17 siRNA or control siRNA) and recipient BMMSCs, stimulated with 1α,25-(OH)2 vitamin D3 and prostaglandin E2 in the presence of anti-mouse IL-17 antibody or control antibody; IL-17 knockdown was confirmed by Western blotting and osteoclast induction scored by TRAP staining. e–g Direct osteoclast induction of recipient BMCs in a RANKL + M-CSF system under the same siRNA and antibody treatments. n = 5 for all groups; values are means ± SD; *P < 0.05, with additional symbols denoting the pairwise comparisons detailed in the original legend. MNC, multinucleated cell; MSC, mesenchymal stem cell; SHED, stem cells from exfoliated deciduous teeth.]

Inflammation shifts skeletal homeostasis toward a bone-resorptive condition. Secondary osteoporosis in SLE involves a complex interplay of hyperactivated immune reactions and abnormal bone metabolism. The proinflammatory cytokine IL-17, which is produced by Th17 cells [37], has been investigated as a participant in various autoimmune diseases [38,39], and these findings suggest a novel role for IL-17 in bone diseases such as rheumatoid arthritis and osteoporosis [40,41]. In this study, we demonstrated that IL-17-dependent hyperimmune conditions in the recipient bone marrow of MRL/lpr mice impaired recipient BMMSCs, suppressing their osteogenic function and accelerating osteoclast induction. Our IL-17 neutralization experiments also provided strong evidence that abnormal IL-17 expression in the bone marrow of MRL/lpr mice impaired both recipient BMMSC-mediated osteogenesis and recipient BMMSC-mediated osteoclast induction. However, our IL-17 siRNA and neutralizing-antibody experiments also demonstrated that IL-17 levels in recipient BMCs affect recipient BMMSC-mediated osteoclast differentiation of recipient BMCs, but do not influence osteoclastogenesis of recipient BMCs directly. In bone metabolism, IL-17 induces osteoclastic differentiation through osteoclastogenesis-supporting cells, such as mesenchymal cells and osteoblasts, but not in direct osteoclast induction stimulated by M-CSF and RANKL [42,43].
IL-17 also directly inhibits osteoblastic differentiation of MSCs [44]. Therefore, these findings suggest that hyperactivated IL-17 in recipient bone marrow may be responsible for secondary osteoporosis in MRL/lpr mice by impairing recipient BMMSCs. Furthermore, the present systemic transplantation of hBMMSCs and SHED suppressed the increased expression of bone marrow IL-17 in MRL/lpr mice, and restored the impaired functions of recipient BMMSCs in bone metabolism. NF-κB activation by proinflammatory cytokines including IL-17 is known to reduce the bone formation capacity of host BMMSCs [19,44]. Further efforts will be required to elucidate the crucial mechanism of IL-17 in the recipient BMMSC-based pathology of secondary osteoporosis in SLE, which may lead to novel recipient BMMSC-targeting therapeutics for skeletal disorders.
IL-17 antibody treatments have emerged as a novel therapeutic approach for immune-mediated diseases such as psoriasis, rheumatoid arthritis, psoriatic arthritis and ankylosing spondylitis [45,46]. Experimental evidence has demonstrated that anti-IL-17 therapy can protect against bone destruction in rheumatoid arthritis by reducing the number of osteoclasts in joints as well as the number of Th17 cells [47]. Anti-IL-17 antibody also prevents skeletal loss in osteoporosis by enhancing osteoblastic bone formation and suppressing osteoclastic bone resorption, in addition to protecting against IL-17-mediated immune damage [48]. Several direct IL-17 inhibitors (for example, secukinumab and ixekizumab, anti-IL-17A monoclonal antibodies) have shown exciting advances in proof-of-concept and phase II clinical trials, but further evaluation in phase III clinical trials in multiple autoimmune and immune-related inflammatory diseases is awaited [49]. In this study, we demonstrated that systemic treatment with anti-IL-17 antibody recovered bone reduction in MRL/lpr mice. The use of IL-17 antibody in treating secondary osteoporosis will not only provide a therapeutic method but also improve the understanding of disease pathogenesis.
In this study, we found that pathological immune conditions affected recipient MSCs, and that hBMMSC and SHED transplantation improved the bone reduction by restoring IL-17-impaired recipient BMMSCs after migration into the damaged bone marrow. These results suggest that abnormal recipient BMMSCs may undergo correction of their primary functions through the post-transplantation actions of human MSCs. Although the therapeutic mechanism of engrafted human MSCs at the target bone site is not fully understood, several possibilities may be involved in the post-transplantation behavior of human MSCs in impaired bone marrow. Migrated human MSCs have the potential to participate directly in bone regeneration by differentiating into osteoblasts and suppressing osteoclast induction [25–27]. However, bone reconstruction is affected by proinflammatory cytokines at bone defect sites [50]. In addition, MSCs act as cellular modulators through immunomodulatory and trophic effects [51]. Immunomodulatory functions are induced by cell–cell contact and include FasL-mediated T cell apoptosis [52], CCR6-mediated Th17 cell inhibition [53], and MSC-secreted molecule (e.g., IL-10)-mediated Th17 cell suppression [54]. Trophic molecules released from MSCs, such as macrophage inflammatory protein-1, stromal cell-derived factor-1, transforming growth factor-β1, and vascular endothelial growth factor, can inhibit apoptosis and scar formation [55]. Therefore, exogenous MSCs may exert their immunomodulatory and trophic functions by secreting bioactive molecules to recover impaired recipient BMMSCs. Further studies will be necessary to explore the deeper mechanisms of transplanted MSC-mediated recovery of recipient BMMSCs.
Conclusions
The present study demonstrates that systemic transplantation of human MSCs, including SHED and hBMMSCs, ameliorates severe bone reduction, as well as primary SLE disorders, in MRL/lpr mice. The therapeutic efficacy is mediated by recovery of the impaired functions of recipient BMMSCs to regulate osteogenesis and osteoclastogenesis via IL-17 suppression in the recipient bone marrow. These data indicate that IL-17, as a cause of secondary osteoporosis in SLE, might be a therapeutic target of transplanted human MSCs in this skeletal disorder. Further studies will be necessary to explore new cellular and molecular strategies to overcome the recipient BMMSC-based pathology of secondary osteoporosis in SLE and to develop a novel recipient BMMSC-targeting approach in MSC-based therapy for skeletal regeneration.

Abbreviations

ALP: alkaline phosphatase; αMEM: alpha minimum essential medium; BMC: bone marrow cell; BMC-WT: wild-type mouse-derived bone marrow cells; BMD: bone mineral density; BMMSC: bone marrow mesenchymal stem cell; BV/TV: bone volume/trabecular volume; Calvaria-WT: wild-type mouse-derived calvarial cells; CFSE: carboxyfluorescein diacetate succinimidyl ester; CFU-F: colony-forming unit fibroblasts; CM: conditioned medium; CTX: C-terminal telopeptides of type I collagen; ELISA: enzyme-linked immunosorbent assay; FBS: fetal bovine serum; hBMMSC: human bone marrow mesenchymal stem cell; hUCMSC: human umbilical cord-derived mesenchymal stem cell; IFN: interferon; IL: interleukin; M-CSF: macrophage-colony stimulating factor; microCT: micro-computed tomography; MNC: multinucleated cell; MSC: mesenchymal stem cell; MSC-WT: wild-type mouse-derived mesenchymal stem cells; NFATc1: nuclear factor of activated T cells 1; NF-κB: nuclear factor κB; PBS: phosphate-buffered saline; RT-PCR: reverse transcription polymerase chain reaction; Runx2: runt-related transcription factor 2; SHED: stem cells from exfoliated deciduous teeth; SLE: systemic lupus erythematosus; sRANKL: soluble receptor activator for nuclear factor κB ligand; Tb.N: trabecular number; Tb.Sp: trabecular separation; Tb.Spac: trabecular space; Tb.Th: trabecular thickness; Th17: IL-17-producing helper T cell; TRAP: tartrate-resistant acid phosphatase.
|
2017-06-28T05:03:12.543Z
|
2015-05-27T00:00:00.000
|
{
"year": 2015,
"sha1": "313a0427b83a8a20a87282461a583800132d4964",
"oa_license": "CCBY",
"oa_url": "https://stemcellres.biomedcentral.com/track/pdf/10.1186/s13287-015-0091-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "313a0427b83a8a20a87282461a583800132d4964",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
255983548
|
pes2o/s2orc
|
v3-fos-license
|
Maternal interchromosomal insertional translocation leading to 1q43-q44 deletion and duplication in two siblings
1q43-q44 deletion syndrome is a well-defined chromosomal disorder which is characterized by moderate to severe mental retardation, and variable but characteristic facial features determined by the size of the segment and the number of genes involved. However, patients with 1q43-q44 duplication with a clinical phenotype comparable to that of 1q43-q44 deletion are rarely reported. Moreover, pure 1q43-q44 deletions and duplications derived from balanced insertional translocation within the same family with precisely identified breakpoints have not been reported. The proband is a 6-year-old girl with profound developmental delay, mental retardation, microcephaly, epilepsy, agenesis of the corpus callosum and hearing impairment. Her younger brother is a 3-month-old boy with macrocephaly and mild developmental delay in gross motor functions. G-banding analysis of the subjects at the 400-band level did not reveal any subtle structural changes in their karyotypes. However, single-nucleotide polymorphism (SNP) array analysis showed a deletion and a duplication of approximately 6.0 Mb at 1q43-q44 in the proband and her younger brother, respectively. The Levicare analysis pipeline of whole-genome sequencing (WGS) further demonstrated that a segment of 1q43-q44 was inserted at 14q23.1 in the unaffected mother, which indicated that the mother was a carrier of a 46,XX,ins(14;1)(q23.1;q43q44) insertional translocation. Moreover, Sanger sequencing was used to assist the mapping of the breakpoints and the final validation of those breakpoints. The breakpoint on chromosome 1 disrupted the EFCAB2 gene in the first intron, and the breakpoint on chromosome 14 disrupted the PRKCH gene within the 12th intron. In addition, fluorescence in situ hybridization (FISH) further confirmed that the unaffected older sister of the proband carried the same karyotype as the mother. Here, we describe a rare family exhibiting pure 1q43-q44 deletion and duplication in two siblings caused by a maternal balanced insertional translocation. Our study demonstrates that WGS with a carefully designed analysis pipeline is a powerful tool for identifying cryptic genomic balanced translocations and mapping the breakpoints at the nucleotide level and could be an effective method for explaining the relationship between karyotype and phenotype.
Background
The clinical phenotype of 1q43-q44 deletion or duplication is highly variable, depending on the size of the segment and the number of genes involved. The phenotypic features of patients with 1q43-q44 deletion include moderate to severe mental retardation, developmental retardation, microcephaly, corpus callosum dysplasia, epilepsy and dysmorphic features. In individuals with 1q43-q44 duplication, the most recognizable features are macrocephaly, mental retardation, epilepsy and mild malformation. Interstitial deletions of the long arm of chromosome 1 involving only the 1q43-q44 region have been reported in more than 80 patients, with most of these deletions arising de novo [1–7]. A few individuals exhibiting pure 1q43-q44 interstitial duplication have been reported [8–15]. However, both pure 1q43-q44 deletion and duplication occurring in the same family have not been reported.
Several cytogenetic and molecular techniques have been applied to detect the deletion or duplication of pathogenic copies, such as G-banding, fluorescence in situ hybridization (FISH) and chromosomal microarrays (CMAs). However, these techniques present individual limitations and can often be technically challenging. Recent studies have shown that whole-genome sequencing (WGS) with a carefully designed data analysis pipeline is a more powerful tool for detecting chromosomal abnormalities due to its higher resolution and the ability to detect balanced translocations and small imbalances that cannot be detected with CMAs [16].
Insertional translocations are complex chromosomal rearrangements that require at least three breakpoints in the involved chromosome, with an incidence of 1:80,000 in live births [17]. Insertional translocations can be divided into simple intrachromosomal or interchromosomal insertional translocations and complex chromosomal insertional translocations [18]. Nowakowska et al. [19] found that 2.1% of de novo copy number variations (CNVs) are actually inherited from a parental balanced insertional translocation. However, this percentage may represent an underestimate because not all parental data may be collected in these studies, and due to technical limitations, some small imbalances have not yet to be discovered. However, WGS can identify nearly all cryptic chromosomal abnormalities or complex rearrangements present in the genome, in addition to characterizing translocation breakpoints at the nucleotide level.
Herein, we present a rare family in which two siblings presented with congenital anomalies. These two individuals harboured an approximately 6.0-Mb deletion or duplication of 1q43-q44 inherited from their mother, a carrier of a cryptic balanced insertional translocation. We further precisely identified the corresponding breakpoints via WGS and Sanger sequencing. This is the first report of the detection of an insertional translocation associated with 1q43-q44 deletion and duplication using WGS.
Case presentation
The proband (III-3, Fig. 1a) is the third child of a non-consanguineous, healthy couple. She is a 6-year-old Chinese girl with profound developmental delay, microcephaly, agenesis of the corpus callosum, epilepsy, delayed language development and hearing impairment. She was born at full term after an uncomplicated spontaneous vaginal delivery with a normal birth weight (3400 g). She experienced seizures four times at 3 months of age, with spontaneous remission occurring after more than 10 s. At 7 months of age, she began turning over but could not grasp objects or sit without support. Intellectual evaluation with the Gesell Development Schedule (GDS) showed that her developmental level at 7 months of age was equivalent to that of a 10-week-old infant, indicating significant growth retardation [20]. The detailed data are shown in Additional file 1: Table S1. Brain magnetic resonance imaging (MRI) indicated absence of the corpus callosum and bilateral enlargement of the posterior horns of the lateral ventricles. The results of brainstem auditory evoked potential (BAEP) analysis indicated bilateral hearing impairment. At 6 years of age, the proband presented with microcephaly (head circumference 47.2 cm, < −2 SD) and had begun learning to walk but could not speak (Fig. 1b).
The elder brother (III-1, Fig. 1a) presented similar features to the proband, including developmental delay, cerebral palsy and intracranial haemorrhage after birth, and died at 5 years of age. The elder sister (III-2, Fig. 1a) has a normal phenotype. The younger brother (III-4, Fig. 1a) is the fourth child. At 34 weeks of gestation, an MRI scan of the foetal head and ultrasonography revealed no obvious abnormalities. He was born at term via caesarean section after an uneventful pregnancy and exhibited a normal birth weight (3000 g). At the age of 3 months, he presented with macrocephaly (Fig. 1c), with a head circumference of 44.5 cm (> +2 SD). His developmental level was equivalent to that of an 11-week-old infant, with testing demonstrating a borderline full-scale developmental quotient (85), and he exhibited developmental delay in gross motor functions (Additional file 1: Table S1).
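For reference, a Gesell developmental quotient is conventionally computed as (developmental age / chronological age) × 100; the small Python sketch below applies this standard formula to the ages reported above and is an illustration only, not the scoring procedure actually used by the examiners.

def developmental_quotient(dev_age_weeks: float, chron_age_weeks: float) -> float:
    """Conventional DQ = developmental age / chronological age * 100."""
    return dev_age_weeks / chron_age_weeks * 100.0

# Proband at ~7 months (~30 weeks) functioning at a ~10-week level.
print(round(developmental_quotient(10, 30)))   # ~33, consistent with profound delay
# Younger brother at ~3 months (~13 weeks) functioning at an ~11-week level.
print(round(developmental_quotient(11, 13)))   # ~85, the reported borderline DQ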
Materials and methods
G-banding at a band resolution of ∼400 was performed on metaphase peripheral blood lymphocytes obtained from the proband (III-3) and four other family members (II-4, II-5, III-2 and III-4) according to the laboratory's protocols. DNA was isolated from their peripheral blood lymphocytes using the QIAamp® DNA blood midi kit (QIAGEN, Hilden, Germany). Single-nucleotide polymorphism (SNP) array analysis was performed using Cytoscan 750 K chips (Affymetrix, Santa Clara, CA, USA) as described in our previous report [21]. The data were analysed using ChAS chromosome analysis software (Affymetrix, Santa Clara, CA, USA). To confirm the chromosomal imbalances of the patients and determine whether they were de novo or inherited from the parents, the parental DNA was evaluated by whole-genome low-coverage sequencing. Briefly, a non-size-selected mate-pair library was prepared using ~3 μg of genomic DNA and then subjected to 50-bp-end multiplex sequencing on the Illumina HiSeq™ X10 platform. After automatic removal of adaptor sequences and low-quality reads, high-quality paired-end reads were aligned to the NCBI human reference genome (GRCh37/hg19) with SOAP2. Uniquely mapped reads were selected for subsequent analysis as previously described in detail [22]. After the bioinformatic analysis, we obtained the candidate breakpoint regions. The precise breakpoints were further confirmed by PCR and Sanger sequencing, and the genomic locations of the breakpoints were analysed according to the February 2009 (GRCh37/hg19) assembly in the UCSC Genome Browser (http://genome.ucsc.edu). Primers targeting the flanking sequences of the candidate breakpoints of chromosomes 1 and 14 were designed with Primer 5 software and are listed in Additional file 2: Table S2. To validate the abnormal karyotype, FISH was performed on metaphase chromosomes of peripheral blood lymphocytes using whole chromosome probes (WCPs) for chromosomes 1 and 14 and a centromere probe (CEP) for chromosome 14 (Cyto-Trend, HK, China) following the manufacturer's instructions. Chromosomes 14 and 22, which have homologous centromeric regions, were distinguished based on their different lengths.
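As a rough illustration of how an interchromosomal insertion can be flagged from aligned WGS data, the Python sketch below scans a coordinate-sorted BAM for read pairs whose mates map to different chromosomes within the regions of interest. The file name, region coordinates and quality threshold are placeholders and do not reproduce the exact analysis pipeline used in this study.

# Hypothetical sketch: count discordant read pairs linking 1q43-q44 and chr14.
# Requires pysam; "sample.bam" and the coordinates below are placeholders.
import pysam

BAM_PATH = "sample.bam"
REGION_CHR1 = ("1", 239_000_000, 245_200_000)   # approx. 1q43-q44 (GRCh37)
TARGET_MATE_CHR = "14"

discordant = []
with pysam.AlignmentFile(BAM_PATH, "rb") as bam:
    for read in bam.fetch(*REGION_CHR1):
        if (read.is_paired and not read.is_unmapped and not read.mate_is_unmapped
                and read.mapping_quality >= 30
                and read.next_reference_name == TARGET_MATE_CHR):
            discordant.append((read.reference_start, read.next_reference_start))

print(f"{len(discordant)} read pairs bridge chr1q43-q44 and chr14")
# Clusters of such pairs around a single chr14 coordinate would suggest
# candidate insertion breakpoints to be confirmed by PCR and Sanger sequencing.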
Results
G-banding analysis at a band resolution of ∼400 revealed no karyotype abnormalities in the proband (Fig. 1d) or the four other family members. However, further SNP array analysis indicated pathological CNVs in the proband and her younger brother (Fig. 1e).

[Fig. 1 caption (displaced in extraction): a Three-generation pedigree, with the proband (III-3) indicated by an arrow; affected individuals (III-3, monosomy 1q43-q44; III-4, trisomy 1q43-q44) are marked with filled symbols, and carriers of the cryptic insertion (II-5, III-2) with dots; G-banding analysis was not performed for the elder brother (III-1) due to lack of a sample. b, c Facial profiles of the proband (microcephaly) at 6 years and the younger brother (macrocephaly) at 3 months. d G-banding analysis of the proband (III-3) at a band resolution of ∼400 showed no visible karyotype abnormality. e SNP-array results: the proband (III-3) harboured an interstitial 1q43-q44 deletion (upper) and the younger brother (III-4) an interstitial 1q43-q44 duplication (lower); the deleted and duplicated regions are indicated by red arrows. f FISH results for the mother (II-5) with WCP1/14 (left) and WCP1/CEP14 (right) probes; chromosome 1 is shown in green and chromosome 14 in red (left), and WCP1 and CEP14 are shown in green and red, respectively (right).]
WGS analysis of the parents (II-4 and II-5) revealed a normal karyotype for the father but ~3.78 million misaligned reads for the mother. Further analysis showed that two records were highly credible (p < 0.001), roughly described as chr14-chr1:62,011,989-245,138,646 and chr14-chr1:62,006,695-239,045,980. These abnormal records indicated insertion of the 1q43-q44 segment into 14q23.1 in the mother's genome, which was confirmed via FISH using the WCP1/14 and WCP1/CEP14 probes (Fig. 1f). The combination of WGS and FISH analyses revealed that the mother exhibited a 46,XX,ins(14;1)(q23.1;q43q44) karyotype. The three breakpoints were further determined by PCR and Sanger sequencing. Sanger sequencing confirmed that the first breakpoint on chromosome 1 was located at chr1:239,045,641-239,046,656, the second breakpoint on chromosome 1 at chr1:245,145,720-245,145,726, and the breakpoint on chromosome 14 at chr14:62,011,535-62,011,546. There were no genes near the first breakpoint on chromosome 1. By contrast, the EFCAB2 gene was disrupted in its first intron by the second breakpoint on chromosome 1, and the breakpoint on chromosome 14 disrupted the PRKCH gene within its 12th intron (Fig. 2a and b). Moreover, some small imbalances and microhomology sequences were also observed near these breakpoint sites (Fig. 2c-e).
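A small sketch of the kind of check used to see whether a confirmed breakpoint falls within a gene is given below; the gene intervals are rough placeholders standing in for coordinates that would normally be taken from a GRCh37 annotation (e.g., the UCSC Genome Browser), not exact values from this study.

# Placeholder gene intervals (GRCh37); real coordinates would come from an
# annotation source such as the UCSC Genome Browser.
GENES = {
    "EFCAB2": ("1", 245_133_000, 245_320_000),   # approximate, for illustration
    "PRKCH":  ("14", 61_788_000, 62_018_000),    # approximate, for illustration
}

# Breakpoint positions reported above (start coordinates of the confirmed intervals).
breakpoints = {
    "chr1 breakpoint 2": ("1", 245_145_720),
    "chr14 breakpoint":  ("14", 62_011_535),
}

def genes_hit(chrom: str, pos: int) -> list[str]:
    """Return the genes whose interval contains the given position."""
    return [g for g, (c, start, end) in GENES.items() if c == chrom and start <= pos <= end]

for label, (chrom, pos) in breakpoints.items():
    print(label, "->", genes_hit(chrom, pos) or "intergenic")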
Further family analysis by G-banding and FISH confirmed that the elder sister carries the same balanced insertional translocation as the mother.
Discussion
Here, we report a rare family in which two siblings exhibit 1q43-q44 deletion or duplication, respectively. The combination of SNP array, WGS and FISH analyses showed that both the deletion and duplication resulted from a 6.0-Mb cryptic balanced insertion of material from 1q43-q44 inserted into 14q23.1.
Chromosome 1q43-q44 deletion syndrome (OMIM: #612377) is characterized by moderate to severe mental retardation, limited or no speech, and variable but characteristic facial features, including a round face, prominent forehead, flat nasal bridge, hypertelorism, epicanthal folds, and low-set ears. Other characteristics may include developmental retardation, microcephaly, agenesis of the corpus callosum, and seizures [2]. Compared with 1q43-q44 deletion, the clinical manifestations of patients with 1q43-q44 duplication may be mild and mainly include macrocephaly, mental retardation and mild malformation [8,23]. The clinical features of previously described patients with deletion or duplication of chromosome 1 overlapping q34-q44 are shown in Table 1 for a comparison of phenotypic differences. In the present study, the proband exhibiting 1q43-q44 deletion was found to show some characteristic features, such as profound developmental delay, microcephaly, agenesis of the corpus callosum, epilepsy, and unusual hearing impairment. The younger brother carrying this duplicated region presented with macrocephaly and mild developmental delay in gross motor functions. We speculate that some symptoms have not yet emerged in this child because he is very young or that other symptoms might be very mild.
Many patients with 1q43-q44 deletion or duplication including part or all of the regions identified in our patients have been reported. However, the identification of well-detailed genotype-phenotype correlations has been hindered by inaccurate mapping of the breakpoints, owing to the use of karyotyping or FISH analyses before the era of high-resolution cytogenetics, and by the fact that the affected 1q43-q44 regions differ in size and location among patients. We identified a cryptic chromosomal rearrangement in the mother (II-5) via WGS and confirmed it via FISH. Moreover, we accurately mapped the breakpoints with a combination of a carefully designed data analysis pipeline and Sanger sequencing. The combination of these molecular and cytogenetic techniques characterized the breakpoints at the base-pair level and identified two intron-disrupted genes, EFCAB2 and PRKCH, that have observable clinical phenotypes.
In our study, the two patients with 1q43-q44 deletion or duplication presented congenital anomalies, and the proband exhibited a more serious phenotype than her younger brother. Thus, dosage effects or pathogenic variants of some genes within 1q43-q44 likely contribute to their phenotypes. There are 20 known genes that lie within the 6.0-Mb genomic region, 9 of which are indicated to be disease genes in the Online Mendelian Inheritance in Man (OMIM) database according to NCBI Map Viewer (https://www.ncbi.nlm.nih.gov/mapview). The details of these 9 OMIM disease genes and their clinical characteristics and inheritances are shown in Table 2. We analysed the dominant genes for potential dose-effect phenotypes. The AKT3 gene encodes a serine-threonine kinase belonging to the protein kinase B family that is highly expressed in the brain tissue of humans and rodents [24]. The expression of this gene is significantly decreased in the brain and corpus callosum of AKT3-null mice [25], and some studies in humans and mice have demonstrated that AKT3 plays an important role in controlling the sizes of cells and organs [26,27]. Boland et al. [5] reported a patient with a 46,XY,t(1;13)(q44;q32) translocation who presented postnatal microcephaly and agenesis of the corpus callosum and demonstrated that AKT3 was a candidate gene for these phenotypes. Another study showed that a critical region comprising CEP170, SDCCAG8 and AKT3 was associated with microcephaly [4]. However, among patients with 1q43-q44 duplication, macrocephaly is observed in patients exhibiting AKT3 gene duplication [8]. The ZBTB18 gene encodes a protein that acts as a transcriptional repressor of key pro-neurogenic genes. Xiang et al. [28] found that conditional knockout of the ZBTB18 gene in the central nervous system resulted in microcephaly, reduced thickness of the cortex, agenesis of the corpus callosum, and cerebellar hypoplasia. Thus, ZBTB18 was proposed as the most likely candidate gene for corpus callosum abnormalities [2,29]. These studies support pathological roles of AKT3 and ZBTB18 in the 1q43-q44 region. Furthermore, our findings support the notion that ATK3 is a dosage-effect gene that may explain microcephaly or macrocephaly in patients with 1q43-q44 deletion or duplication, including our proband and her younger brother. Some studies have indicated that HNRNPU plays an important role in the regulation of embryonic brain development, and genetic mutation of HNRNPU might cause epileptic encephalopathy and intellectual disability [30][31][32][33]. Therefore, the HNRNPU gene may contribute to the seizure phenotypes of patients harbouring 1q43-q44 microdeletions. Furthermore, Bhatti et al. [34] found that homozygosity of 1q43-q44 deletion might cause non-syndromic hearing impairment and deemed a region containing CHLM, OPN3 and MAP1LC3C a new autosomal recessive nonsyndromic hearing impairment locus. In the present study, the proband also showed bilateral hearing impairment, but it may have been caused by genes of unknown function or other pathogenic factors. The elder brother presented a similar phenotype to that of the proband, and we cannot rule out the possibility that he might have exhibited the same karyotype as the proband. 
Analysis of families harbouring translocations via WGS and the associated analysis strategy can help us to gain a better understanding of the relationship between phenotype and karyotype, in addition to providing evidence for genetic and reproductive counselling, which may be especially important for the unaffected mother and sister, who are carriers of the insertional translocation.
Accurate breakpoint mapping not only facilitates the elucidation of the relationship between phenotype and karyotype but also offers insights into the possible mechanisms involved in the generation of balanced translocations. In this study, the molecular characterization of the breakpoints showed that they occurred in homologous regions between two non-homologous chromosomes, in addition to demonstrating the presence of small imbalances around the breakpoint sites. These findings suggest that the translocation was likely generated through microhomology-mediated repair (MHMR) of double-strand breaks.
Conclusion
In summary, we report a rare family in which two siblings exhibit pure 1q43-q44 deletion or duplication caused by a maternal balanced insertional translocation. Our study demonstrated that WGS is a powerful tool that allows rapid and accurate mapping of translocation breakpoints at the nucleotide level and could provide useful information for genetic and reproductive counselling for balanced translocation carriers. In addition, the results may help us to better understand detailed karyotype-phenotype correlations, and investigate the possible mechanisms underlying the generation of translocations.
Additional files
Additional file 1:
|
2023-01-19T21:09:08.925Z
|
2018-04-04T00:00:00.000
|
{
"year": 2018,
"sha1": "f1e7e079905a418dc896d3853d3a4fea6661c265",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13039-018-0371-7",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "f1e7e079905a418dc896d3853d3a4fea6661c265",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
}
|
219955878
|
pes2o/s2orc
|
v3-fos-license
|
Formation of Complex Organic Molecules in Cold Interstellar Environments through non-diffusive grain-surface and ice-mantle chemistry
A prevailing theory for the interstellar production of complex organic molecules (COMs) involves formation on warm dust-grain surfaces, via the diffusion and reaction of radicals produced through grain-surface photodissociation of stable molecules. However, some gas-phase O-bearing COMs, notably acetaldehyde(CH$_3$CHO), methyl formate(CH$_3$OCHO), and dimethyl ether(CH$_3$OCH$_3$), are now observed at very low temperatures, challenging the warm scenario. Here, we introduce a selection of new non-diffusive mechanisms into an astrochemical model, to account for the failure of the standard diffusive picture and to provide a more generalized scenario of COM formation on interstellar grains. New generic rate formulations are provided for cases where: (i) radicals are formed by reactions occurring close to another reactant, producing an immediate follow-on reaction; (ii) radicals are formed in an excited state, allowing them to overcome activation barriers to react with nearby stable molecules; (iii) radicals are formed through photo-dissociation close to a reaction partner, followed by immediate reaction. Each process occurs without the diffusion of large radicals. The new mechanisms significantly enhance cold COM abundances, successfully reproducing key observational results for prestellar core L1544. H-abstraction from grain-surface COMs, followed by recombination, plays a crucial role in amplifying chemical desorption into the gas phase. The UV-induced chemistry produces significant COM abundances in the bulk ices, which are retained on the grains and may persist to later stages. O$_2$ is also formed strongly in the mantle though photolysis, suggesting cometary O$_2$ could indeed be interstellar.
INTRODUCTION
Complex organic molecules (COMs), usually defined as carbon-bearing molecules with six or more atoms, have been detected within the interstellar medium and in various protoplanetary environments (Blake et al. 1987;Fayolle et al. 2015;Bottinelli et al. 2010;Arce et al. 2008;Öberg et al. 2010;Bacmann et al. 2012). COMs synthesized at the early stages of star formation are suggested to have been a starting point for the organic materials that went on to seed the nascent solar system (Herbst & van Dishoeck 2009). While the degree to which the interstellar synthesis of COMs may contribute to pre-biotic/biotic chemistry on Earth is still a matter of debate, many recent studies have shed light on the possible interstellar/protostellar origins of chemical complexity. For instance, the sugar-like molecule glycolaldehyde (CH 2 (OH)CHO) has been detected toward the class 0 protostellar binary source IRAS16293-2422 (Jørgensen et al. 2012), as well as the Galactic Centre source Sgr B2(N) and other hot cores (Hollis et al. 2000;Beltrán et al. 2009).
It would appear, therefore, that -regardless of other mechanisms such as gas-phase processes or radiolysis -the standard rate description of grain-surface reactions is insufficient to treat all temperature regimes; there are situations in which reactants may be produced and rapidly meet (and react), either without diffusion of radicals or with some minimal amount of diffusion that does not obey the more general rate treatment.
In this study, we present a relatively simple formulation for non-diffusive chemistry, for use in standard gas-grain chemical models, that allows a newly-formed reaction product to react further with some other chemical species that happens to be in close proximity to the product(s) of the first reaction. Due to the instantaneous nature of this process, we refer to it here as a "three-body reaction mechanism" (3-B). We also consider a similar, related mechanism in which the new product has sufficient excitation energy to allow it to overcome the activation-energy barrier to its reaction with some nearby species; here, specifically with CO or H 2 CO. This mechanism is referred to here as the "three-body excited formation mechanism" (3-BEF).
A functionally-similar mechanism was included in the recent model of the solid-phase chemistry of cometary nuclei by Garrod (2019), but rather with the initiating process being the production of radicals by UV-induced photodissociation. In preliminary versions of that model that did not include that mechanism, it was found that the photodissociation of bulk ice molecules at temperatures 5-10 K was capable of producing implausibly high abundances of reactive radicals, which were unable to react due to the lack of bulk thermal diffusion at those temperatures. The Garrod (2019) model includes a new reaction process whereby a newly-formed photo-product may react immediately with a nearby reaction partner in the ice. This mechanism is included in the present model also, which we label as the "photodissociation-induced reaction mechanism" (PDI). Garrod & Pauly (2011) used a similar mechanism to the 3-body mechanisms to explain the formation of CO 2 ice at low temperatures. In their treatment, the production of an OH radical via the reaction of H and O atoms in proximity to a CO molecule could allow the immediate formation of CO 2 (overcoming a modest activation energy barrier). Their models successfully reproduced the observed behavior of CO, CO 2 , and water ice in dense clouds, and showed that such non-diffusive processes could be handled within a standard gas-grain chemical model. More recently, Chang & Herbst (2016) implemented a similar process, which they called a "chain reaction mechanism" in their microscopic Monte Carlo simulation, achieving abundances of gas-phase COMs high enough to reproduce the observational values toward cold cores (at a temperature of ∼10 K), using a chemical desorption efficiency of 10% per reaction. Dulieu et al. (2019), seeking to explain the surface production of NH 2 CHO in laboratory experiments involving H 2 CO, NO and H deposition, introduced a non-diffusive reaction treatment for a single reaction (see also Sec. 2.4).
Finally, a simple treatment for the Eley-Rideal process is included in the present model, in which an atom or molecule from the gas-phase is accreted directly onto a grain-surface reaction partner, resulting in immediate reaction (mediated by an activation-energy barrier, where appropriate). Such processes have been included in similar models before (e.g. Ruaud et al. 2015), but are included here for completeness in the consideration of all mechanisms by which reactants may instantly be brought together without a mediating diffusion mechanism.
The formulations presented here for the above processes also allow, through a repetitive application of the main nondiffusive reaction process (i.e. the three-body mechanism), for the products of each of those processes themselves to be involved in further non-diffusive reaction events (in cases where such processes are allowed by the reaction network). Thus, for example, an Eley-Rideal reaction may be followed by an immediate secondary non-diffusive reaction. The importance of such repetitive processes will diminish with each iteration.
All of the above non-diffusive reaction mechanisms are considered in the model, with a particular emphasis on the production of the O-bearing COMs that are now detected in the gas-phase in cold prestellar cores. The formulations corresponding to each of the new mechanisms presented here are functionally similar to each other, but quite different from the standard diffusive reaction formula used in typical astrochemical models. However, they are fully compatible with the usual treatment and may be used in tandem with it.
With the introduction of the new mechanisms, we run multi-point chemical models of prestellar core L1544 to test their effectiveness in an appropriate environment. A spectroscopic radiative-transfer model is implemented here as a means to evaluate the observable column densities of molecules of interest, allowing the direct comparison of the model results with observations. This paper is structured as follows. The chemical model and the newly-implemented mechanisms are described in § 2. The results of the models are explored in § 3, with discussion in § 4. Conclusions are summarized in § 5. 3.2(-4) S 8.0(-8) Na 2.0(-8) Mg 7.0(-9) Si 8.0(-9) P 3.0(-9) Cl 4.0(-9) Fe 3.0(-9)
Note-a A(B) = A B
We use the astrochemical kinetic code MAGICKAL to study new grain-surface/ice-mantle mechanisms that may effectively form complex organic molecules in cold environments. The model uses a three-phase approach, based on that described by Garrod & Pauly (2011) and Garrod (2013), in which the coupled gas-phase, grain/ice-surface and bulk-ice chemistry are simulated through the solution of a set of rate equations. Initial chemical abundances used in the model are shown in Table 1, with elemental values based on those used by Garrod (2013). The initial H/H 2 abundances are chosen to agree approximately with the steady-state values appropriate to our initial physical conditions, as determined by the chemical model. Bulk diffusion is treated as described by Garrod et al. (2017). Although the bulk ice is technically chemically active in this model, at the low temperatures employed in this work, diffusive reactions in general are negligibly slow, excluding processes involving H or H 2 diffusion. However, the addition of non-diffusive reactions to the model increases significantly the degree of chemical activity within the bulk.
The model uses the modified-rate treatment for grain-surface chemistry presented by Garrod (2008), which allows the stochastic behavior of the surface chemistry to be approximated; the back-diffusion treatment of Willis & Garrod (2017) is also used. Surface diffusion barriers (E_dif) are related to desorption (i.e. binding) energies (E_des) such that E_dif = 0.35 E_des for all molecular species, with bulk diffusion barriers taking values twice as high (Garrod 2013). However, a recent study estimated surface diffusion barriers for atomic species such as N and O to be E_dif = 0.55 E_des. We adopt a similar value, E_dif/E_des = 0.6, for all atomic species; the impact of this parameter is discussed in §4.3. These basic surface diffusion and desorption parameters are also adjusted according to the time-dependent abundance of H2 in the surface layer, following the method of Garrod & Pauly (2011). All diffusion is assumed to be thermal, with no tunneling component (see also Sec. 4.3).
Chemical desorption, whereby grain-surface reactions allow some fraction of their products to desorb into the gas phase, is treated using the RRK formulation of Garrod et al. (2007) with an efficiency factor a_RRK = 0.01.
The grain-surface/ice-mantle photodissociation rates used in MAGICKAL are based on the equivalent gas-phase rates, in the absence of other evidence (e.g. Garrod et al. 2008), and they likewise assume the same product branching ratios. Following the work of Kalvāns (2018), we adopt photodissociation rates on the grain-surfaces and in the ice mantles that are a factor of 3 smaller than those used for the gas phase. Photodissociation may be caused either by external UV, or by the induced UV field caused by cosmic-ray collisions with H 2 molecules, and both sources of dissociation are included in the model.
Methanol in cold clouds is mainly formed on the grain surfaces through ongoing hydrogenation of CO (Fuchs et al. 2009). The methanol production network used in the present work follows that implemented by Garrod (2013), and includes not only the forward conversion of CO to methanol but also the backward reactions of each intermediate species with H atoms.
The overall chemical network used here is based on that of Garrod et al. (2017), with a few exceptions. In particular, a new chemical species, CH 3 OCH 2 , has been added along with a set of associated gas-phase and grain-surface reactions/processes listed in Tables 11 (gas-phase) and 12 (solid-phase). This radical is a key precursor of DME in our new treatments (see §2.5). Its inclusion also allows the addition of a grain-surface H-abstraction reaction from DME, making this species consistent with methyl formate and acetaldehyde.
An additional reaction was included in the surface network, corresponding to H-abstraction from methane (CH 4 ) by an O atom (E A =4380 K; Herron & Huie 1973), as a means to ensure the fullest treatment for the CH 3 radical in the network.
The final change to the network is the adjustment of the products of CH 3 OH photo-desorption to be CH 3 +OH rather than CH 3 OH, roughly in line with the recent experimental study by Bertin et al. (2016).
Each of the generic non-diffusive mechanisms that we include in the model (as described below) is allowed to operate on the full network of grain-surface and ice-mantle reactions that are already included for the regular diffusive mechanism; reactive desorption, where appropriate, is also allowed to follow on from each of these, in the case of surface reactions. The full model therefore includes around 1600 surface and 1100 bulk-ice reaction processes for each new generic mechanism included in the model (excluding 3-BEF, see Section 2.5). All the grain-surface and bulk-ice reactions allowed in the network are presented in machine-readable format in Table 12. The rate formulations for diffusive and non-diffusive reaction mechanisms used in the model are described below.
New chemical mechanisms
The standard formulation, as per e.g. Hasegawa et al. (1992), for the treatment of a diffusive grain-surface chemical reaction (also known as the Langmuir-Hinshelwood or L-H mechanism) between species A and B is based on: the hopping rates of the two reactants, k_hop(A) and k_hop(B); the abundances of both species on the grains, N(A) and N(B), which here are expressed as the average number of atoms or molecules of that species present on an individual grain; the total number of binding sites on the grain surface, N_S, often assumed to be on the order of 1 million for canonically-sized grains; and an efficiency related to the activation energy barrier (if any), f_act(AB), which takes a value between zero and unity. Thus, the total rate of production (s^-1) may be expressed in the following form (which is arranged in such a way as to demonstrate its correspondence with the non-diffusive mechanisms discussed later):

R_diff(AB) = [k_hop(A) N(A)] [N(B)/N_S] f_act(AB) + [k_hop(B) N(B)] [N(A)/N_S] f_act(AB) .   (1)

In the first term, the expression within square brackets corresponds to the total rate at which particles of species A may hop to an adjacent surface binding site. The ratio N(B)/N_S gives the probability for each such hop to result in a meeting with a particle of species B. Multiplying these by the reaction efficiency gives the reaction rate associated solely with the diffusion of species A. The reaction rate associated with diffusion solely of species B is given by the second term. The total reaction rate is commonly expressed more succinctly thus:

R_diff(AB) = k_AB N(A) N(B) f_act(AB) ,   (2)

k_AB = [k_hop(A) + k_hop(B)] / N_S ,   (3)

which provides a more standard-looking second-order reaction rate. The rate coefficient, k_AB, may be further adjusted to take account of random walk, in which a reactant that has yet to meet a reaction partner may re-visit previous, unsuccessful sites. This effect typically reduces the overall reaction rate by no more than a factor of a few (e.g. Charnley 2005; Lohmar & Krug 2006; Lohmar et al. 2009; Willis & Garrod 2017). The individual hopping rate for some species i is assumed in this model to be a purely thermal mechanism, given by

k_hop(i) = ν(i) exp[-E_dif(i)/T] ,   (4)

where ν(i) is the characteristic vibrational frequency of species i and E_dif(i) is the barrier against diffusion in units of K.
The reaction efficiency factor for a reaction between species A and B considers the case where, if there is an activation energy barrier, the diffusion of either species away from the other may compete with the reaction process itself (see e.g. Garrod & Pauly 2011), thus:

f_act(AB) = ν_AB κ_AB / [ν_AB κ_AB + k_hop(A) + k_hop(B)] ,   (5)

where ν_AB is taken as the faster of either ν(A) or ν(B), and κ_AB is a Boltzmann factor or tunneling efficiency for the reaction (see Hasegawa et al. 1992). The denominator represents the total rate at which an event may occur when species A and B are in a position to react.
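To make the diffusive formulation concrete, the minimal Python sketch below evaluates Eqs. (1)-(5) for a hypothetical barrierless reaction between two heavy radicals at 10 K; the vibrational frequencies, binding energies, and surface populations are placeholder values, not those of the chemical network. The negligible rate it returns illustrates why purely diffusive chemistry produces so little COM formation at prestellar-core temperatures.

```python
import numpy as np

N_S = 1.0e6                                   # binding sites per grain

def k_hop(nu, E_dif, T):
    """Thermal hopping rate, Eq. (4)."""
    return nu * np.exp(-E_dif / T)

def f_act(nu_AB, kappa_AB, k_hop_A, k_hop_B):
    """Reaction efficiency with diffusive competition, Eq. (5)."""
    return nu_AB * kappa_AB / (nu_AB * kappa_AB + k_hop_A + k_hop_B)

def rate_LH(k_hop_A, N_A, k_hop_B, N_B, eff):
    """Two-term diffusive rate, Eq. (1) (equivalent to Eqs. 2-3)."""
    return (k_hop_A * N_A * N_B / N_S + k_hop_B * N_B * N_A / N_S) * eff

# Barrierless (kappa_AB = 1) radical-radical reaction at 10 K, with illustrative barriers:
kA = k_hop(1.0e12, 0.35 * 1175.0, 10.0)       # a CH3-like reactant
kB = k_hop(1.0e12, 0.35 * 1600.0, 10.0)       # an HCO-like reactant
print(rate_LH(kA, 0.01, kB, 0.01, f_act(1.0e12, 1.0, kA, kB)))   # effectively zero
```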
In order to formulate rates for non-diffusive reaction processes of whatever kind, the total rate must again be decomposed into its constituent parts, which, unlike in Eqs. (2) and (3), cannot generally be re-combined.
The generic form that we adopt for such processes is:

R_AB = R_comp(A) [N(B)/N_S] f_act(AB) + R_comp(B) [N(A)/N_S] f_act(AB) ,   (6)

where R_comp(i) is labeled the "completion rate" for the reaction, corresponding specifically to the "appearance" of species i. The determination of R_comp(i) values is explained in more detail in the following subsections for each of the specific reaction mechanisms considered. The above form is essentially the same as that given by Garrod (2019) for photodissociation-induced reactions. The correspondence of Eq. (1) (for diffusive reactions) with Eq. (6) is clear; the latter may be considered a more general description of a surface reaction rate, which can be applied to both diffusive and non-diffusive processes, according to the chosen form of R_comp(i). The regular diffusive mechanism would use R_comp(i) = k_hop(i) N(i). While the general form given in Eq. (6) is set up to describe grain-surface processes, it may easily be adapted for bulk-ice processes by replacing N_S with N_M, the total number of particles in the ice mantle, with N(i) now representing the number of atoms/molecules of species i present in the mantle. In this case, mantle-specific diffusion rates should be used.
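A minimal sketch of how Eq. (6) behaves: the same expression is evaluated for arbitrary completion rates, and choosing R_comp(i) = k_hop(i) N(i) recovers the diffusive rate of Eq. (1). All numbers below are illustrative placeholders.

```python
N_S = 1.0e6

def rate_generic(R_comp_A, N_A, R_comp_B, N_B, eff):
    """Eq. (6): R_AB = f_act * [R_comp(A) N(B)/N_S + R_comp(B) N(A)/N_S]."""
    return eff * (R_comp_A * N_B / N_S + R_comp_B * N_A / N_S)

# Diffusive limit: R_comp(i) = k_hop(i) * N(i) reproduces Eq. (1).
k_hop_A, N_A, k_hop_B, N_B, eff = 1.0e-6, 2.0, 3.0e-7, 50.0, 1.0
via_eq6 = rate_generic(k_hop_A * N_A, N_A, k_hop_B * N_B, N_B, eff)
via_eq1 = eff * (k_hop_A * N_A * N_B / N_S + k_hop_B * N_B * N_A / N_S)
assert abs(via_eq6 - via_eq1) < 1.0e-30
```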
The several non-diffusive processes incorporated into the chemical model, based on Eq. (6), are described below. Table 2 indicates which specific new mechanisms are included in each of the model setups tested in the present study.
Eley-Rideal reactions
The Eley-Rideal (E-R) reaction process occurs when some atom or molecule that is adsorbing/accreting from the gas phase onto the grain surface immediately encounters its grain-surface reaction partner as it adsorbs. Ruaud et al. (2015) considered a more intricate treatment than we use here, in which an adsorbing carbon atom could enter into a bound complex with a surface molecule. Here we adopt a more generalized treatment in which we do not differentiate between the binding properties of local surface species.
The rates for the Eley-Rideal process can easily be represented using Eq. (6). For reactions that have no activation-energy barrier (and for which f_act(AB) is therefore close to unity), this is achieved by setting R_comp(i) = R_acc(i), the total accretion rate of species i from the gas phase.
In the interests of completeness, it is necessary also to consider how to treat the kinetics of E-R reactions that have at least some modest activation-energy barrier. To this end, one may consider a hypothetical case where oxygen atoms are slowly accreting onto an otherwise pure CO surface. For purposes of illustration, it is initially assumed here that the surface-diffusion rates of both O and CO are negligible, with the result that f act (O + CO)=1.
The reaction O + CO → CO2 has an activation energy on the order of 1000 K; although the reaction will not be instant, in the absence of all other competing processes it should nevertheless occur on some finite timescale. Thus, the total timescale for the complete E-R process for an individual accreting O atom encountering and reacting with a surface CO molecule would be the sum of: (i) the accretion timescale of the oxygen atom onto the surface and (ii) the lifetime against its subsequent reaction with CO, i.e. 1/R_acc(O) + 1/(ν_O+CO κ_O+CO). This would provide a total completion rate associated with O accretion (to be employed in Eq. 6) of:

R_comp(O) = [ 1/R_acc(O) + 1/(ν_O+CO κ_O+CO) ]^-1 .   (7)

The completion rate R_comp(O) should be viewed as the rate at which the reaction process occurs successfully from the point of view of an individual accreting O atom, taking into account all sequential steps in the completion of the reaction process. Note that, in the full description, the probability of encountering a CO molecule on the surface, N(CO)/N_S, and the reaction efficiency, f_act(O + CO), should both remain outside of the formula for R_comp(O), as per Eq. (6). Neither of these values affects the actual timescale over which an individual O atom successfully accretes and reacts with a surface CO molecule; rather, they affect the probability that a single such event is successful.
This expression for R comp (O) could result in one of two important outcomes, depending on the relative rates of accretion and reaction. If accretion of O is very slow, and therefore reaction is comparatively fast, then R comp (O) ≃ R acc (O). Since N (CO)/N S ≃ 1, this means that the total Eley-Rideal production rate would initially be R O+CO ≃ R acc (O). In other words, the overall production rate of CO 2 is only limited by the rate of O accretion onto the surface, which is as one would expect for this case.
However, if reaction is slower than or comparable to the initiating accretion process, each accretion of O would be followed by some significant lag-time between accretion and reaction, which must be accounted for in the overall rate; the incorporation of the above expression for R comp (O) into Eq. (6) indeed does this. Without this expression, and instead using the value R comp (O) = R acc (O), the rate of conversion of O and CO into CO 2 would incorrectly be set to the accretion rate of O. The correct formulation gives a total E-R reaction rate that is less than the total accretion rate, allowing the build-up of O on the surface.
The final adjustment to the barrier-mediated E-R treatment comes into play when one or other surface diffusion rate is non-negligible. If diffusion of (say) O is indeed fast compared to reaction, then the reaction efficiency, f_act(O + CO), becomes small, which reduces the total rate of the reaction as per Eq. (6). However, the completion rate R_comp(O) must also be adjusted, to correspond only to the instances in which the O + CO reaction is actually successful. Successful reactions would have to occur before the diffusive separation of the two reactants could render the process unsuccessful, so the reaction timescale would become shorter, even though the reaction probability (i.e. f_act) were reduced. For this reason, diffusion rates must also be considered when formulating R_comp(O). Using a more general description for reactants A and B, the average lifetime against some event occurring (including reaction itself), once the reactants are in a position to react, may be described more fully by the expression:

τ_AB = [ ν_AB κ_AB + k_hop(A) + k_hop(B) ]^-1 ,   (8)

which can then be used in the general definitions:

R_comp(i) = [ 1/R_app(i) + τ_AB ]^-1 ,   (9)

where R_app(i) is the "appearance rate" of species i, which in the case of the E-R mechanism is simply R_acc(i).

It should be noted that once diffusion becomes significant, a model even as simple as the one used above to describe pure Eley-Rideal reaction processes would be incomplete; the standard diffusive reactions described by Eq. (1) must also be considered (as an entirely separate process) in such a model, to handle the occasions where accreting atoms (e.g. O) do not immediately react with their reaction partners (e.g. CO) before they diffuse away to another binding site, where they may also have the ability to react. In this case, the Eley-Rideal expressions would depend much less strongly on the time-lag effect described above, meaning that R_comp and R_acc would be similar in cases where diffusion of either reactant were relatively fast. In practical application to astrochemical models, for non-diffusive reactions whose reactants have slow or negligible diffusion rates, other processes could also act to interfere with the reaction; for example, the UV-induced dissociation of one or other reactant might occur on a shorter timescale than a very slow reaction, or a hydrogen atom might arrive to react with one or other reactant, before the reaction in question could occur. Competition from processes such as these would prevent very slow reactions (i.e. those with large activation barriers) from becoming important, even where diffusion of the reactants were negligible. A yet more complete treatment of reaction competition would include rates for these processes in Eqs. (5) and (7).

In our chemical model MAGICKAL, Eqs. (6)-(9) are used to set up Eley-Rideal versions of all allowed grain-surface reactions in the network. Because the E-R process is exclusively a surface process, no such processes in the ice mantle are included. Note that, when incorporating the Eley-Rideal mechanism into a model, no modification of the accretion (adsorption) rates themselves is required, since the Eley-Rideal mechanism does not replace any part of the adsorption rate. Rather, the E-R mechanism occurs immediately after adsorption, and therefore acts as a sink on the surface populations of the reactants, even though its rate is driven by the rate of arrival from the gas phase of one or other reactant.
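The limiting behavior of Eqs. (7)-(9) can be illustrated with a few lines of Python; the accretion rate and barrier-crossing efficiencies below are placeholder values, chosen only to show the accretion-limited and reaction-limited regimes discussed above.

```python
def tau_react(nu_AB, kappa_AB, k_hop_A, k_hop_B):
    """Eq. (8): lifetime against any event once the reactants are in contact."""
    return 1.0 / (nu_AB * kappa_AB + k_hop_A + k_hop_B)

def R_comp(R_app, tau):
    """Eq. (9): completion rate combining the appearance timescale with the in-contact lifetime."""
    return 1.0 / (1.0 / R_app + tau)

nu_AB = 1.0e12          # characteristic frequency (s^-1)
R_acc_O = 1.0e-9        # accretion rate of O per grain (s^-1), illustrative

# Efficient barrier crossing: completion is accretion-limited, R_comp ~ R_acc(O).
print(R_comp(R_acc_O, tau_react(nu_AB, 1.0e-8, 0.0, 0.0)))
# Very inefficient barrier crossing: the lag between accretion and reaction lowers R_comp.
print(R_comp(R_acc_O, tau_react(nu_AB, 1.0e-22, 0.0, 0.0)))
```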
Equations with the general form of Eqs. (6)-(9) are used also to formularize the remaining non-diffusive reaction mechanisms described below, where R app (i) is the only quantity to vary between processes. These formulations can be used equally well for processes with or without activation-energy barriers.
While Eq. (6) is still valid for the regular diffusive reaction mechanism with completion rates of R comp (i) = k hop (i) N (i), no adjustment following Eqs. (7)-(9) should be used, nor would be needed. The formulation required to model any lag-time for diffusive reactions is different from that of non-diffusive processes (because N (i)/N S and f act (AB) cannot remain outside the R comp (i) expression), but there are no circumstances in which such a lag-time would be significant.
Photodissociation-induced reactions
Garrod (2019) suggested that the omission of non-diffusive, photodissociation-induced reactions from models of interstellar ice chemistry may result in the photolytic production of COMs being severely underestimated. Past models of chemistry in star-forming regions (e.g. Garrod et al. 2008, 2017) have allowed photodissociation to contribute to the production of COMs in the surface and bulk-ice phases in only an indirect way, mediated by thermal diffusion. That is, photodissociation of various molecules produces radicals, which are separately allowed to react through the standard diffusive mechanism. Thus, at very low temperatures, no significant COM production is seen via radical-radical recombination, as diffusion of radicals is minimal. However, the presence of radicals in or upon the ice means that in some fraction of photodissociation events, the products may sometimes be formed with other reactive radicals already present nearby. In this case, the immediate products of photodissociation could react with the pre-existing radicals either without diffusion, or following some short-ranged, non-thermal diffusion process (possibly enabled by the excitation of the dissociation products).
Eqs. (6)-(9) can again be used to describe this process, with an appropriate choice for R_app(i), which is simply the total rate of production of photo-product i caused by all possible photodissociation processes:

R_app(i) = Σ_j R_j(i) ,   (10)

where R_j(i) is the production rate of i via an individual photodissociation process j. For the radical CH3, for example, this would include the photodissociation of CH3OH, CH4, and various larger molecules containing a methyl group. If one were to consider, for example, the production of dimethyl ether through this mechanism, an important reaction would be CH3 + CH3O → CH3OCH3, which is usually assumed to be barrierless. For this reaction, in Eq. (6), species A = CH3 and species B = CH3O; the appearance rate of CH3 would be as described above. The main contribution to the appearance rate of CH3O would likely be the photodissociation process CH3OH + hν → CH3O + H. The formulation used for the dimethyl ether-producing reaction simply states that some fraction of CH3 produced by photodissociation of various molecules in the ice will immediately meet a CH3O radical that it can react with, and vice versa.
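As a small illustration of Eq. (10), the sketch below sums hypothetical photodissociation channel rates to obtain the appearance rate of a given photoproduct; the channel list and rate values are invented for this example and are not taken from the network.

```python
# Hypothetical photodissociation channels: (parent, products) -> rate per grain (s^-1).
pd_channels = {
    ("CH3OH", ("CH3", "OH")): 2.0e-12,
    ("CH3OH", ("CH3O", "H")): 3.0e-12,
    ("CH4",   ("CH3", "H")):  1.0e-12,
}

def R_app_photo(species):
    """Eq. (10): sum the rates of all channels that produce the requested species."""
    return sum(rate for (_, products), rate in pd_channels.items() if species in products)

print(R_app_photo("CH3"), R_app_photo("CH3O"))   # 3.0e-12 and 3.0e-12 in this example
```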
Reactions affected by this PD-induced mechanism need not only be radical-radical recombination reactions; the production, via photodissociation, of atomic H in close proximity to CO, for example, could enhance the rate of the reaction H + CO → HCO, which has an activation-energy barrier. The treatment of barrier-mediated reactions in the generic Eqs. (7)-(9) is used again for this purpose.
This treatment does not take into account any explicit consideration of excitation of the photo-products, which could also enhance reaction rates (as per e.g. Shingledecker et al. 2018, in the case of cosmic-ray induced dissociation). It is also implicitly assumed that the rates of photodissociation used in the network represent the rates at which dissociation occurs without immediate recombination of those same photo-products.
It is trivial to adapt the equations used for surface reactions to deal instead with ice-mantle related processes, and this is indeed implemented in the simulations presented here.
Three-body reactions
The laboratory results of authors such as Fedoseev et al. (2015, 2017), in which H, CO and/or other species are deposited onto a cold surface, indicate that surface reactions between radicals of low mobility may produce COMs, even at low temperatures and without any energetic processing. The suggested explanation is that pairs of radical species such as HCO may, on occasion, be formed in close proximity to each other, allowing them to react either immediately or after a very small number of surface hops. The HCO radicals themselves would initially be formed through a more typical diffusive (Langmuir-Hinshelwood) process or through an Eley-Rideal process, via the barrier-mediated reaction of H and CO. Fedoseev et al. (2015) suggest that reactions of HCO with CO may also be active, which would require no diffusion of HCO at all, if the HCO itself is formed through the reaction of atomic H and CO on top of a CO-rich surface.
In a similar vein, Garrod & Pauly (2011) found, using chemical kinetics modeling, that the interstellar abundance of solid-phase CO 2 could be explained by the reaction H + O → OH occurring on a CO-rich dust-grain ice surface. This allows the newly-produced OH to react rapidly with the CO without any intervening thermal diffusion. They introduced into their models a new reaction rate specifically for this process that was functionally similar to Eq. (6).
Here, we use Eqs. (6)-(9) to calculate rates for what may be termed three-body reactions, which include the above examples. This approach is extended to all grain-surface reactions in the network, with a similar treatment for bulkice processes. To do this, another dedicated expression for the appearance rate R app (i) to be used in Eqs. (8) and (9) must be constructed specifically for three-body reactions. Eq. (10) can again be used, this time where R j (i) is the production rate of i (as determined using Eq. 1) resulting from any diffusive (Langmuir-Hinshelwood) reaction, or for any non-diffusive Eley-Rideal or photodissociation-induced reaction, whose rates are described above. Thus, R app (i) includes the production rates of i for all reactive mechanisms j that could lead to a subsequent reaction. From a technical point of view, the rates of all such reaction processes must therefore be calculated in advance of the calculations for any three-body reactions.
Using the example of the process considered by Garrod & Pauly (2011), the reaction under consideration as a threebody process would be OH + CO → CO 2 ; thus, in Eq. (6), A = OH and B = CO. There are several reactions in our network that could produce OH, but the main one is indeed H + O → OH. The sum of the production rates of OH from all of these reactions would comprise R app (OH). The appearance rate for CO would also be constructed from the CO production rates of all reactions leading to its formation.
In this way, CO 2 could be formed via a three-body reaction process in which, for example, an H and an O atom diffuse on a surface until they happen to meet in a binding site where CO is in close proximity, they react to form OH in the presence of the CO, and the OH and CO then subsequently react with no further diffusion required. Alternatively, an oxygen atom could be situated in contact with a CO molecule when an incoming H atom from the gas phase initiates an Eley-Rideal process, leading to OH formation, followed by reaction with the CO. Or, a CH 4 molecule in close proximity to a CO molecule and an O atom could be dissociated to H and CH 3 , with the H quickly reacting with the O atom to produce OH, which would then react with CO. The prescription above would allow many such scenarios to be included in the overall production rate of CO 2 , including others relating to the formation of a CO molecule in close proximity to an OH radical. The adoption of this generalized process means that the special-case prescription for the OH + CO reaction introduced by Garrod & Pauly (2011) is no longer required.
In the kind of chemical system considered by Fedoseev et al. (2015, 2017), in which H and CO are deposited onto a surface, complex molecules could be built up via three-body reactions between HCO radicals, initiated either by E-R or L-H production of HCO. Note that the new treatment does not explicitly differentiate between the case where the newly-formed reactant is immediately in contact with the next reaction partner, and the case where it has sufficient excess energy to allow it to undergo a thermal hop in order to find its next reaction partner. It is in fact highly probable that the products of exothermic reactions (which includes virtually every surface reaction included in the network) would have sufficient energy to allow some degree of non-thermal hopping immediately following formation. The possibility of such energy also allowing barrier-mediated three-body reactions to occur more rapidly is considered in the next subsection.
To go yet a stage further, one may imagine a scenario in which the products of three-body reactions themselves could also be involved in subsequent non-diffusive three-body reactions. This possibility is also included in our model, using the same equations as before, with appearance rates defined by:

R_app(i) = Σ_j R_j,3B(i) ,   (11)

where R_j,3B(i) is the production rate of i caused by the three-body reaction labelled j. Although these appearance rates will usually be lower than those used in the first round of three-body reactions, the second three-body reaction could be the most important for certain species if they have no more dominant production mechanism. In the present models, we allow a total of three rounds of three-body reactions to take place. Although this could in theory be increased to any arbitrary number of rounds, the influence of those processes rapidly diminishes beyond the second round.
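The structure of this iteration can be sketched as follows: the rates returned by one round (computed via Eq. 6) become the appearance rates (Eq. 11) for the next round. The reaction list, surface populations, and initial appearance rates are placeholders, and barriers are ignored (f_act = 1); the point is only the bookkeeping and its rapidly diminishing returns.

```python
N_S = 1.0e6

def three_body_round(reactions, R_app, N):
    """One round of Eq. (6) driven by the appearance rates from the previous round."""
    rates = {}
    for A, B, product in reactions:
        rates[product] = (R_app.get(A, 0.0) * N.get(B, 0.0) / N_S
                          + R_app.get(B, 0.0) * N.get(A, 0.0) / N_S)
    return rates

reactions = [("OH", "CO", "CO2"), ("HCO", "CH3O", "CH3OCHO")]
N = {"CO": 1.0e5, "CH3O": 5.0, "OH": 0.1, "HCO": 0.5}
R_app = {"OH": 1.0e-9, "HCO": 5.0e-10, "CH3O": 2.0e-10}   # from diffusive, E-R and PD-induced production
for rnd in range(3):                                       # three rounds, as in the model
    rates = three_body_round(reactions, R_app, N)
    print(rnd + 1, rates)
    R_app = rates      # Eq. (11): this round's products seed the next (already zero here by round 2)
```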
As with the photodissociation-induced reactions, a similar method is employed also for reactions in the bulk ice. In this case, the appearance rates of reactants in the first round of three-body reactions would generally all be products of photodissociation-induced reactions, as the Eley-Rideal process is exclusively a surface mechanism, while the thermal diffusion of all species in the bulk -excluding arguably H and H 2 -would be very slow at the temperatures considered in the simulations presented here.
Finally, we note that a method for treating what we label three-body reactions was recently employed by Dulieu et al. (2019), for a single reaction between H 2 CO and newly-formed H 2 NO. Those authors constructed a separate chemical species to represent H 2 NO that is formed in contact with H 2 CO on a surface, then included a special reaction in their network that occurs at a rate equal to the vibrational collision frequency of the two contiguous species. Although apparently different from our approach, such a treatment should also provide the correct result; this is because, assuming that there are no competing processes with the reaction in question, the abundance of the special H 2 NO species is determined solely by its formation rate and its one destruction rate. In that case, both the specific abundance of the special H 2 NO species and its reaction rate coefficient are cancelled out in the overall rate calculation, giving a total production rate for the reaction that is equal to the rate that we employ in the present work. In such a case, therefore, the chosen rate coefficient becomes immaterial to the result. It is presumably possible to set up a large network of such reactions for newly-formed species; however, the requirement to include a new chemical species for each reactant pair would likely make this method prohibitive for large networks.
Specific reactions
Although the full model includes a range of three-body (3-B) processes capable of producing acetaldehyde (CH 3 CHO), methyl formate (CH 3 OCHO) and dimethyl ether (CH 3 OCH 3 ), the dominant mechanisms for each (based on model results) are presented below.
For acetaldehyde, the most important three-body mechanism is made up of a pair of sequential two-body processes as follows:

H + CH2 → CH3 ,   (12)
CH3 + HCO → CH3CHO .   (13)

The most important sequential mechanisms for the other two COMs are:

H + CH2 → CH3 ,   (14)
CH3 + CH3O → CH3OCH3 ,   (15)
H + CO → HCO ,   (16)
HCO + CH3O → CH3OCHO .   (17)

Each of these reaction pairs involves the addition of radicals in the second step, and two of them involve the addition of atomic H to a radical in the first step. The production of the COMs through these mechanisms should therefore have a strong dependence on the instantaneous abundances of short-lived reactive radicals.
The full network used in the models includes three-body versions of all the reactions used for regular diffusive chemistry, for all surface and mantle species.
Excited three-body reactions
Besides the three-body reaction process described in §2.4, we also consider a mechanism whereby the initiating reaction produces a product that is sufficiently excited that it is able to overcome the activation energy barrier to a subsequent reaction. This is of particular interest if it may allow a reaction with either CO or H2CO - both abundant surface species - that would result in the production of a precursor to an important O-bearing COM.

Figure 1. Illustration of alternative mechanisms for acetaldehyde formation: (a) the regular diffusive grain-surface reaction between radicals CH3 and HCO, and (b) a postulated three-body excited-formation mechanism involving H, CH2 and CO, followed by a regular diffusive reaction between the radical product, CH3CO, and another H atom. In case (a), reaction is slow at low temperatures. In case (b), a rapid initiating reaction between H and CH2, with an exothermicity of 4.80 eV, provides enough energy to the product CH3 that it may immediately overcome the barrier to reaction with a neighboring CO molecule. This produces a precursor to acetaldehyde, CH3CO, that can easily be hydrogenated by a mobile H atom to form CH3CHO.
In this picture, the energy of formation released by a reaction is held in the vibrational excitation of the product species. That excited species can then immediately react with a contiguous reaction partner. Figure 1 shows the formation of CH3CHO via this three-body excited formation (3-BEF) mechanism as an example. The original reaction network included the direct association of CH3 and HCO, mediated by radical diffusion, as the main formation process for surface CH3CHO. The chance of forming CH3CHO in that purely diffusive model is small, because it would require immobile heavy radicals to meet at low temperature. The new 3-body process described above, as well as the photodissociation-induced and Eley-Rideal processes, would allow this reaction to occur non-diffusively. However, the excited production of CH3 could also allow reaction with abundant surface CO. In the first step, an H atom meets and then reacts with a CH2 radical that is adjacent to a CO molecule. This reaction is exothermic by 4.80 eV (∼55,700 K), sufficient to overcome the barrier to the CH3 + CO reaction (nominally 2,870 K, see below). Once this follow-up reaction has occurred, the product CH3CO, which is a precursor to acetaldehyde, can easily be converted into a stable species via hydrogenation by another H atom. The entire process is described as follows:

H + CH2 → CH3* ,
CH3* + CO → CH3CO ,
CH3CO + H → CH3CHO ,   (18)

where an asterisk indicates an excited species. Similar reactions for the production of CH3OCH3 and CH3OCHO through the 3-BEF process are as follows:

H + CH2 → CH3* ,
CH3* + H2CO → CH3OCH2 ,
CH3OCH2 + H → CH3OCH3 ,   (19)

H + H2CO → CH3O* ,
CH3O* + CO → CH3OCO ,
CH3OCO + H → CH3OCHO .   (20)

The 3-BEF process technically concerns only the first two reactions out of the three, in each case; the final hydrogen-addition step most typically occurs through the usual Langmuir-Hinshelwood mechanism that is already included in the model, although non-diffusive mechanisms may also act to add the final H atom.
Table 2. Non-diffusive mechanisms included in model setups. Columns: Model; Eley-Rideal; Photodissociation-Induced; Three-body; Three-body Excited Formation; Adjusted 3-BEF efficiency for MF.

Due to the more complicated requirement to consider the energy of formation in each case, the three new 3-BEF processes shown above were individually coded into the model, rather than constructing a generic mechanism. For this reason, the 3-BEF mechanism is included only in the first round of three-body processes. The production rate of the standard diffusive process for the initiating reaction in each case is responsible for the entire value of R_app(i), and only one term is required in Eq. (6). Crucially, the reaction efficiency for the second reaction in the process (i.e. the reaction whose rate is actually being calculated with the 3-BEF method) is initially set to unity, to signify that the activation energy barrier is immediately overcome.
Unfortunately, the activation energies of the above reactions between the radicals and CO or H 2 CO are not well constrained. The chemical network of Garrod (2013) included the CH 3 + CO and CH 3 O + CO reactions, adopting a generic activation energy barrier of 2,500 K, based on the approximate value for the equivalent reactions of atomic H with CO and H 2 CO. A reaction between CH 3 and H 2 CO was also present in that network, with products CH 4 and HCO, and E A =4440 K; this reaction is retained here in addition to the new pathway.
For the present network, we calculate an approximate activation energy of 2,870 K for the CH 3 + CO reaction using the Evans-Polanyi (E-P) relation (e.g. Dean and Bozzelli 2000); this would be well below the energy produced by the initiating reaction (∼55,700 K). Due to the lack of comparable reactions for a similar E-P estimate for the activation energy of the CH 3 + H 2 CO reaction, the same value of 2,870 K might be assumed, placing it also comfortably less than the energy produced by the H + CH 2 → CH 3 reaction. Few determinations exist for the activation energy of the CH 3 O + CO reaction, although an experiment places it at 3,967 K (Huynh & Violi 2008, for temperatures 300-2500 K). This also is less than the energy produced by the initiating reaction, H + H 2 CO → CH 3 O (∼10,200 K). In any case, the activation energies involved in each of the three reactions mentioned here are sufficiently large that they should be of no importance without the inclusion of the 3-BEF mechanism to provide the energy required, while the 3-BEF mechanism itself is assumed to go at maximum efficiency. However, the latter assumption may not necessarily be accurate, depending on the form of the energy released by the reaction, and whether there is any substantial loss prior to reaction actually occurring (see §3.3).
Physical conditions
MAGICKAL is a single-point model, but a spatially-dependent picture of the chemistry of L1544 can be achieved by running a set of models with different physical conditions at specific positions within the prestellar core. Recently, Chacón-Tanarro et al. (2019) determined parameterized density and temperature profiles for L1544 as functions of radius r (measured in arcseconds), considering the optical properties of the dust grains. Based on this density structure, we determine 15 densities at which the chemical models are to be run, ranging logarithmically from the minimum of n_H = 4.4 × 10^4 cm^-3 (∼11,000 AU) to the maximum of n_H = 3.2 × 10^6 cm^-3 (core center). An additional eight density points are then placed to achieve better spatial resolution toward the core center (where the density profile is relatively flat). The appropriate temperature for each point is then chosen from the profile, based on density/radius.
In order to take account of the gradual collapse of the gas into this final density profile, the density used for each chemical model in the set is independently evolved using a simple modified free-fall collapse treatment. (The radial position of each model point is thus not explicitly considered during this evolution.) Each point begins with a gas density of n_H,0 = 3 × 10^3 cm^-3, with an initial H/H2 ratio of 5 × 10^-4. The density evolution stops once each model reaches its specified final density, resulting in a marginally different evolutionary time for each density point. The collapse treatment is based on that used by Rawlings et al. (1992). Magnetic fields can play an important role in the equilibrium of dense cores, significantly slowing down the collapse process. Estimating an accurate collapse timescale is challenging, although the ratio of the ambipolar diffusion time (τ_ap) to the free-fall time (τ_ff) is typically assumed to be τ_ap/τ_ff ∼ 10 (see, e.g., Hennebelle & Inutsuka 2019). Thus, a magnetic retardation factor for collapse, B, is adopted here to control the collapse timescale. This parameter takes a value between 0 (static) and 1 (free-fall) and is technically density-dependent. In our model, this value is set for simplicity to 0.3 for all density points, which results in a collapse timescale approximately 3 times longer than the free-fall timescale. A time of a little over 3 × 10^6 years is therefore required to reach the final density at each point, although much of this time is spent under relatively low-density conditions as the collapse gradually ramps up.
The density evolution for each model is accompanied by increasing visual extinction, which evolves according to the expression A_V = A_V,0 (n_H/n_H,0)^(2/3) (Garrod & Pauly 2011). The initial extinction values are set such that the values at the end of the chemical model runs correspond to the linear integration of the density profile, converted to visual extinction using the relationship N_H = 1.6 × 10^21 A_V cm^-2. An additional background visual extinction of 2 is added for all positions and times, under the assumption that L1544 is embedded in a molecular cloud (e.g. Vasyunin et al. 2017). In contrast to density and visual extinction, the temperature is held steady throughout the chemical evolution for each density point, with the same value adopted for both the gas and the dust. Temperatures range from approximately 8 to 14 K depending on radius, which is consistent with observational constraints (Crapsi et al. 2007).
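For illustration, the physical evolution applied to each density point can be sketched as below, combining a retarded free-fall collapse with the A_V ∝ n_H^(2/3) scaling quoted above. The collapse equation follows the commonly used Rawlings et al. (1992) parameterization, and the initial extinction value is arbitrary; both are assumptions of this sketch rather than a transcription of the model code.

```python
import numpy as np
from scipy.integrate import solve_ivp

G_CGS, M_H = 6.674e-8, 1.67e-24        # gravitational constant and H mass (cgs)
N0, B = 3.0e3, 0.3                     # initial n_H (cm^-3) and magnetic retardation factor

def dndt(t, n):
    # Retarded free-fall collapse (Rawlings et al. 1992 form, assumed here for illustration).
    brace = 24.0 * np.pi * G_CGS * M_H * N0 * ((n / N0) ** (1.0 / 3.0) - 1.0)
    return B * (n ** 4 / N0) ** (1.0 / 3.0) * np.sqrt(np.maximum(brace, 0.0))

def reached_final_density(t, n):
    return n[0] - 3.2e6                # stop at the core-center final density
reached_final_density.terminal = True

sol = solve_ivp(dndt, [0.0, 3.0e14], [1.001 * N0], events=reached_final_density, rtol=1.0e-8)
print(sol.t[-1] / 3.156e7, "yr to reach the final density")    # roughly 3e6 yr for B = 0.3
A_V = 2.0 + 0.1 * (sol.y[0, -1] / N0) ** (2.0 / 3.0)           # A_V,0 = 0.1 is arbitrary; +2 background
print(A_V, "mag final visual extinction in this sketch")
```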
RESULTS
The time-evolution of the fractional abundances at the core-center position is presented in Fig. 2 (gas phase) and Fig. 3 (solid phase), for each of the main chemical model setups. In the control model, no new mechanisms are added. In each of the other model setups, a single new mechanism is added to the control model setup, except for model 3-B+3-BEF, in which it is assumed that the 3-BEF mechanism could not occur without the 3-B mechanism also being active.
As seen in Fig. 2, every new mechanism introduced here, excluding E-R, significantly increases the abundances of CH 3 OCH 3 and CH 3 OCHO in the gas phase during core evolution, while CH 3 CHO is only substantially increased via 3-BEF. However, it should be noted that the increased fractional abundances rapidly drop as density increases toward the end-time of all the models, mostly converging to the control-model values. This indicates that the new mechanisms may hardly affect the gas-phase COMs at the core center, but may be more effective at more distant radii (i.e. lower density regions); this would nevertheless result in higher abundances toward the core-center position when averaged over the line of sight to include lower-density gas.
The presence of the COMs in the gas phase following their formation on grain surfaces is the result of chemical desorption. All surface reactions that form a single product have a small possibility of returning that product to the gas phase. The upper limit on the ejection probability per reaction is 1%.
Similar to the gas phase, every mechanism excluding E-R significantly increases the solid-phase populations of the COMs (Fig. 3). Note that the solid-phase population of CH 3 CHO, whose gas-phase abundance is only strongly increased by the 3-BEF mechanism, increases even in the 3-B (only) and PD-Induced models. The 3-B and 3-B+3-BEF models converge to essentially the same value at the end-time. Dimethyl ether in the mantle is produced in similar quantities by each of the three effective mechanisms, while 3-BEF and then 3-B are more important than PD-Induced formation in the case of methyl formate. The E-R mechanism produces only marginal increases in mantle abundances of acetaldehyde and methyl formate. The increase in dimethyl ether production caused by E-R is around an order of magnitude throughout most of the evolution, although this is dwarfed by the effects of the other mechanisms. Fig. 4 shows the radial distribution of gas-phase COM abundances using the full radius-density-temperature profile model results; abundances shown correspond to the end-time abundances, at which the final density profile is achieved.
The observational values that are also indicated in the figure are for the core-center position; however, those observations correspond to a beam of radius ∼1900 AU, and would also sample a range of physical conditions along the line of sight -some caution should therefore be taken in directly comparing them with the local fractional abundance values.
Figure 3. Time evolution of the abundances in the dust-grain ice mantles of the three O-bearing COMs at the core center, for models using each of the new mechanisms. The abundance from the control model is denoted as a black dashed line. Diamonds indicate the abundances at the end of each model run. The gas density is included for reference, indicated by the right-hand vertical axis and the similarly colored line.

It may be seen that the general trend, even for the control model, is for COM abundances to increase toward greater radii. The 3-B+3-BEF model produces maximum molecular abundances for acetaldehyde and methyl formate similar to the observational values. For the latter molecule, the modeled fractional abundance exceeds the observational values at radii greater than around 2500 AU, although the absolute gas density begins to fall off at these positions, so that they should contribute less to the total column density of the molecule. At the largest radii modeled, the 3-B (only) model produces methyl formate sufficient to match the observed abundance (although, again, perhaps with little contribution to total column density). Acetaldehyde also reaches its peak abundance at large radii, although it reaches a similar abundance at smaller radii.
For each of the new models, the local fractional abundance of CH 3 OCH 3 is greatest at positions away from the core center, but a significant increase in abundance is achieved at almost all positions for every model, versus the control. However, the maximum value achieved (for the 3-B model) is still at least two orders of magnitude lower than the observations, both in the inner regions and at the outer edge. Curiously, for dimethyl ether, the most effective model is the 3-B (only) model, whereas the 3-B+3-BEF model is the most productive for the other two COMs.
In each of the 3-B, 3-B+3-BEF and PD-Induced models, acetaldehyde and dimethyl ether abundances show a prominent peak feature at around 2000 AU. This feature is also present for methyl formate in the PD-Induced and 3-B (only) models. Observations of L1544 by Jiménez-Serra et al. (2016) show higher fractional abundances of CH3CHO and CH3OCH3 toward an off-center position at r ≃ 4000 AU, versus those at the core-center, with CH3OCHO arguably showing similar behavior. Our 3-B, 3-B+3-BEF and PD-Induced models all show this general behavior (methyl formate in the 3-B+3-BEF model notwithstanding), albeit at a somewhat different radius from the observations. The origin of this peak and its similarity to the observations are discussed in more detail in § 4.2.

Figure 5 shows the radial distribution of the ice-mantle abundances at the end-time of each model, plotted as a fraction of the water ice abundance at each position. In contrast with the gas phase, all the new mechanisms but E-R significantly increase the solid-phase abundances of COMs at all radii. This is partly because the ice mantle preserves the earlier surface layers during the evolution of the prestellar core, when significant enhancement of the COM abundances in the gas phase is found (see Fig. 2), which is itself caused by increased efficiency in the production of COMs on the grain surfaces. However, the PD-Induced model permits substantial ongoing processing of mantle material itself.
While COM production is not especially important in the control or E-R models, the others attain substantial COM abundances in the ices, comparable with gas-phase values observed in hot molecular cores. The maximum abundance achieved by methyl formate in the 3-B+3-BEF model is close to 1% of the water abundance at the core center, i.e. around 10^-6 with respect to total hydrogen. This value may thus be too high to agree with observations of hot cores/corinos, if the abundances achieved in the prestellar stage should be preserved intact to the later, warmer stages of evolution. It is also noteworthy that the COM fraction in the ices is in general somewhat greater at larger radii, although the dimethyl ether abundance is fairly stable through the core in the 3-B and 3-B+3-BEF models, and methyl formate is also stable throughout in the 3-B+3-BEF model.

Figure 6. Radial distribution of the ice abundances of the main ice constituents. The abundance of CO and CO2 is denoted as a black solid and a black dashed line, respectively. Blue, green, and red solid lines represent the abundances of CH4, NH3, and CH3OH, respectively.

Figure 6 shows the final radial distribution of the main ice constituents as a fraction of the local water ice abundance, for the control model and for the 3-B+3-BEF model; the latter is taken as representative of all the new models, due to their similarity, except for the E-R model, which is rather similar to the control. The absolute abundance profile of water ice is fairly constant across the core. Based on the absolute abundance profiles, column densities for each ice species are derived by integrating along the line of sight (without beam convolution); the resulting abundances with respect to the H2O ice column density are summarized in Table 3. Comparable observational ice abundances are also shown, taken from Boogert et al. (2015), who provided median values with respect to H2O, along with the full range of the observed abundances (from subscript to superscript value). Both of our model setups produce a centrally-peaked distribution of CH3OH ice, while CO ice is approximately as abundant as H2O ice, especially toward inner radii where the most extreme depletion occurs. With the new mechanisms, a more gently-sloped distribution appears, and a better match with the observational abundances of CO and CO2 is achieved. The other ice components in the 3-B+3-BEF model are within the observational range as well.
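The line-of-sight integration used to turn these radial profiles into column densities can be sketched as below for a spherically symmetric core; the density and abundance profiles are simple placeholders standing in for the model output, and the routine is illustrative rather than the model's own.

```python
import numpy as np

AU = 1.496e13                                      # cm

def column_density(radii_au, n_H, x_X, impact_au):
    """Integrate n_H * x_X along a chord at the given impact parameter (spherical symmetry)."""
    s = np.linspace(0.0, np.sqrt(radii_au[-1] ** 2 - impact_au ** 2), 400)   # half-chord path (AU)
    r = np.sqrt(s ** 2 + impact_au ** 2)
    integrand = np.interp(r, radii_au, n_H * x_X)
    return 2.0 * np.trapz(integrand, s * AU)       # factor 2 for the two halves of the chord

radii = np.linspace(100.0, 11000.0, 200)           # AU
n_H = 3.2e6 / (1.0 + (radii / 2000.0) ** 2)        # placeholder density profile (cm^-3)
x_ice = np.full_like(radii, 1.0e-6)                # placeholder fractional ice abundance
print(column_density(radii, n_H, x_ice, impact_au=0.0))
```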
Column density analysis
Observational abundances may not accurately represent the true local abundances within a source. This is because the observational intensities are not only averaged over the line of sight, but are also affected by the excitation characteristics of each observed species, and by the response of the telescope beam. Considering this, it is indispensable to perform spectral simulations for better comparison with the observations. The spectral model used here simulates molecular lines (of COMs) that are expected to be observable, and uses chemical abundances shown in Fig. 4 as the underlying distribution. The 1-D chemical/physical model is treated as spherically symmetric, so that molecular emission can be simulated along lines of sight passing through the core at various offsets (including directly on-source), assuming local thermodynamic equilibrium. Each line of sight passes through a range of gas densities, temperatures and chemical abundances. The resulting 2-D simulated intensity maps for each frequency channel are then convolved with a Gaussian telescope beam of appropriate size, dependent on frequency and the telescope in question. (For a more detailed description of the spectral model, see Garrod 2013). The FWHM of the molecular lines is assumed here to be 1 km/s, with a spectral resolution of 250 kHz, although the simulations are quite insensitive to the precise choice of parameters.
The integrated intensities of the ensemble of molecular lines are used in a rotational diagram analysis (Goldsmith & Langer 1999) to obtain column densities (N_tot) and rotational temperatures (T_rot) for each molecule. These quantities can then be compared directly with those obtained from observations. Beam sizes were assumed to be ∼28′′-31′′, depending on the transition frequency.
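For reference, a rotational-diagram fit of this kind can be sketched in a few lines: the optically thin upper-state column density of each line is derived from its integrated intensity, and a straight-line fit of ln(N_u/g_u) against E_u gives T_rot (from the slope) and N_tot (from the intercept, via the partition function). The line list and partition-function value below are hypothetical placeholders, not the transitions of Table 4.

```python
import numpy as np

K_B, H, C = 1.3807e-16, 6.6261e-27, 2.9979e10     # cgs constants

def N_upper(W_K_kms, freq_GHz, A_ul):
    """Optically thin upper-state column density (cm^-2) from integrated intensity W (K km s^-1)."""
    return 8.0 * np.pi * K_B * (freq_GHz * 1.0e9) ** 2 * (W_K_kms * 1.0e5) / (H * C ** 3 * A_ul)

# Hypothetical lines: (W [K km/s], frequency [GHz], A_ul [s^-1], g_u, E_u [K])
lines = [(0.020, 95.9, 3.0e-5, 15, 13.9), (0.015, 98.9, 3.3e-5, 17, 16.5), (0.008, 112.2, 5.0e-5, 21, 21.1)]

E_u = np.array([l[4] for l in lines])
ln_Nu_gu = np.log([N_upper(l[0], l[1], l[2]) / l[3] for l in lines])
slope, intercept = np.polyfit(E_u, ln_Nu_gu, 1)
T_rot = -1.0 / slope
Q_Trot = 70.0                                      # partition function at T_rot (placeholder)
N_tot = Q_Trot * np.exp(intercept)
print(T_rot, N_tot)
```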
The distance to the model prestellar core is assumed to be 140 pc (Elias 1978). The radiative transfer and rotational diagram analysis is performed toward the on-source position, and toward two offset positions: (i) the peak of the COM abundances (2000 AU), and (ii) the low-density outer shell (9000 AU). By considering these three positions, we may compare the modeled COM peaks with the observational ones, and determine the dependence of the chemical reactions on the local physical conditions in the prestellar core.
One strategy to apply this radiative transfer and rotational diagram technique would be to simulate precisely the same molecular lines used in individual observational datasets for L1544. However, since the present aim is to determine a well-defined column density (and rotational temperature) based on the models, with which observed column densities may be directly compared, we instead choose a selection of lines that may plausibly be (or indeed have been) detected toward cold sources, and which include a range of upper energy levels. Emission lines of CH3CHO and CH3OCHO recently detected toward the cold dark cloud B5 (Taquet et al. 2017) are chosen for this analysis. While Taquet et al. (2017) detected a relatively large number of molecular lines for CH3CHO and CH3OCHO, only four transitions of CH3OCH3 with a limited range of E_up (10-11 K) were detected by those authors. Our adoption of only those lines could therefore cause substantial uncertainty in the determination of N_tot(CH3OCH3). For this reason, we choose eight bright (i.e. high A_ij) AA-transitions of CH3OCH3 with E_up ranging from 8-19 K, using the Splatalogue web tool. The spectroscopic data originate from the JPL line list (Bauder et al. 1976; Lovas et al. 1979; Neustock et al. 1990; Ilyushin et al. 2009; Plummer et al. 1984); the COM transitions considered in this analysis are listed in Table 4.

Table 4 (note). Acetaldehyde line data are from the JPL catalogue, based on the data set of Bauder et al. (1976); dimethyl ether line data are based on the data sets of Lovas et al. (1979) and Neustock et al. (1990); methyl formate line data are based on the data sets of Ilyushin et al. (2009) and Plummer et al. (1984). The representative molecular transition used for the normalized convolved intensity analysis in § 3.4 is denoted with *.
Column densities of O-bearing COMs toward the core center
Tables 5-7 (core center, 2000 AU, 9000 AU) compare the molecular column densities obtained from the rotational diagram (RD) analysis of the different chemical models with observational values from the literature; observational errors and rotational diagram line-fitting error estimates are given in parentheses. Figure 7 shows the molecular column densities for each model at three different positions as histograms. The observed value (a solid horizontal line) and its error bounds (dashed horizontal lines) are presented together for comparison. While every chemical model introduced here significantly underproduces CH3OCH3, both the 3-B and 3-B+3-BEF models result in meaningful differences from the control model for the other two COMs at the core center (Table 5). One thing to note is that the COMs are more actively formed via 3-BEF than solely by the 3-B mechanism. For example, while 3-B+3-BEF significantly increases the column density of CH3CHO as well as CH3OCHO, 3-B substantially increases CH3OCHO only.

Figure 8. Time evolution of the fractional ice composition of the reactants on the grain surface related to the 3-B+3-BEF mechanisms forming CH3OCH3 (red lines) and CH3OCHO (black lines). Note that the abundances shown refer specifically to species on the outer grain/ice surface and not within the ice mantles. A fractional abundance of ∼1.3 × 10^-12 corresponds to one particle per grain.
In either the 3-B or 3-BEF models, the key quantities through which the production rates of COMs (on the grains or in the gas phase) may be understood are the surface abundance of reactants, and the production (i.e. appearance) rates of their reaction partners. The latter quantity is an explicit component of the new expressions for non-diffusive processes, whereas in the regular L-H formulation it does not appear. The higher formation rates in the 3-BEF model can be explained by the fact that this mechanism involves the addition of radicals to stable compounds (which are thus more abundant on the grain surface) in the second step of the consecutive reaction chain (Eqs. 18-20), while the 3-B process involves the addition of sparse radicals (Eqs. 13, 15, and 17).
The greater rate of formation of CH3OCHO over that of CH3CHO in either the 3-B or 3-BEF model can also be understood in the same context. In the 3-B model, the reactants CH3 and CH3O are technically competing with each other to form either CH3CHO or CH3OCHO by reacting with HCO radicals on the grain surface. As seen in Fig. 8, the fractional grain-surface abundance of CH3O (shown for the core-center position) is much higher than that of CH3. The production rate of CH3O is also much greater than that of CH3, which is partly why its surface abundance is higher. Similarly, in the 3-BEF model, CH3* and CH3O* are competing with each other to react with CO on the grain surface to form either CH3CHO or CH3OCHO; CO is abundant, and the appearance rates of CH3* and CH3O* directly determine the formation rates of CH3CHO and CH3OCHO.
Note that only a fraction of newly-formed methyl radicals can take part in the formation of the COMs through the 3-BEF mechanism, because only the excited methyl radicals formed via hydrogenation of CH2 have this mechanism available; abstraction of H from CH4 by other H atoms is slightly endothermic, so it should not produce CH3*. Thus, the radical CH3 acts as a bottleneck to the formation of the COMs in the non-diffusive models. Also, the gradual depletion of C and related hydrocarbons from the gas phase, while CO remains abundant, means that production of CH3 cannot keep up with the production of CO-related radicals. The reaction of CH3 with H or H2 from the gas phase to re-form CH4 also keeps the average grain-surface CH3 abundance low. The production of HCO and CH3O radicals continues to be effective as methanol builds up; while the net direction of the CO chemistry is to convert it gradually to methanol, there are backward reactions at every step, including H-abstraction from CH3OH, that allow the intermediate radicals to maintain some level of surface coverage and a sustained rate of production/appearance.
Given that the formation of CH 3 OCH 3 is related to CH 3 in both the 3-B or 3-BEF models, the lower column density of CH 3 OCH 3 in those models can be explained. CO and H 2 CO are competing to form either CH 3 CHO or CH 3 OCH 3 by reacting with the excited methyl radicals on the grain surface, but CO is much more abundant than H 2 CO (Fig. 8). The small amount of excited methyl radicals on the surface are thus preferentially consumed to form CH 3 CHO.
The E-R mechanism does not make a substantial difference to the gas-phase abundances versus the control (see Fig. 4 and Table 5). This is because the E-R process requires high surface coverage of the reactive species on the grains to be effective. This result is not exactly consistent with the results from Ruaud et al. (2015). They find that the combination of E-R and their complex-induced reaction mechanisms is efficient enough to reproduce the observed COM abundances at temperatures as low as 10 K. Beyond the uncertainty in the level of contribution of either mechanism, the different model parameters of both studies should be noted: Ruaud et al. (2015) mainly focus on the accretion of carbon atoms and assume a much higher binding energy (3600-8400 K) than ours (800 K). This may cause higher concentration of reactive species on the grain surface, allowing the E-R process to be efficient.
The PD-induced reaction process is ineffective in increasing the population of COMs in the gas phase at the core center. However, the PD-induced model significantly increases (by more than two orders of magnitude) the amount of COMs in the ice mantles throughout the core's evolution (see Fig. 5). Other studies indeed suggest that the bulk ice is where the majority of physico-chemical changes caused by radiation chemistry are likely to occur (Johnson 1990; Spinks & Woods 1990; Shingledecker et al. 2017). The enhanced population of the COMs in the ice mantle does not actively affect the population in the gas phase, because the COM products are preserved in the mantle rather than diffusing to the grain surface, which is directly coupled to the gas phase. Even though this process does not make a prominent difference in the gas-phase abundance for the prestellar core, it would significantly affect the chemistry during the warm-up period of a protostellar core, in which accumulated mantle material is ejected from the grains.
Optimization of the 3-BEF model
As discussed in §3.2, the formation of methyl formate (CH3OCHO) through the 3-BEF mechanism is so efficient that CH3OCHO is significantly overproduced, while this is not the case for acetaldehyde (CH3CHO). The 3-BEF mechanism as described in §2.5 is assumed to proceed with 100% efficiency; however, the appropriate value in individual cases could be lower if the exothermic energy available from the initiating reaction (E_reac) is similar in magnitude to the activation energy barrier (E_A) of the subsequent reaction. Assuming the energy is initially released into the vibrational modes of the excited species, it may not be available in the required mode for reaction with an adjacent species to occur before that energy is lost to the surface, or before the excited species diffuses away entirely from its reaction partner. If the excited product has s internal vibrational modes, the 3-BEF process would be expected to have substantially sub-optimal efficiency in the case where E_A > E_reac/s, while it would not occur at all in the case where E_A > E_reac. The former condition would appear to hold for the reactions shown in Eq. (20), in which methyl formate is produced; here, s(CH3O) = 9, E_reac ≃ 10,200 K, and E_A = 3,967 K for the CH3O + CO → CH3OCO reaction (Huynh & Violi 2008).
Rice-Ramsperger-Kassel (RRK) theory may be introduced to obtain a statistical estimate of the efficiency. Using the same formulation that is employed to determine the probability of chemical desorption in the model (Garrod et al. 2007), the probability of a successful 3-BEF process would be:

\[ P_{\mathrm{3\text{-}BEF}} = \left( 1 - \frac{E_A}{E_{\mathrm{reac}}} \right)^{s-1} \tag{23} \]

where s now includes an additional vibrational mode representing the reaction coordinate (i.e. s(CH 3 O)=10). For the reactions forming CH 3 OCHO, the values provided above give a probability of 1.2%, while for the reactions producing CH 3 CHO and CH 3 OCH 3 the probability would be 73%. This shows that the P(CH 3 OCHO) of 1 originally introduced in our 3-B+3-BEF model was too high, explaining the overproduction of that species. For the present models, in which only three 3-BEF processes are explicitly considered, we empirically test a selection of efficiencies for the reaction to form CH 3 OCHO, ranging incrementally from 100% to 0.1% in factors of 10; the other two 3-BEF reactions are assumed to operate at maximum efficiency. It is found that a probability of 0.1% best reproduces the molecular column densities from the observations. The empirically-determined optimal efficiency is clearly lower than the simple RRK treatment above would suggest. However, the latter does not include competition between reaction and diffusion of the excited species, which could account for at least a factor of a few, representing several diffusion directions. Likewise, additional translational degrees of freedom of the excited species could be considered in Eq. (23), rather than just one reaction coordinate. We note also that these modifications would reduce the efficiency of the other two 3-BEF reactions considered here, perhaps bringing them closer to around 10%. The molecular dynamics study by Fredon et al. (2017) of reaction-induced nonthermal diffusion would indeed suggest that translational motion would be a necessary factor to consider in a detailed treatment of the 3-BEF process (although it should be noted that those authors assumed all of the energy to be immediately released into translational modes, rather than distributed also into internal vibration and/or rotation).

Table 8. Column densities of COMs in the 3-BEF Best model (cm −2 )

Position                      CH 3 CHO                    CH 3 OCH 3                  CH 3 OCHO
Observation (Core Center) a   1.2 × 10 12                 1.5 × 10 12 (2.0 × 10 11 )  4.4 × 10 12 (4.0 × 10 12 )
Core Center                   1.7 × 10 12 (5.7 × 10 9 )   3.4 × 10 10 (3.3 × 10 8 )   4.0 × 10 12 (2.7 × 10 10 )
2000 AU                       1.4 × 10 12 (5.4 × 10 9 )   4.8 × 10 10 (1.1 × 10 8 )   4.8 × 10 12 (1.1 × 10 10 )
9000 AU                       8.5 × 10 11 (3.3 × 10 8 )   5.7 × 10 10 (3.0 × 10 7 )   4.0 × 10 12 (3.8 × 10 10 )

Note: Values in parentheses indicate observational or rotational-diagram line-fitting (model) errors.
a Jiménez-Serra et al. (2016)

Figure 9 shows rotational diagrams obtained from LTE radiative transfer calculations based on the 3-BEF Best model molecular profiles, with the beam directed toward the core center. Table 8 compares the molecular column densities toward the core center from this model with the observational literature values. The errors (in parentheses) for modeled column densities are derived from the standard deviation of the linear regression fitting in the rotation diagrams. The three-body mechanisms introduced here are efficient enough to reproduce the amount of CH 3 OCHO and CH 3 CHO in the prestellar core when an appropriate efficiency for the 3-BEF mechanism is adopted. Figure 10 compares the chemical distribution of the 3-BEF Best results (solid lines) with those of the control (dotted lines) and the normal 3-BEF (dashed lines) models. While the amount of CH 3 OCHO is significantly reduced in the 3-BEF Best model compared to the normal 3-BEF in both the gas and solid phases, the population of CH 3 OCH 3 increases (by roughly an order of magnitude in the gas); the weakening of the 3-BEF mechanism for methyl formate production leaves more of the CH 3 O radical available to participate in other reactions, including the regular 3-B mechanism (CH 3 + CH 3 O) that produces dimethyl ether. A commensurate increase is seen in the column density values.
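As a quick numerical check of Eq. (23), the methyl formate case can be evaluated directly from the values quoted above; the short Python sketch below is purely illustrative and reproduces the ~1.2% figure. Only the CH 3 O + CO parameters are given in the text, so the other two 3-BEF reactions are not evaluated here.

```python
# Minimal numerical check of the RRK-style efficiency estimate of Eq. (23):
# P = [1 - E_A / E_reac]^(s - 1), using only values quoted in the text.
def p_3bef(e_act, e_reac, s_modes):
    """Probability that the follow-on reaction occurs before the formation
    energy of the excited radical is lost to the surface."""
    return (1.0 - e_act / e_reac) ** (s_modes - 1)

# CH3O* + CO -> CH3OCO (methyl formate route):
# s(CH3O) = 9 vibrational modes + 1 reaction coordinate = 10,
# E_reac ~ 10,200 K, E_A = 3,967 K (Huynh & Violi 2008).
print(p_3bef(3967.0, 10200.0, 10))   # ~0.012, i.e. the ~1.2 % quoted above
```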
The adjustment to the efficiency of MF production through the 3-BEF process also reduces the solid-phase abundance of that molecule with respect to water back to more plausible values that are in line with the maximum typical values observed in hotter sources (i.e. around 10 −8 with respect to H 2 ). The fraction is higher beyond around 5,000 AU, but the total ice abundance at these positions would also be somewhat lower.
CO hydrogenation and CH 3 OH abundances
The CH 3 OH map of Bizzocchi et al. (2014) shows a highly asymmetric non-uniform ring surrounding the dust peak of L1544. This morphology is consistent with central depletion and preferential release of methanol in the region where CO starts to freeze out significantly. Jiménez-Serra et al. (2016) shows that COMs are actively formed and already present in this methanol peak.
The upper panel of Fig. 11 shows the radial distribution of CH 3 OH fractional abundance for each of the chemical models; abundances are very similar for all models at all positions. Methanol in the gas is mainly formed as the result of the hydrogenation of grain-surface CO all the way to CH 3 OH, followed by chemical desorption. The radial distribution of the fractional abundance of gas-phase methanol has its peak well beyond where the observations would suggest. However, it should be noted that the gas density in these more distant regions drops off significantly, according to the physical profile. The location of the peak in absolute abundance would provide a better comparison directly with observations, although the best method is to consider the column density structure of methanol explicitly.

Figure 11. (a) Radial distribution of fractional abundance of CH3OH for each model. The abundance value from the observations toward the core center is denoted by a black dotted line. Gas density as a function of radius is also indicated. (b) The normalized, convolved intensity of a representative emission line for methanol and for each of the three COMs of interest, shown as a function of the offset of the beam from the on-source position.
The lower panel of Fig. 11 shows the normalized convolved intensity of a representative emission line of methanol, as a function of the beam offset from the center, using the radiative transfer model already described with the 3-BEF Best model data. Since the lines are optically thin and are well represented by an LTE treatment (see Fig. 12 and Table 9), the line intensity profile scales well with the column density profile along each line of sight. The modeled methanol emission shows a peak near 4000 AU, as reported in the observations, although this feature is not very obvious, as the slope is quite gentle. The same treatment is shown for the other three COMs of interest. Methyl formate shows a fairly similar distribution of emission to that of methanol, while the other two COMs show peaks at 2000 AU, as seen in the fractional abundances. The representative molecular transition used for this analysis is denoted with an asterisk in Tables 4 and 9.
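To make the connection between the radial abundance profiles and the quantities plotted here concrete, a minimal sketch of the line-of-sight integration underlying the column densities is given below; the abundance and density profiles in it are hypothetical placeholders rather than the actual model output, and no beam convolution is applied.

```python
import numpy as np

AU_CM = 1.496e13   # cm per AU

# Column density along a chord at impact parameter b through a spherical core:
# N(b) = 2 * integral of x(r) * n(r) dz, with r = sqrt(b^2 + z^2).
def column_density(b_au, x_of_r, n_of_r, r_out_au=1.0e4):
    z = np.linspace(0.0, np.sqrt(max(r_out_au**2 - b_au**2, 1.0)), 2000)   # [AU]
    r = np.sqrt(b_au**2 + z**2)
    return 2.0 * np.trapz(x_of_r(r) * n_of_r(r), z) * AU_CM                # [cm^-2]

x = lambda r: 1e-10 * (1.0 + (r / 4000.0) ** 2)   # hypothetical fractional abundance rising outward
n = lambda r: 1e6 / (1.0 + (r / 2500.0) ** 2)     # hypothetical H2 density falling outward [cm^-3]

for b in (0.0, 2000.0, 4000.0, 8000.0):           # beam offsets / impact parameters [AU]
    print(b, column_density(b, x, n))
```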
The full RD analysis is performed for methanol as for the other COMs. The seven E-transition lines of CH 3 OH that were detected by Taquet et al. (2017) are chosen for this (Table 9); the methanol line data are taken from the JPL catalogue, based on the data set of Xu et al. (2008), and the representative molecular transition used for the normalized convolved intensity analysis in § 3.4 is denoted with an asterisk. A single fit to all lines provides a column density of 8.6×10 13 cm −2 at the core center. This value is roughly consistent with the observation (2.6×10 13 cm −2 ; Bizzocchi et al. 2014). The precise value, as with those of the other COMs, will be dependent on the fidelity of the chemical desorption treatment used here.
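For readers unfamiliar with the procedure, the rotational-diagram fit amounts to a linear regression of ln(N_u/g_u) against E_u; the sketch below illustrates this with hypothetical stand-in values (not the measured line data of Taquet et al. 2017), and the intercept must be multiplied by the catalogue partition function to obtain the total column density.

```python
import numpy as np

# Illustrative rotational-diagram (RD) fit; the (E_u, ln(N_u/g_u)) points
# below are hypothetical placeholders, not the detected methanol lines.
E_u     = np.array([12.5, 20.1, 28.0, 36.3, 45.5, 57.1, 69.8])      # upper-level energies [K]
lnNu_gu = np.array([25.1, 24.3, 23.5, 22.6, 21.7, 20.5, 19.2])      # ln(column per statistical weight)

# For optically thin LTE emission: ln(N_u/g_u) = ln(N_tot/Q(T_rot)) - E_u / T_rot
slope, intercept = np.polyfit(E_u, lnNu_gu, 1)
T_rot = -1.0 / slope
print(f"T_rot ~ {T_rot:.1f} K; N_tot = exp({intercept:.2f}) * Q(T_rot)")
# Multiplying exp(intercept) by the catalogue partition function Q(T_rot)
# yields the total column density (8.6e13 cm^-2 at the core center in the text).
```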
DISCUSSION
Of the several new non-diffusive processes tested here, the Eley-Rideal mechanism appears to have the least effect, due largely to the low surface coverage of reactive species. Those reactive species that might benefit from the spontaneous arrival of a reaction partner from the gas phase always maintain low fractional surface coverage due to their reactivity with highly diffusive surface species, e.g. atomic H. Species that do build up a large surface coverage, like CO, tend to have large barriers to reaction, so that incoming species are more likely to diffuse away than to react spontaneously. The importance of the E-R process to typical surface reactions is therefore unlikely to be substantial under any physical conditions, as long as atomic H remains mobile.
Photodissociation-induced reactions, in which the PD process acts spontaneously to bring a reactive radical into contact with some other species, have no significant influence on the gas-phase abundances of complex organics, but have a strong effect on the COM content of the ice mantles. The basic three-body process provides substantial improvement in the gas-phase abundances of COMs, notably methyl formate and dimethyl ether, by allowing the products of diffusive reactions (in some fraction of cases) to find a reaction partner themselves without requiring further diffusion. However, the excited-formation mechanism, which allows the reaction of excited, newly-formed radicals with stable species (in spite of activation energy barriers), has the strongest effect, and is again most important for methyl formate and acetaldehyde. An adjustment to the efficiency of these processes, based on the available energy from the initiating reaction, appears to provide the best match with observational column densities of those molecules. It is important that the process that seems to reproduce most effectively the gas-phase abundances of the COMs (3-BEF) is one that occurs on the grain/ice surface itself, rather than deep within the mantle, allowing chemical desorption to return some fraction of the product to the gas phase.
The details of the various mechanisms and their implications are discussed in more detail below.
H-abstraction/recombination as an amplifier of chemical desorption
The models show substantial success in reproducing observed gas-phase column densities, through molecular production mechanisms operating on the surfaces of the icy dust grains. Consideration should therefore be given to the efficiency of the desorption mechanism that releases surface molecules into the gas phase. Although photo-desorption is included in all of the models presented here (with the explicit assumption of fragmentation of methanol as the result of this process), the most important ejection mechanism for grain-surface COMs is chemical desorption. In these models, this occurs with a maximum efficiency per reaction of 1%; this efficiency is further lowered according to the RRK-based treatment described by Garrod et al. (2007).
Thus the formation of, for example, acetaldehyde through Eqs. 18, culminating in the addition of an H-atom to the CH 3 CO radical, may sometimes produce gas-phase CH 3 CHO. However, the immediate desorption following its formation is not the only factor in ejecting those molecules into the gas. The chemical desorption effect is considerably amplified by the abstraction of H atoms from existing surface COMs, followed rapidly by recombination of the resulting radical with another H atom, inducing the ejection into the gas of some fraction of the product molecules. In the case of methanol, for instance, once it is formed on the grain surface through the repetitive addition of H to CO, the abstraction of H from CH 3 OH by another H-atom allows it to be transformed back to its precursor (CH 3 O/CH 2 OH), providing additional chances for chemical desorption; indeed, this process of addition and abstraction has been suggested as a mechanism by which the depletion of CO from the gas phase could be slowed and its grain-surface conversion to methanol delayed. Similar H-abstraction/addition processes are present for each of the larger COMs of interest in our models.
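A back-of-the-envelope illustration of this amplification, assuming a fixed per-reaction desorption probability and ignoring competing loss channels, is given below; the cycle counts are purely illustrative.

```python
# If each H-abstraction/recombination cycle gives the product another chance of
# chemical desorption with probability p per reaction (<= ~1% here), then after
# n cycles the cumulative fraction returned to the gas is 1 - (1 - p)^n.
p = 0.01          # per-reaction desorption probability (upper limit used in the models)
for n in (1, 10, 100, 500):   # number of abstraction/recombination cycles (illustrative)
    print(n, 1.0 - (1.0 - p) ** n)
# 1 -> 1%, 10 -> ~10%, 100 -> ~63%, 500 -> ~99%: repeated cycling can release a
# far larger fraction of the surface COMs than the 1% per-event value suggests.
```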
To understand how significantly this process contributes to the overall chemical desorption scheme, four additional test models were run for conditions appropriate to the core center, each turning off the H-abstraction reaction for one of the molecules (the three larger COMs plus methanol). The local fractional abundances of COMs from each test model are compared with the control in Table 10. When the H-abstraction reaction of a specific COM is turned off, the gas-phase abundance of that molecule decreases by ∼1 order of magnitude. Furthermore, when H abstraction from methanol is switched off, it reduces the fractional abundance of other COMs such as CH 3 OCHO and CH 3 OCH 3 , whose surface production is closely related to the CH 3 O radical. The abstraction of H from methanol by other H atoms in fact strongly favors the production of the CH 2 OH radical; the network employed here uses surface reaction rates for these processes calculated by F. Goumans and S. Andersson (see Garrod 2013), based on harmonic quantum transition state theory. However, as per the network of Garrod (2013), the recombination of CH 2 OH with H is assumed to produce either methanol or H 2 CO+H 2 with a branching ratio of 1:1. The production of formaldehyde in this way can then lead to reaction with H atoms again; this forward process strongly favors production of the CH 3 O radical, thus influencing the production of DME and MF.
COM distribution and COM peaks
As seen in Fig. 10 for the 3-BEF Best model, the COMs in the gas phase have their lowest fractional abundances at the core center, gradually increasing toward the outer shell of the prestellar core. This general feature is observed regardless of model type (Fig. 4). Interestingly, a local fractional abundance peak for COMs is found at around 2000 AU, especially for CH 3 CHO and CH 3 OCH 3 . This result suggests at least qualitative agreement with the observational result of Jiménez-Serra et al. (2016); those authors performed deep observations of the COMs toward the low-density outer shell (4000 AU) as well as the core center of L1544. While they observed higher abundances for all three COMs at the outer position, the level of enhancement for CH 3 OCHO was ambiguous, due to its large observational error.
The behavior seen in the models indicates that there are two possible peak features (or two plausible causes for observed peaks) that could become apparent in column densities or line intensities (e.g. lower panel of Fig. 11), as opposed to fractional abundances. The first of these relates simply to the increased fractional abundances of COMs at large radii, combined with the drop-off in overall gas density at the greatest extents, producing a peak in the absolute molecular abundances that manifests in the resulting column density or line intensity profiles. This behavior is especially apparent for methyl formate (which does not show the bump-like feature at around 2000 AU). This peak seems to be in reasonably good agreement with the observational peak position; Fig. 11 indicates peak line intensities around 4000-6000 AU. A major cause of the lack of COMs in the gas phase at small radii (in terms of fractional abundance, e.g. Fig. 10) is that most of the gas-phase material at those locations has already accreted onto the grains and become locked into the ice mantles by the end-time of the models; little CO exists in the gas phase (on the order of 10 −7 with respect to total H), thus grain-surface chemistry involving CO-related products is limited. At the greatest radii, freeze-out is incomplete and CO chemistry is still active with the accretion of new CO. Somewhat greater gas-phase abundances of atomic H at large radii, caused by the density slope, also encourage H-abstraction from COMs on the surfaces, followed by recombination and chemical desorption.
The local peak at 2000 AU in the fractional abundances of acetaldehyde and dimethyl ether occurs particularly in models that use the 3-B and 3-BEF processes, and manifests also in the resultant column density profiles of those molecules (lower panel of Fig. 11). The gas density at the 2000 AU position is at least three times higher than at the outer-peak region (4000-6000 AU), so the inner peak tends to dominate over the outer in its contribution to column densities, for the models/molecules in which that inner peak occurs.
What is the origin of the inner peak at 2000 AU? It is related to the freeze-out of gas-phase material through the core. It traces a position at which the net rate of accretion of gas-phase material onto the grains is close to zero, caused by the high degree of depletion that has already occurred for most major gas-phase species other than hydrogen. For example, the gas-phase abundance of CO reaches a local minimum at this position. At radii internal and external to the 2000 AU peak region, there is slightly more gas-phase material remaining to be accreted onto dust grains at the end of the model runs, thus the ice mantles still continue (slowly) to grow. This local peak in freeze-out is due to the combined density and temperature profiles used in the models. The adsorption rates of neutrals scale with gas density, which is greatest at the core center, but they also scale with the square root of the gas temperature, which is greater at larger radii. The 2000 AU position is the point where the two profiles combine to give the largest total adsorption rate. The position of maximum freeze-out is thus strongly dependent on the density and temperature profiles. Furthermore, given a slightly longer model run time, the freeze-out peak would likely widen, as other positions reached a state of near-zero net accretion onto grains.
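This interplay can be illustrated with a toy calculation; the plateau-plus-envelope density profile and the linearly rising temperature used below are idealized stand-ins, not the observationally-derived L1544 profiles.

```python
import numpy as np

# Purely illustrative profiles: a flat inner density plateau with an r^-2
# envelope, and a gas temperature rising outward. The adsorption rate of a
# neutral species scales as n_gas * sqrt(T_gas).
r = np.linspace(100.0, 10000.0, 500)                 # radius [AU]
r0, n0 = 2000.0, 1.0                                 # hypothetical plateau edge [AU], normalized density
n_gas = np.where(r < r0, n0, n0 * (r0 / r) ** 2)     # hypothetical density profile
T_gas = 6.5 + 5.5 * (r / r.max())                    # hypothetical temperature profile [K]

adsorption = n_gas * np.sqrt(T_gas)                  # relative total adsorption rate
print("maximum adsorption rate near", r[np.argmax(adsorption)], "AU")
# With these idealized profiles the product peaks at the edge of the density
# plateau, illustrating how the density and temperature profiles combine to
# select a preferred freeze-out radius, as described above.
```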
The stronger production of COMs (acetaldehyde and dimethyl ether) around this 2000 AU peak position is a consequence of the changing freeze-out conditions described above. Once the net rate of freeze-out reaches zero, it indeed undergoes a reversal in which there is a small net rate of loss of material from the grains. This loss is caused by the desorption of molecular hydrogen from the grain surface, which is slowly replenished by the gradual outward diffusion of H 2 molecules embedded deep in the ice mantles. This H 2 -loss process occurs throughout all the model runs, but is of little importance until the adsorption of non-volatile species diminishes, when gas-phase species become depleted. Once this net loss of material from the grains starts to occur, some molecules embedded in the upper layer of the ice mantles are "uncovered", becoming available for surface chemical processing. Most importantly, this includes CH 4 , from which an H-atom may be chemically abstracted through several mechanisms, increasing both the production rate of CH 3 and its surface abundance. This drives up the three-body production of acetaldehyde and dimethyl ether (Eqs. 12-15), which are chemically desorbed into the gas phase, either directly or as the result of H-abstraction and recombination on the surface. The behavior of the inner peak in certain COMs should therefore be treated with a degree of skepticism. Not only does its position depend on the interplay of the observationally-determined physical profiles, but its strength must be time-dependent. Furthermore, the ability of the chemical model to treat accurately the return ("uncovering") of mantle material to the ice surface is limited by the use of only a single mantle phase, rather than the consideration of distinct layers within the ice (cf. Taquet et al. 2014). If most of the methane residing in the mantles is present mainly in the deepest layers, the inner-peak effect described above would be overestimated here. It is also the case that, even with this mechanism in play, the gas-phase abundance of dimethyl ether is insufficient to reproduce observed column densities in L1544 (although see § 4.6). If such a mechanism is active, considering the uncertainty in its precise position (based on models), it may not be easily distinguished from the outer peak at 4000+ AU.
The peak in methanol column density occurs at the outer peak position, caused again by a peak in the absolute abundance of that molecule. It is noteworthy that in the present models, the local fractional abundance of methanol does not need to exceed a value of a few times 10 −9 to be able to reproduce the observed column density. Again, the strength of the methanol peak will be dependent on the efficiency of chemical desorption for that molecule, which is not well constrained through purely experimental means.
The effect of diffusion barriers
In many astrochemical models, including MAGICKAL, chemistry on the grains is governed by the diffusion of surface species via thermal hopping (any non-diffusive processes notwithstanding). The energy required for a particular species to hop from one surface binding site to another is given by the diffusion barrier E dif ; this value is parameterized in the chemical model as some fraction of the desorption energy, i.e. E dif /E des . Even though this is a key parameter describing the mobility of species on grain surfaces, the exact value has not historically been well constrained, broadly ranging from 0.3 to 0.8. In the present models, this parameter was set to E dif /E des =0.6 for all atomic species, which leans toward a high value, based on recent experimental estimates suggesting 0.55 for atoms. In our past models (e.g. Garrod 2013), atoms and molecules were assigned the same fractional barrier value of 0.35, based on the optimum value for CO. All molecular species in the present models retain the 0.35 value.
At the very low surface temperatures that are found in prestellar cores, the diffusion of atoms in particular is of great importance. For this reason, test models were also run using the previous fractional diffusion barrier of 0.35 for atoms. Figure 13 compares COM abundances for the two cases at the end-time of the 3-BEF Best model run, using the L1544 physical profiles as before. Using the higher E dif /E des ratio, the COMs typically show much higher abundances at positions near the core center. This result is somewhat contradictory to the expectations of Vasyunin et al. (2017), who suggest that the E dif /E des ratio would not play a crucial role in cold environments, as diffusion of H and H 2 via tunneling is dominant. In our model, while tunneling through chemical barriers is included, surface diffusion via tunneling is not, as the barriers are assumed to be too broad for tunneling to be effective. In this case, the higher diffusion barrier for the atomic species means that the time taken for H atoms to reach and react with surface radicals is increased. This consequently raises the lifetimes of those radicals on the surface (see Fig. 14), which in turn renders the non-thermal mechanisms explored here more effective, increasing the production of COMs.
It should be noted that the higher E dif /E des ratio does not always result in a larger amount of COMs (or radicals) on the grain surface. For example, the discrepancy in the COM abundances between the two models decreases at large radii, and methyl formate and acetaldehyde here are even a little more abundant in the case where atomic diffusion barriers are lower, due to slightly more effective H-abstraction from methane to produce CH 3 .
The variation of the E dif /E des ratios thus has an important effect on the chemical model results; the higher value for atomic species, and for H in particular, reproduces COM abundances more effectively, through the increase in radical lifetimes. Senevirathne et al. (2017) calculated the distribution of binding energies and diffusion barriers for H on an amorphous water surface, suggesting representative values for each; although their H binding energy (661 K) is higher than the value used in our models (450 K), their diffusion barrier (243 K) is close to the value we use here (270 K) for the E dif /E des =0.6 models. We note also that Senevirathne et al. (2017), based on their calculations of diffusion rates using quantum transition-state theory, aver that tunneling (as opposed to the thermal mechanism) is likely of limited importance under most temperature conditions in dark clouds; our use of purely thermal diffusion rates in MAGICKAL is thus broadly consistent with that work.
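For a rough sense of scale, the thermal hopping rates implied by the two fractional barriers can be compared directly; the characteristic vibrational frequency (10 12 s −1 ) and the 10 K dust temperature assumed below are illustrative values, not those adopted in the model.

```python
import numpy as np

# Rough comparison of thermal hopping rates for atomic H under the two
# fractional diffusion barriers, using E_des(H) = 450 K as in these models;
# the attempt frequency and grain temperature below are assumed for illustration.
nu0, E_des, T_dust = 1e12, 450.0, 10.0
for ratio in (0.35, 0.6):
    E_dif = ratio * E_des                        # 157.5 K vs 270 K
    k_hop = nu0 * np.exp(-E_dif / T_dust)        # thermal hopping rate [s^-1]
    print(ratio, E_dif, k_hop)
# The higher barrier slows H hopping by a factor exp(-(270 - 157.5)/10) ~ 1e-5,
# lengthening radical lifetimes on the surface and so boosting the non-diffusive routes.
```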
In other models, in which non-diffusive chemical processes were not included, the variation of the H diffusion barrier might have less of an effect, as most of the active chemistry in that case would involve only atomic H. The lifetime of the radicals would therefore be of less relevance, since H would still be the dominant reaction partner. In the present models, the mobility of atomic hydrogen is a major determinant of the effectiveness of non-thermal processes in producing complex species.
Gas-phase processes
Due perhaps to the generally low abundances of DME that our chemical models provide, they do not appear to reproduce the correlation between CH 3 OCHO and CH 3 OCH 3 sometimes observed in various evolutionary stages of star-forming regions (Jørgensen et al. 2011;Brouillet et al. 2013;Jaber et al. 2014). As a means by which such a relationship might arise, Brouillet et al. (2013) suggested protonated methanol CH 3 OH 2 + in the gas-phase as the common precursor to form CH 3 OCHO and CH 3 OCH 3 via reactions with HCOOH and CH 3 OH. As a test, the proposed reactions were incorporated into our chemical network; however, they were too slow to be effective in producing CH 3 OCHO and CH 3 OCH 3 in our model, due to the low abundance of protonated methanol in the gas.
Recently, potentially influential gas-phase reactions were proposed by Shannon et al. (2013, 2014), who found that reactions of either OH or C( 3 P) with methanol are efficient at low temperatures, due to quantum tunneling. The gas-phase methanol reactions could act not only as an efficient loss process for gas-phase methanol, but could also produce more radicals that would be available as reactants to form other COMs when they accrete onto grain surfaces, or directly in the gas phase if such processes are efficient. Vasyunin & Herbst (2013) suggested a gas-phase radiative association reaction between the radicals that are produced by the above mechanisms, to form DME. To understand how significantly these reactions would affect the overall formation of COMs in our chemical model, we ran a test model that included all three (with rate coefficients on the order of 10 −10 cm 3 s −1 ). However, gas-phase methanol was still predominantly destroyed by ion-molecule reactions at the core center. The contribution of the above neutral-neutral reactions to the loss of gas-phase methanol was minor (∼ 2%), hardly changing the abundances of methanol and the three COMs, while the radiative association reaction also showed minimal influence. Balucani et al. (2015) proposed a gas-phase mechanism that would form methyl formate from dimethyl ether through the radical CH 3 OCH 2 ; the dimethyl ether itself would form through the efficient radiative association of the radicals CH 3 and CH 3 O. Although our network does not include fluorine, the incorporation of the other reactions into our model did not make a meaningful difference to the results, because they involve a one-way process in which CH 3 OCH 3 is converted into CH 3 OCHO. In our chemical model, neither the radiative association of the radicals CH 3 and CH 3 O nor any other processes were efficient enough to form abundant CH 3 OCH 3 . As such, several key reactions concerning the gas-phase chemistry of COMs do not affect our chemical model significantly.
Thus, at least under the conditions tested in our physical model, we found no efficient gas-phase mechanisms that could produce either DME or MF.
Other surface processes
The 3-body excited-formation mechanism included here is especially efficient for the initiating reaction H + CH 2 → CH 3 , which is highly exothermic, but which also results in a small product, CH 3 , that has only a limited number of vibrational modes in which the resulting energy may be stored. The models suggest that when this is coupled with highly abundant CO on the grain surface, the subsequent reaction between the two proceeds at a sufficient pace to produce enough CH 3 CO (and thence CH 3 CHO) to be able to explain the gas-phase abundance of the latter molecule (given an adequate desorption mechanism). The production of CH 3 O via the hydrogenation of formaldehyde is also exothermic, but not sufficiently so to allow the subsequent reaction with abundant CO to proceed at high efficiency. Nevertheless, this low-efficiency mechanism is capable of producing enough CH 3 OCO (and thence CH 3 OCHO) to account for the presence of methyl formate in the gas phase.
As noted in Section 3.3, a more detailed treatment of the 3-BEF mechanism should include not only the energy partition between bonds, but also translational degrees of freedom of the excited species. This would impact the RRK calculation, but could also provide an alternative outcome to the process. The RRK treatment as formulated in Section 3.3 assumes that the efficiency of the process is determined solely by the competition between energy going into the "reaction mode" and energy being lost to the surface. However, if diffusion spontaneously occurred, moving the two reactants apart, then the process would automatically end (unless another reactant were present in this new site), regardless of the energy status of the excited species. We would expect this effect to reduce efficiency by a factor of say 4 (on the basis of there being four available diffusion directions), even for the otherwise efficient 3-BEF mechanism that produces CH 3 CO/CH 3 CHO. The production of CH 3 CHO may therefore be somewhat less efficient than the simple 100% approximation used in the treatment presented here.
It is also of interest to consider specifically the possible effects of reaction-induced diffusion, such as that studied by Fredon et al. (2017) for stable molecules including methane. If reactive species like CH 3 were able to undergo some non-thermal diffusion as the result of excitation caused by their formation, they could react with other radicals that they could not otherwise reach under low-temperature (i.e. non-diffusive) conditions. In fact, as we allude to in Section 2.4, the standard (non-excited) 3-B mechanism that we already implement in the models will automatically include such processes to a first approximation. The treatment that we construct for 3-B processes does not explicitly require the reactants to be immediately contiguous, but rather to become so immediately following the initiating reaction. If one were to consider a newly-formed radical species, A, taking some finite and approximately straight-line trajectory across an ice surface, the probability of it encountering some reaction partner, B, along its path would still be given, to first order, by N (B)/N S , as already included in Eq. (6). The simple 3-B mechanism is therefore broad enough to cover this specific case also.
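A minimal sketch of this first-order estimate is given below; the site count and coverages are hypothetical numbers chosen only to illustrate the scaling.

```python
# Sketch of the basic three-body (3-B) rate construction implied by Eq. (6):
# the non-diffusive follow-on reaction A + B proceeds at the rate at which A is
# produced by its initiating reaction, scaled by the chance that a B occupies
# the formation site (or lies along a short post-reaction trajectory), ~ N(B)/N_S.
def rate_3b(rate_init_A, n_B, n_sites):
    """Rate of the follow-on A + B reaction, per grain per second."""
    return rate_init_A * (n_B / n_sites)

# Hypothetical illustration: CH3 produced at 1e-8 s^-1 per grain, with CO
# occupying a third of ~1e6 surface sites -> the CH3 + CO channel proceeds at ~3e-9 s^-1.
print(rate_3b(1e-8, 3.3e5, 1e6))
```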
While the 3-BEF mechanism for the production of the dimethyl ether precursor, CH 3 OCH 2 , should be highly efficient based on the statistical calculations in § 3.3, the abundance of H 2 CO on the grain surfaces appears to be too low to allow this mechanism to account for gas-phase DME. It should also be noted that in this work it was assumed that DME is the only product of this reaction. It is possible, and perhaps favorable, for ethanol (C 2 H 5 OH) also ultimately to be formed, if the methyl radical attaches to the carbon atom in formaldehyde, producing a radical C 2 H 5 O. This would naturally limit the yield of DME through the suggested excited-formation mechanism.
Are there alternative surface processes that might produce sufficient dimethyl ether if the reactants could be brought together through some non-diffusive process? One possibility might be the reactions of the carbene CH 2 with methanol (CH 3 OH). Methylene, CH 2 , is a diradical in its ground (triplet) state. Reactions of triplet CH 2 with methanol could involve the abstraction of hydrogen from CH 3 OH, followed by immediate radical-radical addition of the resultant CH 3 to the remaining CH 3 O or CH 2 OH. The review of Tsang (1987) suggests gas-phase rate coefficients for the abstraction processes (without the subsequent recombination); the activation barrier for the CH 3 O branch is marginally lower than that for CH 2 OH, indicating that CH 3 O (and thence DME) might be the preferred product. On a dust-grain/ice surface, the production of CH 2 , either by H-addition to CH or by the barrier-mediated reaction of H 2 with atomic C, would likely be exothermic enough to allow the subsequent abstraction barriers to be overcome. However, abstraction might be fast in any case, even without (vibrationally) excited CH 2 , due to hydrogen tunneling through the activation-energy barrier.
Another possibility is that the higher-energy singlet CH 2 could undergo a direct, barrierless insertion into the methanol molecule, producing either dimethyl ether or ethanol. Bergantini et al. (2018) investigated the action of singlet CH 2 , produced through the irradiation of a mixed CH 4 /CH 3 OH ice, to produce DME and ethanol in this way; they found essentially equal production of the two branches. If, instead of the dissociation of methane, the hydrogenation of carbon on the grain surfaces were the means by which singlet CH 2 were produced, then this mechanism could occur effectively as a non-diffusive (i.e. three-body) process, although the short lifetime of the singlet methylene might make a diffusive meeting of the reactants unlikely. Although the dissociation of methane, as per those experiments, is an entirely plausible starting point for the production of COMs within ice mantles, it is an unlikely explanation for the gas-phase detection of COMs.
A further consideration, relating to the production of CH 3 CHO via the 3-BEF mechanism, is the possible production of ketene, CH 2 CO, through the reaction CH 2 + CO → CH 2 CO. This process could also occur through the 3-BEF mechanism, following production of methylene through exothermic surface reactions. The more complex nature of the coding of the 3-BEF mechanism required us to include only the three 3-BEF mechanisms directly related to MF, DME and AA in the present work, but the application of this mechanism to the full chemical network might impact ketene production. An immediate question would be whether the ketene production might also preclude the production of acetaldehyde, as the CH 2 used to produce ketene would otherwise be required to produce the CH 3 needed for AA production. Furthermore, one could argue that the production of CH 3 in the presence of CO, as needed for our 3-BEF route to AA, would first require contiguous CH 2 and CO, and that this CH 2 would also have to be formed in the presence of CO, making ketene the preferred product instead of AA. Such a view implicitly assumes that there is no reaction-induced diffusion occurring, when in fact, due to the large exothermicities of the reactions in question, it is highly likely that there is some form of diffusion following each reaction. As mentioned above, this diffusion does not make either the 3-B or the 3-BEF treatments any less accurate, as we do not explicitly rule out such occurrences. Rather, it might be better to assume that, on a surface at least, such reaction-induced diffusion is the rule, rather than the exception, and thus that there is no expectation nor requirement that any newly-formed reaction product considered in the 3-B mechanisms necessarily meets its own reaction partner within its immediate surroundings. In that case, any conditionality in the production of one species from another, based on location, would be lost. On the topic of ketene in particular, it is also possible that it may be hydrogenated to acetaldehyde anyway; the reaction H + CH 2 CO → CH 3 CO is assumed in our network to have a barrier of 1320 K (Senosiain et al. 2006), which is lower than, for example, the typically-assumed barrier to hydrogenation of CO. In future work, we will apply the 3-BEF process to the entire surface network, allowing the relationship between acetaldehyde and ketene to be explored more deeply.

Fedoseev et al. (2015) conducted experiments investigating the production of COMs, specifically glycolaldehyde and ethylene glycol, through non-diffusive surface reactions of HCO radicals produced through H and CO co-deposition. Those experiments did not detect any methyl formate production, but follow-up work by Chuang et al. (2016), who co-deposited various combinations of CO, H 2 CO, CH 3 OH and H, demonstrated methyl formate production in the setups that involved direct deposition of formaldehyde (H 2 CO). Thus, under the conditions of their experiments, the HCO and CH 3 O radicals required for the radical-radical reactions to produce CH 3 OCHO derived mainly or uniquely from H 2 CO reactions with atomic H (either H addition or H-abstraction by H atoms). In the case of CO and H deposition alone, they suggested that the reaction between two HCO radicals dominates, producing glyoxal (HCOCHO) that can be further hydrogenated to glycolaldehyde (CH 2 (OH)CHO) and ethylene glycol (HOCH 2 CH 2 OH).
The reaction network we use here includes HCO-HCO reaction routes, with one branch producing glyoxal, and an equal branch producing CO and H 2 CO through a barrierless H-abstraction process. Our network does not include the further hydrogenation of glyoxal, but the removal of HCO radicals should be well enough treated.
In our astrochemical models (in which the overall system is much more complicated than the laboratory setups, with many more species and processes), grain-surface formaldehyde and methanol are produced through CO hydrogenation by H atoms; in the experiments, the CO + H system does not produce enough formaldehyde to allow substantial production of CH 3 O and thence methyl formate. The outcomes of the models, which are run over astronomical timescales, should not therefore be expected to correspond directly with the experimental outcomes. However, the mechanism of non-diffusive radical chemistry that seems to produce methyl formate in the experiments (via HCO + CH 3 O) is present in our models (the basic three-body mechanism).
The key comparison with the experiments concerns our implementation of the excited three-body formation mechanism for the reaction of CH 3 O with CO (the other 3-BEF mechanisms tested in this work involve CH 3 and are therefore not tested in the laboratory experiments). The very low efficiency that we require (0.1%) for immediate reaction is likely to be too small to have an important effect in a laboratory regime where the regular three-body process is presumably efficient (unlike in our prestellar core models). In this sense, it seems superficially consistent with the experiments, since none of the experimental setups in which methyl formate was absent are ones in which our excited-formation mechanism would predict it to be highly abundant. Indeed, our mechanism should only become important if other means of production (such as the regular three-body process) are already weak. Thus, it may be difficult to test the excited-formation mechanism for methyl formate through experimental means.
Another possibility exists for the production of all three COMs considered here: that is, that they are produced in the ice mantles, through UV processing (or some other means). The ice mantle material would then have to be removed by some violent process such as sputtering by cosmic rays. (Such a process would also result in some degree of complex molecule production, e.g. Shingledecker et al. 2018). However, it is unclear whether such mechanisms would be capable of maintaining gas-phase abundances of COMs at the required levels.
A separate point of discussion concerns the experimental evidence surrounding the interaction of H atoms specifically with solid-phase acetaldehyde. Bisschop et al. (2007) studied the deposition of H onto a pre-deposited surface of pure CH 3 CHO at temperatures ranging from 12.4 to 19.3 K. They found reaction products C 2 H 5 OH, H 2 CO, CH 3 OH, and CH 4 , which they posited to be formed either through repetitive hydrogenation (ethanol), or through fragmentation into a stable molecule and a radical, which may be further hydrogenated to a stable species. In our model, it is assumed that H atoms interact with CH 3 CHO by abstracting another hydrogen atom from the aldehyde end of the molecule. If the alternative mechanisms measured in the laboratory should compete strongly with this process, then the mechanism described in Section 4.1, in which H-abstraction and re-hydrogenation work together to enhance reactive desorption, could become less effective, and the acetaldehyde produced on the surface could be converted to entirely different species.
The Bisschop et al. (2007) data suggest production yields for ethanol of ∼20%, with other products also on the order of 10%. However, these yields are provided as a fraction of the acetaldehyde initially available in the surface layer of the ice; they do not indicate yields per hydrogen atom or per H-CH 3 CHO interaction. Furthermore, the experiments would not appear to be sensitive to processes in which acetaldehyde is converted to CH 3 CO and then re-hydrogenated to CH 3 CHO. As a result, it is not possible to determine how strongly H-abstraction may dominate over hydrogenation or fragmentation, or vice versa. However, each of these processes would involve an activation energy barrier, and it is found that abstraction from aldehyde groups occurs more readily than H-addition. Hippler & Viskolcz (2002) calculated barriers to such processes, including the H + CH 3 CHO → C 2 H 5 O addition reaction, finding an activation energy of 22.4 kJ mol −1 (2690 K), versus the literature value for abstraction of 17.6 kJ mol −1 (2120 K; Warnatz 1984). Assuming the simple rectangular-barrier tunneling treatment used in our models, and assuming a 1 Å barrier width, the abstraction process should go around 350 times faster than hydrogenation. The preferred gas-phase value in the more recent review by Curran (2006) suggests an even higher barrier to hydrogenation of 26.8 kJ mol −1 (3220 K), which would provide an abstraction/hydrogenation ratio closer to 10 5 . Fragmentation is more sparsely studied in the literature, but based on the Bisschop et al. study we presume those mechanisms to occur at similar rates to the hydrogenation mechanism. Since chemical desorption in our model is calculated to proceed in a little less than 1% of cases, we would not expect our results for acetaldehyde to be strongly affected by the inclusion of alternative reaction branches, either on the grains or in the gas phase.
O 2 production
Aside from its effect on COM abundances in the ice mantles, the PD-induced reaction mechanism also produces a significant increase in O 2 ice abundance; this effect is noteworthy, as it may provide a clue to the origin of O 2 in comets. Gas-phase O 2 was recently observed toward comet 67P/C-G, as part of the Rosetta mission (Bieler et al. 2015). It was found that O 2 achieves a fractional abundance as high as ∼4% with respect to water, indicating this compound to be one of the most dominant species in cometary material. While the origin of molecular oxygen is still controversial because of the difficulty of observing it, the strong correlation with H 2 O implies a connection to dust-grain ice chemistry rather than gas-phase chemistry in the coma.
Many studies directly or indirectly suggest a primordial nature for O 2 in comets. For example, Rubin et al. (2015) confirmed the presence of O 2 in the Oort cloud comet 1P/Halley at a level similar to that seen in the Jupiter-family comet 67P/C-G. This suggests that O 2 may be common, regardless of dynamical history, indicating a primordial origin. Mousis et al. (2016) proposed that the radiolysis of water-containing interstellar ices in low-density environments such as molecular clouds could produce O 2 in high abundance. Meanwhile, Taquet et al. (2016) conducted a range of astrochemical models based on diffusive grain-surface chemistry, to investigate three possible origins for O 2 in comets: (i) dark cloud chemistry; (ii) formation in protostellar disks; and (iii) luminosity outbursts in disks. They concluded that dark clouds are the most plausible regime in which to form the O 2 , through diffusive O-atom addition on grain/ice surfaces at temperatures around 20 K. However, as they noted, the temperature required in their models is rather higher than typical dark cloud values. Garrod (2019) suggested that the upper layers of cold-storage comets could be processed to increase their O 2 content, as the result of photolysis and radiolysis by the galactic UV and cosmic ray fields. The chemical models presented by Garrod included the same PD-induced mechanism used in the present study.
Here, we propose non-diffusive, photodissociation-induced processing of interstellar ice mantles as the possible origin of the abundant O 2 in comets; this avoids the requirement for higher-temperature diffusive chemistry on dust-grain surfaces. Fig. 15 shows the radial distribution of the O 2 ice (solid lines) in our model of L1544 with the PD-induced process activated. The fractional abundance of O 2 ice in this model is as high as 0.6% with respect to water ice toward the core center, which is 1-2 orders of magnitude higher than in the other models. The abundance of O 2 ice has been suggested to be low in prestellar core material, because O 2 is efficiently hydrogenated to form H 2 O and H 2 O 2 ices at low temperature (Ioppolo et al. 2008). However, the PD-induced process induces the association of O-atoms in the icy mantle (sourced from water molecules), resulting in the production of large amounts of O 2 . The other models (without PD-induced reactions) accumulate O 2 ice on the grain surface over time rather than directly synthesizing it within the ice mantles. If interstellar O 2 ice formed via the PD-induced process were locked in until such ices were incorporated into comets, it could remain there to contribute to the O 2 population observed in 67P/C-G.
It should be noted that H 2 O 2 ice in the PD-induced model is more abundant than O 2 ice (dashed line; Fig. 15), in contrast to observations. The H 2 O 2 /O 2 ratio found in 67P/C-G by ROSINA was as low as (6.0 ± 0.7) × 10 −3 (Bieler et al. 2015). The same discrepancy was found by Garrod (2019), and is related to the relative formation rates of other compounds in the ice as well as O 2 . Photodissociation of water leads first to OH; two such radicals may recombine through the PD-induced processes to form H 2 O 2 in high abundance. The present chemical network may be missing some chemical reactions related to the destruction of H 2 O 2 , or may overestimate the efficiency of forming H 2 O 2 via the PD-induced process. The efficiency of H-atom diffusion within the bulk ice to abstract another H atom from H 2 O 2 may also be an important factor; Fig. 15 shows that the H 2 O 2 abundance strongly dominates over O 2 in the coldest (inner) regions of the core. The O 2 and H 2 O 2 abundances collectively reach a value on the order of 1% in total at the core center. Inclusion of O 3 and O 2 H boosts this total further. The photodissociation that leads to the production of O 2 , H 2 O 2 and other species from H 2 O in these models is the result of the secondary, cosmic ray-induced UV field. Thus, the abundances of each would likely be enhanced by the adoption of a somewhat larger cosmic-ray ionization rate. A longer evolutionary timescale, or further processing of the ices during the later disk stage, could also lead to enhancement. Radiolysis, i.e. direct cosmic-ray impingement on the ice mantles, could also act in concert with the photodissociation effect.
Further modeling to reproduce the amount of O 2 and H 2 O 2 seen in 67P/C-G is outside the scope of this work, but may be a fruitful means to elucidate the origins of cometary O 2 . The vast majority of this O 2 could indeed be interstellar, produced by photolysis.
CONCLUSIONS
Here we have introduced new rate formulations that allow astrochemical models to simulate a number of new, nondiffusive chemical mechanisms on interstellar dust-grain surfaces and within bulk ices. These formulations are fully compatible with existing model treatments for diffusive chemistry. Some of the non-diffusive mechanisms considered here, such as the Eley-Rideal process and three-body reactions, are automatically taken into account in microscopic Monte Carlo kinetics models of the same systems, but must be explicitly added in rate-based treatments such as the one used here. Others, such as the three-body excited-formation mechanism and photodissociation-induced reactions, are entirely new.
Crucially, it is shown that non-diffusive processes can affect the bulk-ice, the ice-surface, and, indirectly, the gas-phase composition, through a cyclic H-abstraction and addition process that amplifies the efficiency of chemical desorption. Eley-Rideal reaction processes appear not to have a strong effect in our implementation.
To place the new non-diffusive mechanisms into a context in which they could be directly tested, a physical model approximating the prestellar core L1544 was adopted, with a focus on reproducing the observed gas-phase abundances of the complex organic molecules (COMs) acetaldehyde, dimethyl ether, and methyl formate. Reactions involving excited radicals (recently produced by other surface or bulk reactions) appear to be influential in producing COMs, although their influence is likely limited to reactions involving the highly-abundant CO molecule on the grain surfaces, and/or the excited radical CH 3 , whose production through the addition of H to CH 2 is especially exothermic. In the three-body excited formation model, the efficiency of the 3-BEF mechanism for methyl formate production must indeed be optimized, as this mechanism is otherwise more efficient than is required to agree with observations. Although the gas-phase dimethyl ether abundance in particular remains difficult to reproduce, the models presented here tested only three plausible excited-formation mechanisms; alternative surface-formation processes for this molecule could be active.
Further application of the new mechanisms and formulations presented here into astrochemical models of the later stages of star formation, such as the hot core/corino stage, is currently underway.
The main conclusions of our study are enumerated below:

1. Non-diffusive reactions between newly-formed radicals and nearby species on dust grains, which we label three-body reactions, appear to influence strongly the production of complex organics and other species on interstellar dust grains and in their ice mantles, producing abundances similar to those detected in the gas phase toward hot star-forming sources.
2. We propose a new surface/bulk mechanism in which the energy of formation of a newly-formed radical allows it to overcome an activation energy barrier against reaction with a nearby, stable species (the three-body excited-formation process, 3-BEF). For key molecules/processes, especially reactions between excited radicals and abundant grain-surface CO, this mechanism appears strongly to influence production, beyond the effects of the regular three-body mechanism. Except for these key processes, the 3-BEF process is expected to be inefficient in most cases.
3. Grain-surface molecule production is enhanced by the three-body excited-formation mechanism sufficiently to explain observed gas-phase abundances of methyl formate and acetaldehyde in prestellar core L1544. Chemical desorption allows these grain surface-formed molecules to enter the gas phase.
4. Dimethyl ether is still under-produced in the model, when compared with observations of L1544. Other plausible grain-surface production mechanisms, such as reactions between CH 2 and methanol, remain to be tested in the models.
5. Repetitive H-abstraction by H atoms from COMs on grain surfaces, followed by recombination with another H atom and the possible desorption of the product into the gas phase, gives chemical desorption a greater influence than its basic efficiency of around 1% would otherwise suggest. This cyclic amplification effect brings the required surface-formed COMs into the gas phase effectively enough to reproduce abundances as described above. The effect should be especially important in regions where gas-phase H abundances remain relatively high.
6. Specific to the L1544 models, the position of the methanol peak is located further outward than in the observations, but it is still associated with the region where CO starts to freeze out significantly. The off-center peaks in COM column densities toward L1544 are most likely related to the interplay between rising COM fractional abundances at larger radii and rising gas density at smaller radii.
7. The surface-diffusion rate of atomic H is important to radical lifetimes, which affects the efficiency of non-diffusive mechanisms that rely on reactive radicals being available on the grain surfaces. Thus the choice of diffusion barrier for H has a strong effect on the production of COMs in chemical models that consider non-diffusive chemistry.
8. Photodissociation-induced non-diffusive chemistry within the bulk ices produces abundances of O 2 and related species on the order of 1% of water. This suggests that interstellar (and perhaps later) UV-processing of grain-surface ices may be sufficient to reproduce observed cometary values, regardless of the precise temperature of the dust grains.
9. The broader inclusion of various non-diffusive grain-surface/ice chemical reactions in interstellar chemical models now seems imperative.
We thank the anonymous referees for helpful comments and suggestions. We are grateful to E. Herbst and P. Caselli for useful discussions. This work was partially funded by a grant from the National Science Foundation (Grant number AST 19-06489).
[Footnotes to a table not reproduced here: (b) generic rate coefficients are assumed as per Garrod (2013); (c) the cosmic-ray ionization rate, ζ 0 , is set to 1.3 × 10 −17 s −1 , with generic prefactors assumed based on like processes; (d) generic rate coefficients are based on like processes; (e) the same processes are assumed for grain-surface/ice species with a factor 3 smaller rate.]
Managing Sustainable Urban Tourism Development: The Case of Ljubljana
The interest in sustainable urban development and sustainable tourism development is growing. Yet, to our knowledge, only a limited number of studies combining those two areas exist, and a holistic model for sustainable urban tourism development has not been introduced. Our study aims to integrate the scientific areas of sustainable urban development and sustainable tourism development, and to integrate and advance the existing models for sustainable urban tourism development. As the method for analyzing the results of 322 interviews, we used content analysis. Based on the analyzed data, a conceptual sustainable urban tourism model is proposed and applied to the case of Ljubljana. The results show that Ljubljana needs to place more emphasis on sustainable urban tourism development by considering different dimensions of sustainability, stakeholders, and types of tourism. Specifically, respondents took into consideration social, environmental, as well as economic sustainability. The most often mentioned stakeholders were local communities and companies; meanwhile, in their opinion, Ljubljana has the greatest potential in cultural, green, and sports tourism. Finally, the study integrates the sustainable urban development and sustainable tourism development scientific areas by providing a conceptual model and taking into consideration the need for proper management, ranging from planning to education and policy-making.
Introduction
Global warming, climate change, pollution, emissions, food shortages, depletion of natural resources, desertification, and other threats [1] are some of the key challenges of contemporary societies. In the past 70 years, tourism has grown extensively, owing to technological innovations in travel and the transportation industry, economic growth, and a rise in disposable income [2]. The focus of sustainable tourism has been on the natural environment, rural settings, and protected land, even though the majority of the world's population lives in urban areas and cities are the most visited destinations [3]. Sustainable urban tourism development (SUTD) is a crucial subject due to the trend of urbanization and the rise in the world population's financial and technological ability to experience life in other parts of the world. Ljubljana was among the capitals that invested a great deal in an integrated marketing mix to attract tourists; however, now that it is a "hot" and popular tourist attraction, concern has also arisen in the city government as to how to manage urban tourism development sustainably. Other global cities are already facing the negative effects of mass tourism (e.g., Venice, Dubrovnik, Barcelona), and cities need to develop in line with the sustainable development goals (SDG). The United Nations presented the Sustainable Development Goals (SDG) [4], which represent a global strategy for achieving a resilient future for all stakeholders. The 11th SDG deals with a specific urban goal of making cities and human settlements inclusive, safe, resilient, and sustainable, while
Sustainable Urban Development
The term sustainability was first used as Nachhaltigkeit (the German word for sustainability) in 1713 [17,18], and it meant never using more than what the forest yields in new growth [19]. In 1987, the European Commission released the Our Common Future report. At the core of the report was the principle of "sustainable development," which is also gaining an important role in the educational agenda [20]. The EU Commission's acceptance of sustainable development as a guideline offered credibility to an old concept that others had worked on in the past [8]. The EU Commission defined sustainable development as "meeting the needs of the present without compromising the ability of future generations to meet their own needs" [8,9]. Linear and unidimensional thinking processes are not suitable when defining the future sustainable plan, as decision-makers need to take into consideration different stakeholders as well as understand sustainability through networks and organic alliances [21].
Management for achieving sustainable, low-carbon adaptive reuse needs to be designed more holistically, integrating economic, social, environmental, urban, and political policies [22]. City sustainability goes beyond the quality of life, as cities need to find the balance between being part of a competitive global city community and meeting the needs of the daily requirements of their inhabitants [23]. A sustainable city responds to social, environmental, political, cultural objectives, economic and physical requirements [23].
A total of 1410 articles on sustainable urban development have been published [20]. The sustainable urban development construct has been gaining in popularity among researchers since 2011 [24].
In addition, the sustainable urban development construct was most often studied within the environmental sciences ecology, followed by urban studies, science technology on other topics, engineering, and public administration, according to the Web of Science Core Citation results [24].
Strategical environmental assessment [25] measures the environmental impact of plans, programs, and policies. Cultural heritage, energy consumption, carbon emissions, and waste management are part of urban sustainability [26]. There are four models of the dynamics among cities and their environmental hinterlands, such as redesigning the city, self-reliance, external dependency, and equitable balance [27]. Adaptive reuse of buildings is a mode of sustainable urban regeneration, as it extends the building's functionality and avoids waste, encourages reuse of energy, and also provides social and economic benefits to the community [22]. Also, artists, as a subgroup, make substantial, valuable contributions to the functioning of cities [28].
Effective urban development tendencies are influenced by joint actions of state authority and the city [29]. People within Central and Eastern Europe are more influenced by the pride and self-esteem encouragement connected with empowerment and the perceptions of community cohesion than the economic promises of tourism [30]. Favorable starting conditions for green mobility are successful financial and institutional implementation strategies [31]. Better marketing and closer cooperation among transport providers and planners with tourism attractions and accommodation providers were identified [31]. Positive festival impacts are influenced by emotional solidarity and community attachment [32].
Sustainable Tourism
Sustainable tourism is a way of tourism development or tourist activity that complements the environment, ensures long-term protection of resources, and ensures social and economic acceptance and fairness [12]. The definition of tourism and its several types needs to be focused in order to enable standards and meet the interests of tourists [33]. Sustainable tourism is difficult to define and implement [34], as it emphasizes the activities of local stakeholders [35]. Research on sustainable tourism gained momentum in 2008 and has been a popular research construct since 2015 [36]. Among a range of research areas, the sustainable tourism construct is by far the most studied within the social sciences, followed by environmental sciences ecology, science technology other topics, and business economics [36].
Sustainable tourism research demonstrates the use of appropriate scientific methodology, most notably the interpretive paradigm [12]. A deficiency of research on circular tourism [37] was found.
Urban tourism [38] was researched in the 1960s mainly by geographers and gained renewed attention in the 1990s [39]. Urban tourism has the potential to drive economic growth, as well as promote inclusive development to the region [40]. The following themes are emphasized within urban tourism research: sustainability, planning, management, impacts, cultural agendas, visitor perception and satisfaction, urban regeneration, models, typologies of tourist cities, city case studies, social theory, transport and infrastructure, marketing, and place imagery [41]. Spatial distribution of users, i.e., their origin neighborhoods, affects satisfaction with the services, demonstrating social inequity patterns [42]. There is a discrepancy among the views of industry and academia in terms of who sets priorities for urban tourism. Industry places emphasis on issues that serve their direct commercial interests, rather than reflecting the broader interests of other stakeholders, such as the environment [43].
In multifunctional entities, tourists' motivation for going can differ greatly. The construct of sustainable tourism has multiple aspects and different researchers emphasize different facets. Among others, the latest research is focused on overtourism [44], by refocusing on the well-known problem of managing negative tourism impacts on core challenges [45] between the quality of life for residents and the city development to have a positive impact on the tourism industry, sharing economy as a new business model [46], existing difficulties in defining solutions to activation and diffusion of sustainable urban tourism practices [2], or preparation of destinations to be smart in line with co-participative tourism [47].
Despite the undeniable importance of the sustainable approach to tourism, a lack of investment in sustainable practice of tourism is identified [48]. Nevertheless, also the size of the establishment affects commitment to sustainable development as large-and medium-sized hotels show higher commitment [49]. However, tourism sector stakeholders are aware of the vital role sustainability has [48,50]. An assessment system for sustainable tourism was developed facilitating the ranking of tourism regions according to their sustainability degree [50]. In addition, it was also researched how sustainable tourism can be enhanced by mobile connectivity through new space-time practices to create social groups who are interested in sharing [51]. The importance of the decision-making process in the sustainable management of tourism development and utilization of the typology of community decision-making influence factors that need to be considered for effective sustainable tourism implementation [52]. Sustainable tourism was bridged with psychological factors, focusing on emotions [53]. Studies that measure visitors' behaviors longer than a year post-trip are missing [54]. Conservation psychology and environmental education claim that extended and repeated experiences are key for sustaining the initial motivation to develop new long-term behaviors [54].
SUTD has different activities and services than other forms of tourism [2]. SUTD demands a complex system of integrated elements, such as attractions, accommodation, transportation, retail, travel trade, food, and beverages [55]. Part of the city tourism industry needs to be a destination management organization, responsible for development and planning, in order to enable SUTD [13]. Urban and environmental problems have roots in ecology and urban contexts, and therefore, city governors need to plan, design, and manage urban settlements in a way that allows inhabitants to preserve their quality of life [56]. Annual budgets are needed to perform different activities in urban areas to protect the environment and tourism management, ranging from employing professionals to do maintenance and cleaning, surveillance, visitor assistance, and information, limitation of vehicle access, as well as investments in infrastructure and equipment for public use and conservation of local wealth and landmarks [57]. Recent study shows that city governors should also invest in the Big Data and eliminate common errors in big data databases (missing data, duplicate entries, errors generated by the data custodians and others), which can then help cities to become smart, utilize resources more efficiently and improve inhabitants' quality of life [58]. Quality of life affects urban growth and competitiveness [59]. Culture is an essential element of urban transformation due to its role in promoting consumption and territorial marking in the context of global tourism and economic flows [60]. The right to the city influences urban policies [61] and indigenous culture should not be lost due to massive tourism [62], as it is an important element of the social capital of an urban destination [63]. Massive tourism also produces pollution [64] that affects the quality of life and tourist carrying capacity needs to be included in planning for sustainable tourism [65]. Cities should be inclusive, to avoid urban poverty and inequality [66]. When studying urban ecosystems, social and natural sciences need to be connected. Ecology needs to be integrated with urban planning and design to address urban resilience capacity [67].
Conceptual Model Development
The conceptual model is developed, as presented in Figure 1. In the literature, partial views are existing and we have advanced the argumentation further. Participative management perspective towards sustainable urban tourism development is needed as a lack of triple bottom-line appreciation among tourism industry stakeholders was identified [13]. We formed the conceptual model on the basis of the existing literature review. The conceptual model combines the proposed layers, as they have been identified as missing in the existing literature and supported by research findings. Each layer presents a different perspective that needs to be taken into consideration when developing sustainable urban tourism. It combines different perspectives on sustainable urban tourism development by combining different layers, namely: dimensions of sustainability (environmental, economic, and social) [68]; different stakeholders: individuals, companies, local community, and country/regional level; relevant types of tourism that can be developed in selected urban environments based on resources available; as, for example, family, green, shopping, gastro, cultural, business/industrial, sports, party, congress, educational, political, medical, nautical, intergenerational, wellness, or religious.
All those levels, namely types of tourism, stakeholders, and sustainability dimensions, should be combined in all available combinations and put under consideration when deciding on future urban sustainable tourism development. Every type of tourism addresses a specific segment of tourists, and tourism types also share some key stakeholders, such as the regional level, the local community, and companies; context also matters, e.g., if green tourism is based on the value of environmental sustainability, this means that city governors need to invest more managerial effort into also developing economic and social sustainability. The selected areas should be appropriately managed [69]; namely, planned, organized, led, as well as controlled. The triple bottom line, including environmental, social, and economic indicators, is recommended within SUTD but rarely fully implemented [13], which is why we added a layer of the management process in our model to emphasize the connection between sustainability dimensions and management functions. The participative model of SUTD is based on sustainability networks, which represent the interactions and management of different stakeholders' interests, goals, and powers [2,13].
Method
The research question of this study is "How to strategically manage sustainable urban tourism development?" In order to be able to answer our research question and apply the proposed conceptual model, we selected the qualitative approach, which is advised to research an underdeveloped field to gain an in-depth understanding of the phenomena under study. Specifically, the case study [70][71][72] of Ljubljana, the capital of Slovenia, is applied. When studying sustainability, with the case study approach, research is frequently focused on analyzing an example [12]. In addition, the studies that apply qualitative methods prevail [12].
The chosen approach to researching sustainable urban tourism development is content analysis and quantification of qualitative data, to really understand how the informants (tourists and inhabitants) perceive Ljubljana and its potential for further sustainable urban tourism development. In order to grasp a deeper understanding, their narratives were also collected and analyzed. We quantified the qualitative data and also provided in vivo proof citations. Informants were asked how they view the development of tourism in the capital of Slovenia by providing an explanation for their choices. The Stanford social innovation questionnaire was applied by the research team of 10, asking tourists, visitors, and inhabitants of Ljubljana the following questions regarding the researched topic of how to design tourism development with the support of visitors, tourists, and inhabitants who share it: Narratives are formed in the sociocultural setting in which the narrator functions; that is why the purpose of the narrative analysis is to understand both the narrative as well as the sociocultural context [21]. Joint content analysis of the answers by the authors was chosen as the most suitable method for analyzing the responses, aiming to find the preferred sustainable urban tourism development.
The content analysis aims to define concepts and theoretical formulations and is, as such, an element of the interpretation [73]. In total, 322 tourists and inhabitants of Ljubljana as informants (thereof 44.72% males) were interviewed in the Ljubljana city area, who gave answers to an open-ended social innovation questionnaire. Informants' perceptions of Ljubljana as sustainable urban tourism destination development were analyzed with the descriptive method, which offers a rich presentation comprehension of sustainable urban tourism development. Proof citations [74] of the phenomenon demonstrate the depth of their thinking. We have structured our analysis according to gender, habitation, age, and three content themes emerging from the data. Content themes were the following: (a) dimensions of sustainability [17], namely, what is the main focus of the participant's perceived sustainable tourism needs in terms of sustainability conceptualization, that is, economic, social, and/or environmental; (b) researchers identified key stakeholders responsible for sustainable tourism implementation in the capital, that is, individual, company, local, and country/regional level; and, (c) we have identified different types of tourism which have been mentioned by the informants as desired types of sustainable urban tourism.
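To make the quantification step concrete, the following is a minimal sketch (not the authors' actual instrument or raw data) of how manually coded open-ended answers can be turned into the percentage shares reported below; the three example records and the code labels are purely illustrative assumptions.

```python
# Minimal sketch of quantifying coded qualitative answers into theme shares.
# The example records below are invented for illustration only.
from collections import Counter

coded_answers = [  # each informant's answer, manually coded into the model's layers
    {"sustainability": {"social", "environmental"}, "stakeholders": {"local community"},
     "tourism_types": {"cultural", "green"}},
    {"sustainability": {"economic", "social"}, "stakeholders": {"companies", "local community"},
     "tourism_types": {"sports"}},
    {"sustainability": {"social"}, "stakeholders": {"local community"},
     "tourism_types": {"cultural", "gastro"}},
]

def theme_shares(answers, layer):
    """Percentage of informants mentioning each code within one layer of the model."""
    counts = Counter(code for a in answers for code in a[layer])
    n = len(answers)
    return {code: round(100 * k / n, 2) for code, k in counts.most_common()}

for layer in ("sustainability", "stakeholders", "tourism_types"):
    print(layer, theme_shares(coded_answers, layer))
```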
Context of the Conceptual Model Application: The Case of Ljubljana
Ljubljana has made the Global TOP 100 Sustainable Destinations list for the fifth time in 2019, maintaining its position as one of the most sustainable destinations [75], and held the title of European Green Capital 2016. It was chosen as a case study in order to advance the management of Ljubljana's tourism services and products, building on previous successful accomplishments in the field of sustainable tourism recognized by international experts, awards, and prizes. Management, digitalization, and the integration of smart solutions in the field of managing sustainable tourism in the city were highlighted as a challenge by the vice mayor of Ljubljana, Mr. Crnek [76]. The study was conducted with the aim to discover the potential for further advancement of SUTD. Ljubljana has in recent years developed into an urban tourism destination and the city is changing considerably; it is therefore a typical example. In order to preserve the city, it should encourage sustainable tourism development. We asked respondents how to do it.
The capital of Slovenia, Ljubljana, is the administrative, cultural, political, and economic center of Slovenia and has 280,000 inhabitants. Ljubljana has been part of the Green Scheme of Slovenian Tourism since 2015 [77]. Three-quarters of the entire territory of Ljubljana are green areas, including contiguous aquatic, forest, and agricultural fields [78]. Nevertheless, Ljubljana has a rich historical heritage, ranging back to the Roman settlement Emona, and Medieval times (which are still evident), and it is also very close to the Alps, Mediterranean Sea, as well as other important sightseeing destinations, such as Postojna cave, Bled, Venice, Milan, Vienna, and Salzburg, and therefore, it is developing its tourism potential and it is vital for stakeholders that this development path is sustainable.
The Case of Ljubljana-Data Analysis
As presented in Table 1, the sample consisted of 322 informants, thereof 144 (44.72%) males. In total, 146 (45.34%) respondents had Ljubljana residency. According to the analysis, economic sustainability was identified in 27.64%, social in 86.02%, and environmental in 60.87% of total responses. Table 2 presents the results regarding the impact on different stakeholders. The majority of informants, namely, 95.03%, mentioned impact on the local community, followed by impact on companies (66.46%), country/regional level (12.11%), and only 2.48% targeted a specific individual. As presented in Figure 2 and Table 3, the respondents believe that Ljubljana should promote cultural (67.08%), green (48.45%), and sports (37.89%) types of tourism the most, in line with sustainable urban tourism development. In addition, informants also encourage the development of gastro tourism (23.29%), party tourism (16.46%), senior/intergenerational tourism (15.84%), educational tourism (12.42%), shopping tourism (12.11%), and nautical tourism (10.87%). Less than 10% of respondents recommended family tourism (7.14%), business/industrial tourism (4.66%), medical and congress tourism (both 1.55%), wellness tourism (0.62%), and political tourism (0.31%). To get an in-depth insight emphasizing the complexity of the context-rich data, the selected proof quotations are presented in Table 4.

Table 4. Selected proof quotations (Source: own).

" . . . due to the disconnection of people, Ljubljana remains closed, Ljubljana lacks variegation and diversity, we are not sufficiently aware of our culture and cultural heritage. People need to be addressed with stories, we must emphasize our identity, crafts, history, and culture, it looks like we overemphasize sport and underemphasize culture. We have good projects, but we need to advertise them more [ . . . ] For example, Ljubljana could gain and earn a lot from foreign guides, as for example, if you are a local guide, you have to have a license to guide around the city, meanwhile foreign guides can walk around the city worry-free, without licenses, and that should be changed, we could earn extra money here."

Quotation 4, Respondent 24: " . . . inaccessibility of the parts of the city for disabled, we need more drinking water fountains, more public toilets accessible also for disabled people, we need to emphasize and promote local food, open a restaurant with local food and drinks at Čopova street."

Quotation 5, Respondent 36: " . . . more sports events for all generations, introduce the Gladiator event in Ljubljana, to revive the Koseško pond, to be able to rent pedal boats for a ride at Zbilje lake, to organize more outdoor cinemas (in Tivoli, at the Congress square, etc.), better promotion of Ljubljana, greater connection of the residents by for example exchanging items or garage sales."

Quotation 6, Respondent 38: " . . . adapt Ljubljana for short city breaks, by offering more shopping centers, more active experiences in Ljubljana and more romantic experiences in Ljubljana. The need to emphasize architecture with the focus on Plečnik, more fun events, more experiences of Ljubljana by night."

Quotation 7, Respondent 41: " . . . Improved marketing with the emphasis on the analysis of tourists' expectations, more youth benefits in terms of additional offers and guides around Slovenia divided by their age to have a peer as a guide."

Quotation 8, Respondent 123: " . . . more green areas, elimination of traffic jams, better traffic connections with Ljubljana, more information on events in Ljubljana, a place where I will feel a sense of belonging to Ljubljana."

Quotation 9, Respondent 127: " . . . higher number of festivals, a product through which a person could experience Ljubljana from the past, more parking spots near the center."

Quotation 10, Respondent 178: " . . . higher number of concerts in the center, better signage of the sights, the route of Roman Ljubljana, some great attraction"

Quotation 11, Respondent: " . . . increased supply of vegan food in a trendy restaurant, increased supply of local fruits and vegetables at a more affordable price, more fruit trees, not only ornamental ones but also the edible ones. Multiple urban installations by the principle of Prostorož or Light Guerrilla. I want a playground with endless trampolines."
Discussion
Our paper further integrates two scientific fields, namely, sustainable urban development as well as sustainable tourism development. Till today, four scientific articles have been published on sustainable urban tourism development according to the Web of Knowledge Core Citations database, namely [13][14][15][16], which indicate that the scientific field of sustainable urban tourism development is emerging, yet it needs further advancements. The proposed conceptual model contributes to the understanding of sustainable urban tourism development by (a) integrating existing unidimensional models of sustainable urban tourism development [13,14] into a single multidimensional model and (b) expanding the model to include different types of tourism as suggested by sustainable tourism development literature and incorporating the management process dimensions to reflect the fact that sustainable urban tourism is an ecosystem that needs to be properly managed. The resulting expanded model of sustainable urban tourism development encompasses four levels on which sustainable tourism occurs: dimensions of sustainability, stakeholders of sustainable urban tourism development, type of tourism to be promoted, as well as the management process. The overall validity of the model is illustrated by applying it to the case of Ljubljana.
The presented model should be understood as a 3D visualization and, at the same time, serve as a building block for further adjustment of the model by adding: (a) additional dimensions of sustainability, e.g., technological [79]; additional stakeholders, e.g., key public- and private-sector stakeholders [31]; additional types of tourism, e.g., adventure tourism [80]; or (b) additional layers relevant for sustainable urban tourism development, e.g., possible threats, as, for example, overtourism [44], water pollution [81], and others; or existing conditions, e.g., developed infrastructure, existing legislation, financial viability, and others, which are all possible avenues for further research; or (c) by adapting the proposed model for sustainable non-urban tourism development, e.g., rural [82] or mountain tourism [83]. Respondents grasped the sustainability concept in its integrity: 43 (13.35%) mentioned in their conceptualization all three studied sustainability elements (economic, social, environmental). A test of the proposed model on the case of Ljubljana, based on 322 informants, shows that Ljubljana should develop cultural, green, and sports types of tourism the most, next to the gastro, party, and senior/intergenerational tourism, in line with UN sustainable global goals and practices. Ljubljana offers, for now, yet-unfulfilled potentials also in experiential nautical tourism [84].
The management process needs to be professionally incorporated into the daily functioning of SUTD. It is necessary to involve and mobilize citizens and expand the specter of key stakeholders in sustainable urban tourism by capturing and better communicating the impact of that kind of tourism. Policy-makers need to show what is the impact of sustainable tourism in urban development and if other cities are less developed due to traditional tourism focus. Cities need to indicate and share with key stakeholders a clear direction of sustainable urban development which has mission-driven financing and incorporates bottom-up advancements of inhabitants and tourists.
Resilience measures need to be incorporated into sustainable tourism and future planning [85]. The paper discusses one option of managing SUTD by incorporating and developing different specters of sustainable tourism types by integrating different sustainability aspects and stakeholders. Today, tourists do not want to feel and be seen as tourists, rather, they want the feeling of belonging and inhabiting the place they visit or stay at in order to get an authentic experience in line with circular economy [86], new platforms Airbnb, Couchsurfing, global sustainable tourism trends [87].
A sharp distinction exists between traditionalists and modernists in the policy-making arena in the major cities: those who agitate against tall buildings, and those who promote them under the banner of urban sustainability should be avoided [88]. Industry and academic points of view need to be considered in the formulation of any urban tourism research agenda. Here, education and training are strategic factors for SUTD [89]. Also, corporate social sustainability needs to be the basis in the educational and training programs in tourism, overall, not just specifically in SUTD [90].
Conclusions
The theoretical contributions of this study are multifaceted. The first theoretical contribution to the scientific research area of sustainable urban development [27] represents an up-to-date literature review. Numerous scientific articles have been published in the Web of Science Core Citation on sustainable urban development and the analysis [24,36] shows that this construct is becoming extensively researched since 2011. Moreover, researchers from different fields bring theoretical contributions by wearing different lenses, ranging from environmental sciences, regional urban planning, geography, construction building technology to urban studies, economics, management, transportation, and energy fuels.
Second, a similar theoretical contribution is identified also in the sustainable tourism development field. Analysis of Web of Science Core Citation scientific articles shows an increased interest in sustainable tourism since 2008. Sustainable tourism research attracts researchers from social sciences, engineering, geography, public administration, education, urban studies, computer science, and others. When searching for solutions, it is important to combine the multifaceted perspectives of the research constructs, as existing studies tend to focus only on topics within their own "silo" domains; horizontal collaboration among disciplines is necessary for innovation and for new research findings to be implemented.
Third, the originality of this work lies in the integrated and advanced multidimensional conceptual model of SUTD, which is applied to the Ljubljana urban area. Managers (city governors) need to take into consideration different perspectives when developing sustainable urban tourism, and our proposed model shows how to do so by combining different perspectives. It serves as a basic platform which should be modified and therefore upgraded by additional dimensions for each specific area. Our proposed model can also be modified and, as such, applied to other touristic areas, such as rural, mountain tourism in line with sustainable development. The practical value of the proposed model is for interested stakeholders, who should carefully rethink the types of tourism and possibly add other new types that are emerging or present in their specific context. In doing so, it is important to visualize this model as 3D, so it offers the basis for considering different possible combinations the model enables by rolling all or some "circles." Nevertheless, other layers/dimensions can also be added to the proposed model. For example, within the layer of sustainability dimensions, technological sustainability [79,91,92] also plays a role, and resource dimension [27] can be added.
Nevertheless, we think this study is an important step forward when bridging the two studied fields, sustainable urban development, and sustainable tourism development, by offering an integrated and advanced conceptual model of SUTD. In order to ensure sustainability in the future, the urban areas, as well as tourist destinations, need to develop in a sustainable manner.
When setting the guidelines for future sustainable tourism development following the proposed model, the stakeholders, such as city governors, decision-makers and researchers, can learn from the model how to integrate the proposed dimensions and consider all possible combinations of tourism development based on the designed model by spinning the proposed layers of the conceptual model. In addition, additional layers could also be added as, for example, type of accommodation, duration of stay, and other elements they want to include in their final touristic product/service/study offer if interested stakeholders find it necessary. We conceptualized our work based on the gap that there is no integrated model for developing SUTD and connected existing models into our model where every layer represents another aspect of SUTD, and city governors need to take into their consideration all of the layers when they are deciding how to develop and manage their cities further. The field of tourism in the urban environment needs integration [43]. Our model's aim is to offer structure, but not as "a straightjacket," and enable the foundation for further theoretical and empirical research [43].
In line with the recommendations [93,94], the managerial implication for Ljubljana SUTD governance is that stakeholders, such as the policy-makers responsible for Ljubljana's sustainable development, should put more effort into managing the heritage value of tourism destinations for strategic management, integration of databases (vertical and horizontal), as well as marketing purposes. Looking just at Europe, 73% of Europeans live in cities [95,96]. Our model is worth studying and adopting for cities which wish to have a conceptual tool for integrating all the smart solutions in their cities that modern technology nowadays can provide; however, city governors need a strategic guideline on how to integrate the data and tools acquired in order for them to be sustainably managed, as in neighboring cities such as Vienna [97], Torino [98], or other global emerging smart cities.
Despite the abovementioned theoretical contributions, our study has some limitations due to the characteristics of a chosen research approach-qualitative study, as it does not allow for statistical generalization. The interviews were performed in the Slovene language, and some meaning could be lost in translation. Also, the majority of the literature review is based on journals in the English language. Nevertheless, the proposed conceptual model is applied to a single case study and additional case illustrations could be performed to further advance the model. We also recommend applying the proposed model to other urban settings as well as by adopting it, to apply it to the nonurban environment. In combining the research areas-sustainable tourism development and sustainable urban development-we recommend further study of the emerging research trends by the integration of studied research areas, as it may bring important theoretical as well as practical implications for this, as well as for future generations. Nevertheless, sustainable urban tourism development needs to enable quality places for people to live and also to visit [16], no matter the companies' size or the budgets [48]. Funding: This research was partly funded by the European Social Fund, Javni sklad Republike Slovenije za razvoj kadrov in štipendije, Republika Slovenija Ministrstvo za izobraževanje, znanost in šport, "Naložba v vašo prihodnost," operation is partly funded by the European Union, European social fund. The authors acknowledge that the paper was partly financially supported by the Slovenian Research Agency, Program P5-0364-The Impact of Corporate Governance, Organizational Learning, and Knowledge Management on Modern Organization.
|
2020-01-30T09:10:28.497Z
|
2020-01-21T00:00:00.000
|
{
"year": 2020,
"sha1": "723deb3798296c16655dc0a4d0ccb5fa61fab654",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/12/3/792/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "336dc0f97614cf1de345cb5c7cfe5112980c0035",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
}
|
259194269
|
pes2o/s2orc
|
v3-fos-license
|
Exploitation of elasto‐inertial fluid flow for the separation of nano‐sized particles: Simulating the isolation of extracellular vesicles
High throughput and efficient separation/isolation of nanoparticles such as exosomes remain a challenge owing to their small size. Elasto‐inertial approaches have a new potential to be leveraged because of the ability to achieve fine control over the forces that act on extremely small particles. That is, the viscoelasticity of fluid that helps carry biological particles such as extracellular vesicles (EVs) and cells through microfluidic channels can be tailored to optimize how different‐sized particles move within the chip. In this contribution, we demonstrate through computational fluid dynamics (CFD) simulations the ability to separate nanoparticles with a size comparable to exosomes from larger spheres with physical properties comparable to cells and larger EVs. Our current design makes use of an efficient flow‐focusing geometry at the inlet of the device in which two side channels deliver the sample, while the inner channel injects the sheath flow. Such flow configuration results in an efficient focusing of all the particles near the sidewalls of the channel at the inlet. By dissolving a minute amount of polymer in the sample and sheath fluid, the elastic lift force arises and the initially focused particle adjacent to the wall will gradually migrate toward the center of the channel. This results in larger particles experiencing larger elastic forces, thereby migrating faster toward the center of the channel. By adjusting the size and location of the outlets, nanoparticles comparable to the size of exosomes (30–100 nm) will be effectively separated from other particles. Furthermore, the influence of different parameters such as channel geometry, flow rate, and fluid rheology on the separation process is evaluated by computational analysis.
| INTRODUCTION
The isolation and separation of biological particles from heterogeneous mixtures is of paramount importance for many biomedical applications including drug discovery, diagnostics, -omics screening, and personalized medicine, to name a few. Significant engineering efforts have resulted in many chip-based technologies that permit robust separation/isolation of extremely small particles. Despite significant progress, the separation of nanoparticles, in particular exosomes, remains challenging owing to their small size. This study presents a novel microfluidic-based separation approach that demonstrates facile high-throughput separation proven by computational fluid dynamics (CFD) simulations. To preface this work, we contrast our findings to the conventional methods for the separation of nanoparticles such as exosomes, discussed below.
Conventional methods for the separation of nanoparticles include ultracentrifugation, size exclusion chromatography, ultrafiltration, and immunoaffinity-based methods. Ultracentrifugation is currently considered the gold standard for exosome separation [1,2]. This method is based on the differences in the sedimentation coefficient of exosomes and other substances in the sample. Ultracentrifugation-based separation techniques can produce large amounts of exosomes. However, they are not suitable for clinical diagnosis due to the time required (>4 h), low recovery rate (5%-25%), high cost, and poor repeatability [3][4][5]. Ultrafiltration is a membrane-based separation technique, which primarily relies on size and molecular weight. This technique is often used in combination with the ultracentrifugation technique, where large extracellular vesicles (EVs) and cells are first separated by ultracentrifugation and, subsequently, ultrafiltration is used for the purification of exosomes from proteins [6,7]. However, exosome recovery rates can be low owing to issues such as clogging and trapping of particles within the filters [8]. Size exclusion chromatography (SEC) separates particles based on their size, where the separation force is owing to gravitational acceleration. When contrasting ultracentrifugation to size exclusion chromatography, ultracentrifugation is higher in throughput yet lower in yield [9][10][11]. Thus, it is suggested to use SEC in combination with ultracentrifugation [12].
Immunoaffinity-based methods rely on antigen-antibody affinity to capture exosomes. Various proteins on the membrane of exosomes are ideal biomarkers for the immunoaffinity-based separation of exosomes. Despite the high purity and yield of immunoaffinity-based methods compared to ultracentrifugation [2,13], their application for large-scale exosome separation is inhibited due to several drawbacks such as the requirement of cell-free samples and costly reagents [14][15][16].
Microfluidic methods, on the other hand, have emerged as powerful tools to address the challenges associated with the aforementioned conventional techniques. The capability of precisely manipulating vast numbers of particles, working with small-volume samples, and integrating with downstream detection tools makes such methods suitable alternatives to be exploited in different medical applications. Among different approaches, inertial microfluidic devices received significant attention for particle separation. The passive nature of such devices significantly increases the portability, simplicity, and durability of such devices when compared to active methods such as those using magnetic fields, acoustic streaming, and electric fields. However, the inertial lift force used in these devices is not strong enough for manipulating submicron particles. The strong dependency of the inertial lift force on particle size (F ∝ d^4) results in the deficiency of these devices for separating small particles (less than ~3 μm).
Using viscoelastic fluids in such devices results in an additional lift force, namely the elastic lift force. The emerging elastic lift force at high flow rates (elasto-inertial flow regime) pushes all the particles toward the center of the channel, which enables precise 3D focusing of particles [17]. Furthermore, the elastic lift force in such devices is dependent on the size of particles; however, it is proportional to d^3, which makes it possible to better manipulate submicron particles [17][18][19].
Apart from that, the arising elastic lift force in such devices is strongly dependent on the fluid properties. Hence, elasto-inertial approaches enable achieving fine control over the forces that act on extremely small particles. That is, the viscoelasticity of the fluid that helps carry particles through microfluidic channels can potentially be tailored for high-resolution focusing and separation of micron/submicron particles without the need for an external force, at low cost and high versatility.
Despite several advantages of elasto-inertial approaches compared to the inertial method, the application of such techniques for efficient isolation of nanoparticles such as extracellular vesicles is still in its infancy. Furthermore, using sophisticated computational techniques is essential to predict the flow field in such devices owing to the inherent complexity of the carrier viscoelastic fluids.
The earliest research on particle migration in viscoelastic fluids was reported by Leshansky et al. [17]. Using a polyacrylic acid (PAA) solution, they observed that particle focusing in a rectangular channel was better when using dilute PAA solutions. The influence of the shear-thinning property on the focusing of particles in a rectangular channel was also investigated by Seo et al. [18,19], in which a high molecular weight polyethylene oxide (PEO) contributed to shear thinning and degraded the focusing of particles at the center of the channel. Interestingly, by adding a minute amount of polymer (e.g., PEO, PAA, or xanthan gum [XG]) to a Newtonian fluid, the fluid exhibits viscoelastic behaviors. This is in contrast to Newtonian fluid flow in which the shear stresses are the dominant stresses. The existence of long polymer chains in viscoelastic fluids results in the generation of considerable normal stresses. The emerging normal stresses are responsible for an elastic lift force exerted on the particles. The distribution and intensity of elastic forces in such devices not only depend on the flow characteristics but also change with the fluid properties such as the viscosity of the solvent, polymer molecular weight, and concentration [20,21]. Therefore, following these early studies, different investigators studied the influence of polymer molecular weights, relaxation time, and concentrations. Various reported studies were conducted using straight microchannels and revealed that particle focusing improves as the molecular weight and length of the polymer decrease [22][23][24]. Published studies reveal the feasibility of particle focusing/separation for different micron-size particles in slightly different geometries [25][26][27][28]. Apart from these, curved microchannels were employed to enhance the focusing of micron and submicron-size particles [20,29-32]. The Dean drag-induced force in curved channels can potentially enhance the focusing behavior of particles. However, when using viscoelastic fluids, the curvature of the channel generally limits the throughput of the device due to the occurrence of flow instabilities [33]. This work introduces a major advance in elasto-inertial microfluidic separation methods for nano-sized particles. We demonstrate, using computational fluid dynamics (CFD) simulations, unique geometrical conditions that are shown for the first time to enhance submicron particle separation. We incorporate spherical particles that range in size from micron to nano-scale to model heterogeneous samples of cells and extracellular vesicles. The feasibility of achieving high-throughput separation is demonstrated systematically by designing and testing with CFD simulations considering different flow-focusing geometries, identifying the influence of different geometrical parameters and fluid properties, modeling the flow field, and predicting the separation of particles.

2 | THEORY AND DEMONSTRATION OF WORKING PRINCIPLE

The theory and demonstration of a novel microfluidic channel that incorporates viscoelastic fluid flow are described herein with a discussion of the varied simulation parameters. It was first conceived that a simple, flow-focusing geometry be simulated with two side inlets and one central inlet. A simple microdevice is ideal for manufacturing, reproducibility, and cost. Thus we simulated two side inlets to deliver the particulate sample (i.e., cell and EV mixture) while the inner channel is used to deliver the sheath fluid. The particulate sample was simulated in the work herein to be dilute and with the particle size ranging from 30 nm to 5 μm, representing the size of different extracellular vesicles. The sheath and sample fluid are a mixture of a small amount of polymer with phosphate buffered saline to generate viscoelastic properties of the fluid (concentrations noted later).

Figure 1A is a schematic of the device. As shown, as the sample encounters the sheath, an initial focusing of all the particles occurs near the sidewalls of the channel at the inlet. However, due to the high shear rate of the carrier viscoelastic fluid, significant elastic stress is generated further along the length of the main channel. This stress is highest at the walls and decreases toward the center of the channel.
Hence, the initially focused particles adjacent to the wall experience a stress gradient that exerts a lift force on them. Figure 1B shows the contours of a typical elastic stress distribution in the cross-section of the main channel. The direction of the exerted elastic lift force on the particles is toward the center of the channel while its magnitude is proportional to their size (F ∝ d^3). Hence, larger particles experience a greater elastic lift force and migrate faster toward the center of the channel. The different migration speed of particles with different sizes results in a size-based separation of particles as they move toward the outlet of the main channel. By precisely adjusting the size and location of the outlets, particles smaller and larger than a certain size can be separated. This approach effectively results in an efficient focusing of all particles with different sizes near the side walls at the entrance, followed by size-based separation of particles in the main channel. However, when applying viscoelastic fluids in this flow regime, elastic instabilities can arise, particularly when high flow rates are introduced within the flow-focusing microchannel. Therefore, the work herein first optimizes the flow-focusing geometry to avoid instabilities at high flow rates. Subsequently, we assess the characteristics of the flow field and exerted forces for different channels and polymer solutions, perform particle tracking, and draw conclusions on how our optimized chip design can accurately and precisely adjust the location and size of the outlets.
| Computational approach
CFD simulation was employed to predict the separation of nanoparticles of different sizes in elasto-inertial fluid flow. To this end, the governing equations of the flow field are fully resolved to predict the velocity, pressure, elastic stress, and particle trajectories. Such an approach not only reveals the performance of the simulated microfluidic device but also sheds light on the underlying physics necessary for further improvements. The governing equations for viscoelastic fluid flow in the laminar regime are as follows: the continuity and momentum equations (Equations 1 and 2) contain the velocity u, pressure p, and stress tensor τ. The stress tensor for viscoelastic fluids is written as the sum of the solvent stress τ_s and the polymer stress τ_p (Equation 3). The polymer stress in the viscoelastic fluid was taken into account by using the Phan-Thien-Tanner (PTT) model (Equations 4 and 5). λ, μ_p, and ε denote the polymer relaxation time, polymer viscosity, and material parameter, respectively. Of note, the PTT model takes into account the finite extensibility of polymers and shear-thinning effects [34,35]. Furthermore, previous studies revealed the accuracy of this model in predicting the flow field of viscoelastic fluids [35]. The finite volume method (FVM) was used to numerically solve the transport equations using TransAT 5.6. The pressure-velocity coupling in the CFD model was achieved by using the SIMPLEC algorithm. The spatial derivatives were discretized using the HLPA scheme. Furthermore, the log-conformation tensor (LCT) approach was employed to accurately model the flow field at high Weissenberg numbers.
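The bodies of Equations 1-5 referenced in this paragraph appear to have been lost during text extraction. The following LaTeX display is a hedged reconstruction, a sketch of the standard incompressible viscoelastic-flow equations and the linear PTT closure that the paragraph describes, and not necessarily the exact form used in the original paper or in TransAT:

\nabla \cdot \mathbf{u} = 0 \quad (1)

\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla p + \nabla \cdot \boldsymbol{\tau} \quad (2)

\boldsymbol{\tau} = \boldsymbol{\tau}_s + \boldsymbol{\tau}_p, \qquad \boldsymbol{\tau}_s = \mu_s \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{T} \right) \quad (3)

f\!\left(\operatorname{tr}\boldsymbol{\tau}_p\right) \boldsymbol{\tau}_p + \lambda \, \overset{\nabla}{\boldsymbol{\tau}}_p = \mu_p \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{T} \right) \quad (4)

f\!\left(\operatorname{tr}\boldsymbol{\tau}_p\right) = 1 + \frac{\varepsilon \lambda}{\mu_p} \operatorname{tr}\boldsymbol{\tau}_p \quad (5)

where \overset{\nabla}{\boldsymbol{\tau}}_p denotes the upper-convected time derivative of the polymer stress and \mu_s is the solvent viscosity.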
By adopting the abovementioned numerical approach, the velocity and pressure distributions along with the distribution of polymer stresses are obtained. Using such information, the trajectory of different particles can be simulated by integrating the force balance on particles via a Lagrangian approach. Hence, the following force balance is solved for individual particles, where F_d, F_e, and F_i indicate the drag force, elastic lift force, and inertial lift force, respectively. The drag force is quantified as follows, where μ, ρ_p, and d_p indicate the viscosity, particle density, and particle diameter, respectively. The drag coefficient C_d is calculated according to the Morsi-Alexander model [36], and the relative Reynolds number (Re) is defined as follows. The arising elastic lift force is proportional to the gradient of the first normal stress difference N_1 and to the particle diameter d_p, and is calculated as follows. The inertial lift force is calculated as follows [37]. 3-D numerical simulations were carried out using a uniform mesh with a grid size of 1 μm. Adaptive time stepping was employed for the transient simulations to ensure the stability of numerical solutions.
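The equation bodies for the force balance and the individual forces were likewise lost in extraction. The following LaTeX sketch gives the commonly used forms in elasto-inertial Lagrangian tracking; the elastic lift coefficient C_{eL}, the inertial lift coefficient f_L, the maximum channel velocity U_m, and the hydraulic diameter D_h are notation assumed here and may differ from the original expressions:

m_p \frac{d\mathbf{u}_p}{dt} = \mathbf{F}_d + \mathbf{F}_e + \mathbf{F}_i

\mathbf{F}_d = 3\pi \mu\, d_p \left( \mathbf{u} - \mathbf{u}_p \right) \frac{C_d\,\mathrm{Re}}{24}

\mathrm{Re} = \frac{\rho\, d_p \left| \mathbf{u} - \mathbf{u}_p \right|}{\mu}

\mathbf{F}_e = -\,C_{eL}\, d_p^{3}\, \nabla N_1, \qquad N_1 = \tau_{xx} - \tau_{yy}

\mathbf{F}_i = f_L\, \rho\, U_m^{2}\, \frac{d_p^{4}}{D_h^{2}}

The negative sign in the elastic lift expression reflects migration down the N_1 gradient, i.e., from the high-stress walls toward the channel center, consistent with the working principle described above.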
The adopted Courant number in all the simulations was 0.2.
| Fluid properties
In these simulations, our carrier fluid is assumed to be phosphate buffered saline (PBS). The elasticity of the carrier fluid is adjusted by adding PEOs with different molecular weights and concentrations.
Table 1 shows the properties of the different solutions examined in this study. The total viscosity of the solution is determined as the summation of the solvent viscosity and the polymer viscosity, μ = μ_s + μ_p. The polymer viscosity is calculated as follows [38], where c and M_w are the polymer concentration and molecular weight.
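The relation for the polymer viscosity referenced just above, and the relations for the effective relaxation time, overlap concentration, intrinsic viscosity, and Weissenberg/elasticity numbers discussed in the next paragraph, did not survive extraction. A hedged LaTeX sketch, assuming reference [38] uses the Zimm and Mark-Houwink correlations that are standard for dilute aqueous PEO solutions:

\mu_p \approx \mu_s\, c\, [\mu], \qquad [\mu] = 0.072\, M_w^{0.65}\ \ (\text{mL/g, PEO in water})

c^{*} = \frac{0.77}{[\mu]}

\lambda_e = \lambda_Z \left( \frac{c}{c^{*}} \right)^{0.65}, \qquad \lambda_Z \approx \frac{0.463\, [\mu]\, M_w\, \mu_s}{N_A k_B T}

\mathrm{Wi} = \lambda_e\, \dot{\gamma}, \qquad \mathrm{El} = \frac{\mathrm{Wi}}{\mathrm{Re}}

The numerical prefactors (0.072, 0.65, 0.77, 0.463) are the values commonly quoted for PEO in aqueous solvents and are assumptions here, not taken from the original paper.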
The effective relaxation time of the polymer solution is determined following [38], where N_A, k_B, and T are Avogadro's number, the Boltzmann constant, and temperature, respectively. The critical overlap concentration c* is obtained from the intrinsic viscosity [μ]. In viscoelastic flows, the Weissenberg number (Wi) shows the ratio of the elastic force to the viscous force and is calculated as Wi = λγ̇, where λ and γ̇ are the effective relaxation time and shear rate, respectively. The relative contribution between the elastic and inertial forces is often quantified by using the elasticity number, El, which is defined as the ratio of the Wi number to the Re number.

Figure 2A demonstrates the CFD simulation result that indicates our ability to achieve stable flow and uniform elastic stress distributions at a high velocity (0.7 m/s) in the main channel. With the curved geometry, we investigated how varying flow rate ratios affect the initial focusing of particles at the inlet. The data show that an appropriate focusing of particles can be achieved when the ratio of sheath flow to the sample flow rate is equal to or greater than 5 (supplementary Figure S2).
Figure 2B shows the predicted particle trajectories at the flow focusing section for the flow rate ratio of 5. According to the obtained simulation results, for a constant flow rate ratio, the predicted particle trajectories at the inlet are identical for all PEO solutions irrespective of the polymer molecular weights and concentrations. To satisfy the minimum flow rate ratio required for effective particle focusing, the sample and sheath flow rates are chosen to be 42 μL/min (0.035 m/s) and 210 μL/min (0.076 m/s), respectively.
| The influence of channel aspect ratio and fluid rheology
The elastic stress distribution in the main channel not only depends on the rheology and velocity of the fluid but also on the aspect ratio of the channel. Numerical simulations were performed for channels with different heights while the channel width was kept constant (W = 30 μm).
According to the CFD simulation results, an increase in channel aspect ratio slightly decreases the maximum elastic stress while resulting in a more uniform distribution of elastic stress (supplementary Figure S3).
An increase in the aspect ratio of the channel ensures that a higher percentage of particles are influenced by the uniform stress region, thereby increasing the efficiency of particle separation. Moreover, for a constant mean velocity in the channel, increasing the aspect ratio results in a higher flow rate, which in turn increases the throughput of the particle separation process. In the present study, the width and height of the channel were chosen to be 30 and 200 μm, respectively.
| Predicting particle trajectories in the channel
After identifying ideal geometric and rheological conditions, we performed final CFD simulations to predict the trajectory of particles and adjust the geometry of the outlet for efficient particle collection.
Extracellular vesicles are generally subdivided into exosomes, microvesicles, and apoptotic bodies. We used a range of diameters because of the ranges that exist; that is, exosome diameters (30-100 nm) are smaller compared to the microvesicles (100-1000 nm) and apoptotic bodies (0.5-3 μm) [39]. Initially focused particles at the inlet experience the elastic lift force. The higher elastic lift force exerted on the large particles results in a faster migration of larger particles toward the center of the channel compared to those with small diameters.
The trajectory of particles in a certain flow field depends on the shape, density, and size of the particles. The density of EVs slightly varies according to their size. Previous studies revealed that an increase in the size of EVs results in an increase in their densities [39]. Table 2 shows the adopted densities for EVs with different sizes. We performed a Lagrangian particle tracking simulation to assess the separation performance of particles within the microfluidic device.
Due to the high elasticity, we simulated 1 MDa as well as 600 kDa PEO solutions as the carrier fluid in the respective CFD simulations.
Additionally, to understand the range of possible purities and rate of recovery, we simulated this for five particle size diameters: 50, 100, 200, 300, and 500 nm. The smallest particles (50 and 100 nm) represent exosomes while the larger ones (200, 300, and 500 nm) represent microvesicles. As mentioned previously, our overall objective is to demonstrate a facile way to separate exosomes from larger EVs.
Hence, the effective separation of 50 and 100 nm particles from those with larger diameters would demonstrate the feasibility of this microfluidic device (Figure 4B). Figure 4C shows an even further degree of size-based separation at a location of 24 mm downstream from the inlet. Hence, if a chip were designed to collect samples at a location of 24 mm downstream from the sample inlet, particles differing in size by as little as 100 nm could be separately sorted. To ensure that such segregation happens for all the injected particles at the inlet, we show a snapshot of the Lagrangian simulation for all particles passing the channel section 25 mm downstream from the inlet. Figure 4D shows the distribution of particles with different sizes when passing the channel cross section 25 mm downstream of the inlet. The Reynolds (Re) and Weissenberg (Wi) numbers for this test were 23.63 and 56, respectively. It is worth mentioning that a previously reported study in the literature revealed the possibility of achieving stable elasto-inertial focusing at comparatively similar Re and Wi numbers [41]. Considering the mean flow velocity of the sample in our simulation, the throughput of the separation process in our microchannel is 45 μL/min.
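To illustrate the size dependence of the lateral migration underlying these results, the following is a minimal Python sketch (not the authors' TransAT model): a one-dimensional balance of elastic lift against Stokes drag, integrated along a 25 mm channel for the five simulated particle sizes. The elastic lift coefficient, the assumed linear N_1 profile, the wall value of N_1, and the solution viscosity are illustrative assumptions only; only the channel width, length, sheath velocity, and particle sizes are taken from the text.

```python
# Minimal 1-D sketch of elastic-lift-driven lateral migration (assumed parameters).
import numpy as np

W = 30e-6        # channel width [m] (from the paper)
L = 25e-3        # channel length considered [m] (from the paper)
U_axial = 0.076  # axial velocity [m/s] (sheath value quoted in the paper)
mu = 1.4e-3      # solution viscosity [Pa s] -- assumed
C_eL = 0.05      # elastic lift coefficient -- assumed
N1_wall = 800.0  # first normal stress difference at the wall [Pa] -- assumed

def dN1_dy(y):
    """Assumed linear decay of N1 from the wall (y = 0) to the center (y = W/2)."""
    return -2.0 * N1_wall / W  # constant gradient [Pa/m]

def lateral_position_after(d_p, length=L, dt=1e-5):
    """Integrate the lateral position over the channel length; returns y [m]."""
    y, x = 1e-6, 0.0  # start 1 um from the side wall (post flow-focusing)
    while x < length:
        F_e = -C_eL * d_p**3 * dN1_dy(y)          # elastic lift, toward the center
        v_lat = F_e / (3 * np.pi * mu * d_p)      # Stokes-drag-limited migration speed
        y = min(y + v_lat * dt, W / 2)            # cap at the centerline
        x += U_axial * dt
    return y

for d in (50e-9, 100e-9, 200e-9, 300e-9, 500e-9):
    y_out = lateral_position_after(d)
    print(f"{d*1e9:5.0f} nm particle: lateral position after 25 mm = {y_out*1e6:5.2f} um")
```

With these assumed numbers, the 500 nm particle reaches the centerline within the channel while the 50-100 nm particles remain within a couple of micrometers of the side wall, which is qualitatively the size-based segregation exploited at the outlets.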
Lagrangian particle tracking was also performed for the PEO solution with a molecular weight of 600 kDa. However, the obtained results indicate that, due to the lower elastic force exerted on the particles, a proper size-based separation cannot be achieved within channel lengths of 25 mm (supplementary Figure S4).
The obtained particle tracking simulation results suggest that, using a 1 MDa PEO solution as a carrier fluid, we can achieve a reasonable separation efficiency 25 mm downstream of the inlet. In the present study, we aimed at isolating the particles representing the size of exosomes. Hence, the dimensions of the outlets were adjusted to separate particles with a maximum size of 100 nm from larger ones. While our simulations suggest a novel approach to separate small particles by exploitation of viscoelastic fluid properties, we note that experimental work must be accomplished for a holistic proof of concept. This will be the focus of future work. However, as a preliminary step toward this goal, we sought out recent experimental approaches reported in the literature that demonstrate the sorting of micron-size particles [42] under similar viscoelastic conditions. Overall, we present a unique and optimized microfluidic chip, using computational fluid dynamics simulations. The chip, when combined with viscoelastic flow, can remarkably facilitate the separation of nano-sized particles at high purity and throughput. Moreover, the significance of this is that a simple channel can be designed, without the need for over-engineered shapes or external focusing systems.
Moreover, several biological fluids (blood, plasma, and saliva) have inherent viscoelastic properties [43,44]; therefore, we envision the possibility of separating vesicles from samples taken directly from patients without the need for purification or sample preparation.
Using computational fluid dynamics (CFD) simulations, we identify unique geometrical conditions that are shown for the first time to enhance submicron particle separation. We incorporate spherical particles that range in size from micron to nano-scale to model heterogeneous samples of cells and extracellular vesicles. The feasibility of achieving high-throughput separation is demonstrated systematically by designing and testing with CFD simulations considering different flow-focusing geometries, identifying the influence of different geometrical parameters and fluid properties, modeling the flow field, and predicting the separation of particles.
2 | THEORY AND DEMONSTRATION OF WORKING PRINCIPLE
The theory and demonstration of a novel microfluidic channel that incorporates viscoelastic fluid flow are described herein with a discussion of the varied simulation parameters. It was first conceived that a simple, flow-focusing geometry be simulated with two side inlets and one central inlet. A simple microdevice is ideal for manufacturing, reproducibility, and cost. Thus we simulated two side inlets to deliver the particulate sample (i.e., cell and EV mixture) while the inner channel is used to deliver the sheath fluid. The particulate sample was simulated in the work herein to be dilute and with the particle size ranging from 30 nm to 5 μm, representing the size of different extracellular vesicles. The sheath and sample fluid are a mixture of a small amount of polymer with phosphate buffered saline to generate viscoelastic properties of the fluid (concentrations noted later).
F I G U R E 1 Schematic of the flow focusing geometry and size-based separation of particles (A) and stress distribution in the cross-section of the channel (B). [Color figure can be viewed at wileyonlinelibrary.com]
F_d, F_e, and F_i indicate the drag force, elastic lift force, and inertial lift force, respectively.
Finally, the figure shows regions within the channel (bird's eye view) where suspended particles in the sample gradually migrate toward the sheath fluid in the center of the channel. Of note, the rheological properties of sample and sheath fluids are identical and thus interfacial effects are absent.
3 | RESULTS AND DISCUSSION
3.1 |
Figure S2 shows CFD simulation data indicating the formation of a region of high elastic stress with a flow-focusing geometry that is considered conventional (i.e., standard rectangular sheath and sample inlet channels). Such flow instabilities (the color scale of Figure S2 indicates fluctuating stress throughout the channel) result in undesirable mixing that adversely influences the size-based migration of the particles downstream of the junction, which in turn results in poor separation performance of the device. We then studied how an increase in the width of the sheath inlet and curvature in the geometry influences flow.
The solvent viscosity and density were 9.98E-4 Pa s and 1000 kg/m³, respectively. The polymer viscosity and relaxation time were changed by varying the polymer molecular weight and concentration. The maximum PEO concentrations were chosen to be half of the critical overlap concentration, c/c* = 0.5. As seen in Table 1, for a constant c/c* = 0.5, by increasing the polymer molecular weight from 300 kDa to 1 MDa the relaxation time increases from 0.384 to 1.2 ms.
Figure 3A shows the distribution of the first normal stress difference at the cross-section of a channel with a width and height of 30 and 200 μm for PEO solutions with different molecular weights.
F I G U R E 2 Formation of stable flow and uniform elastic stress distribution (A) and initial focusing of particles (B) in the revised flow focusing geometry for the 600 kDa PEO solution at a total flow rate and flow rate ratio of 252 μL/min and 5. [Color figure can be viewed at wileyonlinelibrary.com]
F I G U R E 3 The influence of polymer molecular weight on first normal stress difference (N1). Contours of predicted N1 in the channel section (left) and distribution of N1 across the width of the channel (right); V = 0.7 m/s, c/c* = 0.5. [Color figure can be viewed at wileyonlinelibrary.com]
Considering the maximum stress in our microchannel (22 kPa) and the Young's modulus of EVs (26-420 MPa) [40], the maximum strain/deformation of EVs remains below 0.001. Hence, all particles were assumed to be spherical.
Figure 4 shows the result of the Lagrangian particle tracking with the 1 MDa PEO (c/c* = 0.5). The arrows indicate the flow direction toward the outlet of the channel, while the colors indicate the size of the particles. According to Figure 4A, all particles initially flow near the walls of the channel at the inlet. However, a gradual migration and separation of larger particles toward the center of the channel is observed at a distance of 15 mm downstream from the inlet (Figure 4B).
Figure 5A is a CAD drawing of the dimensions of the microchannel outlet that might be ideal if located 25 mm downstream of the inlet.
We employed a CFD simulation to model the published experimental results of particle separation. A comparison of our CFD simulation result to the published experiment is provided in a supplementary attachment. We find that our CFD model accurately predicts the trajectory of particles with different sizes, which was demonstrated experimentally. It is important to note that the work by Zhang et al. presents the separation of micron-size particles, whereas the novelty of our work is the separation of nanoparticles, which would not be possible under those experimental conditions (i.e., it requires significantly higher elastic stresses, which in turn provokes the occurrence of elastic instabilities). The chip we design herein can achieve high elastic stresses as well as the throughput important for biomedical applications while preventing the occurrence of elastic instability.
F I G U R E 5 Particle separation at the outlet. (A) Geometry and dimensions of the outlet and (B) trajectory of particles as predicted by CFD simulation at the outlet. [Color figure can be viewed at wileyonlinelibrary.com]
F I G U R E 6 Full design of the microchip showing the inlet and outlet where particles can be collected after separation occurs along the length of the main channel. [Color figure can be viewed at wileyonlinelibrary.com]
T A B L E 1 Property of PEO solutions with different molecular weights and concentrations.
|
2023-06-20T06:17:11.191Z
|
2023-06-19T00:00:00.000
|
{
"year": 2023,
"sha1": "ee776728051bd01fb1dc184ed481e25f994dbc68",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1002/cyto.a.24772",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "44411d300d39119d063e83712982cf3b0d2a9fe4",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
258150797
|
pes2o/s2orc
|
v3-fos-license
|
Summary of the National Advisory Committee on Immunization (NACI) Statement—Recommendations on Fractional Influenza Vaccine Dosing in the Event of a Shortage: Pandemic preparedness
Background At the commencement of a pandemic, it is important to consider the impact of respiratory infections on the health system and the possibility of vaccine shortages due to increased demand. In the event of an influenza vaccine shortage, a strategy for administration of fractional influenza vaccine doses might be considered. This article reviews the available evidence for efficacy, effectiveness, immunogenicity and safety of fractional influenza vaccine dosing, and summarizes the National Advisory Committee on Immunization (NACI) recommendations on fractional dosing strategies by public health programs in Canada. Methods Two rapid literature reviews were undertaken to evaluate the efficacy, effectiveness, immunogenicity and safety of fractional influenza vaccine dosing via the intramuscular or intradermal route. The NACI evidence-based process was used to assess the quality of eligible studies, summarize and analyze the findings, and apply an ethics, equity, feasibility and acceptability lens to develop recommendations. Results There was limited evidence for the effectiveness of fractional influenza vaccine dosing. Fractional dosing studies were primarily conducted in healthy individuals, mainly young children and infants, with no underlying chronic conditions. There was fair evidence for immunogenicity and safety. Feasibility issues were identified with intradermal use in particular. Conclusion NACI recommended that, in the event of a significant population-level shortage of influenza vaccine, a full-dose influenza vaccine should continue to be used, and existing vaccine supply should be prioritized for those considered to be at high risk or capable of transmitting to those at high risk of influenza-related complications or hospitalizations. NACI recommended against the use of fractional doses of influenza vaccine in any population.
Introduction
Influenza vaccination in Canada is provided annually through provincial and territorial seasonal influenza vaccine programs. Although provincial and territorial influenza vaccine programs vary across the country, all programs cover individuals who are at high risk of severe outcomes due to influenza and individuals that are capable of transmitting influenza to those at high risk (e.g. household members, healthcare workers). Due to the rapid timelines required for vaccine production each year, any significant impact to the manufacturing process may cause delays in influenza vaccine delivery or decrease the overall number of doses produced, potentially resulting in vaccine shortages for a season. A significant and unexpected increase in demand for the influenza vaccine could also lead to insufficient supply, as the number of doses available is based on orders made primarily in the spring months in advance of the next influenza season. This could be particularly relevant at pandemic times, as it was for the 2020-2021 influenza season when increased demand for seasonal influenza vaccine was observed in the southern hemisphere as a result of the coronavirus disease 2019 (COVID-19) pandemic. A strategy for the administration of fractional influenza vaccine doses (i.e. less than a full-dose) might be considered in these situations, as the use of fractional doses would provide vaccine programs the ability to vaccinate a larger number of people with the amount of vaccine that is available when supply is limited. The objective of this advisory committee supplemental statement is to review the available evidence for efficacy, effectiveness, immunogenicity, and safety of fractional influenza vaccine dosing, and to provide guidance on potential fractional dosing strategies in the event of a significant influenza vaccine shortage in Canada.
In Canada, influenza vaccines are currently authorized for intramuscular (IM) administration only, apart from the live-attenuated influenza vaccine (LAIV), which is administered intranasally. Intradermal (ID) administration is not covered within influenza vaccine product monographs and would therefore be considered off-label. For the purposes of these recommendations, the National Advisory Committee on Immunization (NACI) considered two different fractional dosing strategies: 1) fractional IM administration of influenza vaccine; and 2) fractional ID administration of influenza vaccine.
Methods
To inform NACI's recommendations, two rapid literature reviews were undertaken by the Methods and Applications Group for Indirect Comparisons (MAGIC) through the Drug Safety and Effectiveness Network (DSEN) on the topic of fractional influenza vaccine dosing. The rapid review methods were specified a priori in a written protocol that included the research questions, search strategy, inclusion and exclusion criteria, and quality assessment. The NACI Influenza Working Group reviewed and approved the protocol. The search strategies were developed in consultation with an experienced librarian based on pre-defined population, intervention, control, outcomes, study design and timeframe, and the following research questions (1,2): What is the safety and effectiveness of using fractional dosing strategies to deliver IM seasonal influenza vaccines?; and What is the safety and effectiveness of using fractional dosing strategies to deliver seasonal influenza vaccine by ID administration?
The reviews were completed by MAGIC, with additional data extraction (notably immunogenicity outcomes as indirect evidence for effectiveness for IM administration of fractional doses) completed by the Public Health Agency of Canada (PHAC). For both reviews, EMBASE and MEDLINE electronic databases, Cochrane Library, Cochrane Central Register of Controlled Trials and international clinical trial registries were searched for IM vaccine publications in the last 20 years and ID vaccine publications in the last 10 years. Searches were restricted to articles published in English. Additionally, hand-searching of the reference lists of included articles and relevant systematic reviews was performed.
For the ID fractional dose review, the DSEN MAGIC team conducted all data extraction and performed a meta-analysis for effectiveness, immunogenicity, and safety outcomes. The risk of bias for the included ID studies was assessed using the Cochrane risk-of-bias tool for randomized trials.
For the IM fractional dose review, the DSEN MAGIC team extracted and narratively summarized the data for effectiveness and safety, and provided PHAC with a list of studies that assessed immunogenicity outcomes to be used as indirect evidence for effectiveness for IM administration of fractional doses. PHAC technical staff then extracted the immunogenicity data from these studies and summarized the evidence narratively. The level of evidence (i.e. study design and methodological quality of studies) included in the IM review was assessed independently by two reviewers with PHAC using the design-specific criteria outlined by Harris et al. (3).
A systematic assessment of ethics, equity, feasibility, and acceptability of influenza vaccine fractional dosing strategies was also conducted according to established NACI methods (4).
The body of evidence of benefits and harms was synthesized and analyzed according to the NACI evidence-based process (5) to develop recommendations. Following a thorough review of the evidence, NACI formulated, reviewed and approved recommendations. Full details and results are presented in the NACI Recommendations on Fractional Influenza Vaccine Dosing (6).
Results
Key characteristics of the studies included in the DSEN MAGIC team reviews and additional analyses by PHAC are summarized in Table 1.
Fractional intradermal dosing (efficacy/effectiveness)
Two studies (9,10) assessed the efficacy of fractional ID administration of influenza vaccine against laboratory-confirmed influenza infection or influenza-like illness (ILI) in adults using IIV3. A meta-analysis of these two RCTs found no significant difference in the risk of influenza infection/ILI from the ID administration of a 9 mcg of HA per strain dose of influenza vaccine compared to a 15 mcg of HA per strain IM dose (pooled risk ratio [RR]: 0.61; 95% CI: 0.19-1.91).
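For readers unfamiliar with how such a pooled risk ratio is produced, the sketch below runs a standard inverse-variance, fixed-effect meta-analysis on the log risk-ratio scale. The two 2x2 tables are invented placeholders, not the data of the two cited trials, so the output will not reproduce the RR of 0.61 (95% CI: 0.19-1.91) reported above; the point is only to show the mechanics.

import math

# Hypothetical (events, total) per arm for two trials: ID fractional dose vs. IM full dose
trials = [
    {"id_events": 4, "id_n": 250, "im_events": 6,  "im_n": 245},
    {"id_events": 9, "id_n": 480, "im_events": 12, "im_n": 470},
]

weights, log_rrs = [], []
for t in trials:
    rr = (t["id_events"] / t["id_n"]) / (t["im_events"] / t["im_n"])
    # Variance of log(RR) for a 2x2 table (delta-method approximation)
    var = (1/t["id_events"] - 1/t["id_n"]) + (1/t["im_events"] - 1/t["im_n"])
    weights.append(1 / var)
    log_rrs.append(math.log(rr))

pooled_log_rr = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
se = math.sqrt(1 / sum(weights))
lo, hi = pooled_log_rr - 1.96 * se, pooled_log_rr + 1.96 * se
print(f"Pooled RR = {math.exp(pooled_log_rr):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")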
Immunogenicity
Overall, 10 RCTs and one meta-analysis of 16 RCTs reported immunogenicity outcomes for fractional doses of IM or ID influenza vaccine administration. The immunogenicity outcomes assessed by these studies included the geometric mean-fold rise in hemagglutination inhibition (HI) titres (i.e. the ratio of post- to pre-vaccination geometric mean titre), the seroprotection rate (i.e. the proportion of participants with HI titres of at least 40 post-vaccination) and the seroconversion rate (i.e. the proportion of participants with at least a four-fold increase in HI titres post-vaccination, an HI titre increase from less than 10 pre-vaccination to at least 40 post-vaccination, or both).
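As a concrete illustration of how these three endpoints are computed from raw HI titres, the short sketch below applies the definitions given above to a small set of made-up pre- and post-vaccination titres; the numbers are placeholders, not data from any of the included studies.

from statistics import geometric_mean

# Hypothetical paired HI titres (pre, post) for a handful of participants
titres = [(5, 80), (10, 40), (20, 40), (40, 160), (5, 20), (10, 80)]

pre = [p for p, _ in titres]
post = [q for _, q in titres]

gmt_fold_rise = geometric_mean(post) / geometric_mean(pre)
seroprotection = sum(q >= 40 for q in post) / len(titres)
seroconversion = sum((q >= 4 * p) or (p < 10 and q >= 40) for p, q in titres) / len(titres)

print(f"GMT fold rise:  {gmt_fold_rise:.1f}")
print(f"Seroprotection: {seroprotection:.0%}  (HI titre >= 40 post-vaccination)")
print(f"Seroconversion: {seroconversion:.0%}  (>= 4-fold rise, or <10 pre to >= 40 post)")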
One study (8) in adults reported that the study groups that received a fractional dose of 7.5 mcg of HA per strain had statistically lower proportions of seroconversion and seroprotection post-vaccination than those who received the full-dose. Four studies (15-17,19) that statistically assessed the difference in immunogenicity between a full-dose and a half dose of influenza vaccine in children 6 to 35 months of age reported mixed results. Additional studies (one in adults and two in children) (13,17,19) that assessed varying fractional doses of influenza vaccine (3 mcg, 6 mcg, 7.5 mcg and 9 mcg of HA per strain) found that as the dose of influenza vaccine decreased, the immunogenic response also decreased. However, lower doses continued to meet criteria set for non-inferiority despite the reduced response compared to full-dose (according to current US Food and Drug Administration or previous European Medicines Agency criteria).
Fractional intradermal dosing (immunogenicity)
A meta-analysis (2) included 16 RCTs that assessed immunogenicity outcomes for fractional doses of influenza vaccine administered ID. The meta-analysis demonstrated no significant difference in the seroconversion rates for the study groups that had received fractionated doses (3 mcg, 6 mcg, 7.5 mcg or 9 mcg of HA per strain) by ID administration compared to the 15 mcg of HA per strain dose given IM for all influenza strains. A meta-analysis was also performed for seroprotection rates compared to a full-dose of 15 mcg of HA per strain given IM and found no significant difference for groups that received ID administration at doses of 3 mcg, 7.5 mcg or 9 mcg of HA per strain. Similarly, there was no significant difference in seroconversion or seroprotection rates between older adults that had received the fractional 9 mcg of HA per strain ID dose compared to those that received the full 15 mcg of HA per strain IM dose. However, seroprotection rates were significantly lower for those that had received a dose of 6 mcg of HA per strain for influenza A(H1N1) compared to a full IM dose.
Safety
Safety of the intramuscular route of administration
The rapid review identified seven studies (13-19) that assessed safety outcomes (local, systemic and severe adverse events [AEs]) of fractional IM influenza vaccine in infants or toddlers in the range of 6 to 36 months of age. Three studies were identified in the rapid review that assessed the safety of fractional IM influenza vaccination in adults: two of the studies (8,11) involved adults between the ages of 18-64 years (18-49 years and 18-65 years) and one study (12) included adults older than 65 years of age.
Safety of intradermal route of administration
Twenty-three studies (9,10,12,20-39) were identified that assessed the safety of ID administration of influenza vaccine and were able to be included in a meta-analysis performed by the DSEN MAGIC team. The studies identified included various fractional doses (3 mcg, 6 mcg, 9 mcg of HA per strain), as well as a full non-fractional dose (i.e. 15 mcg of HA per strain) of ID-administered influenza vaccine. Overall, there was fair evidence that fractional doses of influenza vaccine administered via the IM and ID routes do not result in a significant difference with regard to severe systemic AEs post-influenza vaccination. No significant increases in pain have been reported with ID influenza vaccine administration compared to IM administration; however, the risk of local AEs, such as ecchymosis, erythema, pruritus and swelling occurring post-vaccination at the injection site, is significantly higher with ID administration of influenza vaccine compared to IM administration.
Feasibility
Several feasibility issues were identified when considering fractional dosing of current influenza immunizations or administration of ID doses of influenza vaccines. Administering a fractional IM or ID dose would require administering a lower volume of vaccine to achieve the desired lower dose, which is only possible when influenza immunizations have been packaged as multi-dose vials and not as pre-filled syringes. The ID administration of vaccine requires a different gauge needle than IM administration, multi-dose vials (which are not always available midway in the season if supplies run low), and training and skill in ID administration that not all vaccinators will have.
Significant training would also be required to ensure vaccinators are equipped in advance to provide ID influenza vaccinations and feel comfortable doing so.The number of vaccinators who are authorized and able to provide ID vaccination also vary by jurisdiction.
The volume of vaccine to be administered is high even if using a fractional dose and would therefore require two ID injections rather than one if regular needles and syringes were used. The majority of studies of administration of influenza vaccine by the ID route used micro-needle injectors for administration, which are not yet authorized or widely available in Canadian settings. Furthermore, the use of fractional doses is not covered within influenza vaccine product monographs and would therefore require a novel communication and consent plan for any off-label dosing if it were adopted. Finally, implementation of such an ID immunization program would require structured monitoring, and any potential modification to a seasonal influenza vaccine program running low on vaccine would have to be planned for a priori, as multi-dose vials are not always available midway through the season.
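The dose-sparing trade-off that drives this feasibility discussion can be made explicit with a little arithmetic. The vial size and per-dose volumes below are illustrative assumptions (a generic 5 mL multi-dose vial with 15 mcg HA per strain per 0.5 mL, and a typical 0.1 mL intradermal injection), not figures taken from any product monograph.

# Illustrative, assumed figures -- not from a product monograph
vial_volume_ml = 5.0          # assumed multi-dose vial size
full_dose_ml = 0.5            # volume delivering 15 mcg HA per strain IM
fractional_im_ml = 0.25       # volume delivering a 7.5 mcg half dose IM
fractional_id_ml = 0.1        # typical volume per intradermal injection

full_doses_per_vial = vial_volume_ml // full_dose_ml
for label, dose_ml in [("full IM dose", full_dose_ml),
                       ("half IM dose", fractional_im_ml),
                       ("fractional ID dose", fractional_id_ml)]:
    doses_per_vial = int(vial_volume_ml // dose_ml)
    print(f"{label:20s}: {doses_per_vial:3d} doses per vial "
          f"({doses_per_vial / full_doses_per_vial:.0f}x the full-dose coverage)")

The arithmetic shows why fractional dosing is attractive when supply is short, but it only works with multi-dose vials and, for the ID route, with delivery hardware and training that the text notes are not widely available.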
National Advisory Committee on Immunization recommendations for public health program decision-making
1. NACI recommends that, in the event of a significant population-level shortage of influenza vaccine, a full-dose influenza vaccine should continue to be used, and existing vaccine supply should be prioritized for those considered to be at high risk or capable of transmitting to those at high risk of influenza-related complications or hospitalizations. (Strong NACI Recommendation)
• NACI concluded that there is fair evidence to recommend the use of a full-dose influenza vaccine (15 mcg or 60 mcg HA per strain, dependent on vaccine product) compared to a fractional dose for individuals at high risk or those capable of transmitting to those at high risk of influenza-related complications or hospitalizations. (Grade B Evidence)
2. NACI recommends against the use of fractional doses of influenza vaccine in any population. (Discretionary NACI Recommendation)
• NACI concluded that there is insufficient overall evidence at this time to recommend the use of fractional IM influenza vaccine doses. (Grade I Evidence)
• NACI concluded that there is fair evidence that fractional ID influenza vaccine doses provide a sufficient immune response, but this route of administration is not feasible at this time. (Grade B Evidence)
The detailed findings of the two rapid literature reviews, rationale and relevant considerations for these recommendations can be found in the NACI Statement, Recommendations on Fractional Influenza Vaccine Dosing (6).
Conclusion
In the event of a significant population-level shortage of the currently available influenza vaccine products, NACI recommends that full-dose influenza vaccine should continue to be used and existing vaccine supply should be prioritized for those considered to be at high risk or capable of transmitting to those at high risk of influenza-related complications or hospitalizations. NACI recommends against the use of fractional doses of influenza vaccines in any population.
Authors' statement
AS - Writing, original draft, review, editing
PDP - Writing, review, editing
KY - Review, editing
RH - Writing, review, editing
This work is licensed under a Creative Commons Attribution 4.0 International License.
CCDR • April 2023 • Vol. 49 No. 4
Table 1: Characteristics of included studies providing evidence related to the comparative efficacy, effectiveness, and immunogenicity of fractional vs. full-dose influenza vaccine for intramuscular and intradermal administration
Outcomes: GMT rise 28 days (or 56 days for unprimed individuals) post-vaccination; local, systemic and/or severe AEs. Halasa et al., 2015 (RCT): October 5, 2010 to March 2, 2012; the studies were conducted before the 2010-2011 and 2011-2012 influenza seasons; 7.5 mcg vs. 15 mcg dose of IIV3-Fluzone; healthy children 6-35 months of age; primed individuals: 7.5 mcg group (n=9) and 15 mcg group (n=21); naïve individuals: 7.5 mcg group (n=55) and 15 mcg group (n=119); US multi-centre study
Table 1: Characteristics of included studies providing evidence related to the comparative efficacy, effectiveness, and immunogenicity of fractional vs. full-dose influenza vaccine for intramuscular and intradermal administration (continued)
|
2023-04-15T15:10:55.350Z
|
2023-04-01T00:00:00.000
|
{
"year": 2023,
"sha1": "f7827478aae7de9d198db4183cac46f2a7854901",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.14745/ccdr.v49i04a01",
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "27028e8e8ab7c5295b72cbfe9c7600a15315eb20",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
159218867
|
pes2o/s2orc
|
v3-fos-license
|
Migrations of Hungarian Peasants into and out of a Village at the Borders of Budapest. Social and Economic Changes in Vecsés in the Early 20th Century
The Hungarian capital, Budapest, witnessed unprecedented development during the rapid modernization period of the Dual Monarchy. It was also the time period when Austria-Hungary underwent the greatest loss of people in its history to international migration. This paper attempts to analyze this phenomenon in relation to a small town in the vicinity of Budapest. Vecsés had been a peasant village but after the abolition of serfdom and the beginnings of modernization, it lost its previous function and transformed into a residential village. The paper analyzes the growth of the population and the changes in the occupational structure, and briefly examines issues of land distribution in Vecsés based on a variety of archival records. The research demonstrates how at the turn of the 19th and 20th centuries a typical agricultural village was utterly transformed by the influence of modernization, the urbanization of the capital city, and domestic and international migration.
This study examines domestic and international migration from the viewpoint of Vecsés, a peasant village situated on the border of the Hungarian capital city, Budapest. 3 Vecsés is a rather small town which lies between the Hungarian capital, Budapest, and Liszt Ferenc International Airport. It has a population a little above 20 thousand. Vecsés is considered a Schwab town, although not more than 5 per cent of the population identifies as of German origin. 4 Despite being small in numbers, the Schwab minority has a strong identity - as they have had since their settlement in the area.
The town used to be a part of the dominium of Gödöllő, which belonged to the Grassalkovichs, one of Hungary's greatest aristocratic families. The inhabitants abandoned the area during the Ottoman era, and it soon became a so called "puszta," a barren land with no inhabitants. Resettlement in Hungary began under the rule of Queen Marie Theresa and was continued under her son, King Joseph II. Vecsés was resettled in the last wave of these relocations, in 1786, by Duke Antal Grassalkovich. According to the resettlement document, 50 serf families received a part of land in the territory that is today's Vecsés. The resettlement contract is the founding document of the village, and provides a glimpse at the composition of the population at the time. 5 Some of the family names among the signatories can be found in several archival records throughout the 19 th century.
3 This essay is based on some of my previous works, published in Hungarian over the past few years, such as Eszter Rakita, "A foglalkozásszerkezet elemzésének lehetőségei és néhány aspektusa egy funkciót váltó településen a modernizáció korában," in Tavaszi Szél / Spring Wind 2014, eds. Imre Csiszár and Péter Miklós Kőmíves (Budapest, 2014), 307-317, and Eszter Rakita, "Társadalmi változások a főváros vonzásában. A funkcióváltás és forrásai," in Vidéki élet és vidéki társadalom Magyarországon, eds. József Pap, Árpád Tóth and Tibor Valuch (Budapest, 2016), 443-453. Here I synthetize the most important points of said papers and set directions for the following stages of the research.
4 According to the 2010 state census.
5 The document was published by several authors, most importantly by Veronika Müller, "Vecsés újjátelepítése és reformkori fejlődése 1686-1847" [The Resettlement and Reform Era Development of Vecsés] in Vecsés története [History of Vecsés], ed. Ernő Lakatos (Vecsés, 1984), 67-69.
Budapest became the capital of Hungary in 1873, when three towns: Buda, Pest, and Óbuda ("ancient Buda") were officially united, creating a 19th-century metropolis. This marked the beginning of an extraordinary period of economic growth. In fact, Budapest was one of the fastest growing capital cities in Europe at the end of the 19th century, with an immigration rate remarkable even in European terms. 6 The growing economy, and the proliferation of industry required more and more labor. As a result, swathes of the rural population started migrating towards Budapest from the Hungarian countryside, and they populated not only the capital, but also many of the surrounding settlements. The urbanization of Budapest created a situation in which the smaller settlements close to the capital lost their economic independence and became so called residential villages, a process which will be explained later in this essay.
6 Gábor Gyáni, "Budapest története [History of Budapest] 1873-1945," in Budapest története a kezdetektől 1945-ig [History of Budapest from the Beginning to 1945], eds. Vera Bácskai, Gábor Gyáni and András Kubinyi (Budapest, 2000), 142. Gyáni also deals with the modernization of Budapest, and the changes of the city's identity in Gábor Gyáni, Budapest - túl
As mentioned before, and many times in Hungarian academic literature, 7 in the last decades of the 19th century Hungary witnessed two forms of migration: domestic migration, which primarily consisted of people moving from rural areas to the capital or its vicinity; and international migration, in which a large proportion of the peasantry sailed to the United States in the hope of better wages and living conditions. As seen in both cases, it was those in the rural areas that were most affected by migration. Leaving poverty behind and seeking better conditions for their families was another common feature of these migrations. The abolition of serfdom in 1848 (de facto in 1853) did provide most of the peasantry with lands of their own but did not solve the problem of unequal distribution. 8 As a consequence of this, the uneven system of Hungarian land ownership created a huge surplus of unskilled labor. People began migrating towards big cities such as Szeged, Debrecen, Miskolc, and, of course, most towards Budapest. But unfortunately, the growing but fractionally developed Hungarian industry was not ready to utilize most of this workforce. So, many of these people needed to find an industry that could provide them with jobs. They found it in America, but most of them did not want to move to the USA for good, rather their aim was to remain there long enough to save enough, and then to return to Hungary. 9 Usually, their plan was to buy land or start their own business
According to the terminology established by Ferenc Erdei, a noted sociologist in 20 th -century Hungary, Vecsés belonged among the settlements in the surroundings of Budapest that were referred to as agglomerative villages. 11 This meant that the village was located within the sphere of the capital, and served as a place for those working in the industry in Budapest, such as factories, foundries, and public transport to live. Archival sources seem to confirm this: more than 50 per cent of the inhabitants of Vecsés worked in Budapest, at such companies as Ganz, 12 Hangya, 13 Beszkárt, 14
Population and Structure of Occupation in Vecsés
Vecsés had been a self-supporting serf village until the abolition of serfdom. But after 1849/1853, due to the growth of Budapest, Vecsés gradually lost its economic independence. While during most of the 19 th century, the population of the village both lived and worked in the same place, at the turn of the century, most worked in Budapest, commuting every day. During this time, Vecsés witnessed a huge growth in population due to domestic labor migration, as illustrated in figure 1. The data is taken from the 10-year censuses of 1850 to 1930. The reason this particular time frame was selected is that 1850 was the year when a census was conducted in Hungary, and 1930 is the closest to the years 1934-1936, from which I found archival records for this research. As the chart illustrates, the population of Vecsés displayed slow but steady growth until 1900, with only one small setback in the 1880s due to a cholera epidemic. 17 From 1900, the population grew more significantly every decade. The figures show a moderate shift between 1910 and 1920 which could be as a result of World War One and migration into the United States. What is striking is that during the course of just a century, the population grew five times in size.
16 All the data were derived from the official censuses of Hungary. Népszámlálási digitális adattár (NéDA). Magyarországi népszámlálások és mikrocenzusok 1784-1996. Központi Statisztikai Hivatal, http://www.konyvtar.ksh.hu/neda.
It may be worth noting that during this period, the population of Pest-Pilis-Solt-Kiskun County was constantly growing. According to the state censuses, 472,744 people lived in the county in 1857. This figure almost doubled by the turn of the century: the census in 1900 showed 825,779 people. The population passed one million in 1910, and in 1930 the county had a headcount of 1,366,089.
In the following figures, the occupational structure of Vecsés from 1900 to 1930 is illustrated. The timeframe is narrower here since Hungarian census data has only included occupational information by settlement since 1900. For the sake of clarity and simplicity, only four of the most important occupation categories: agriculture, industry, commerce, and transport are included. In agriculture, all individuals who were involved in some ways in tillage, livestock breeding, or any other occupation in connection with land cultivation are counted. Within the industry category, people of all craftsmanship are included. The commerce category comprises those working in the field of finance. Finally, the transport category is for those whose jobs involved the fields of passenger and freight transportation, but mostly those who were employed by one of the big transport companies of the time, MÁV and Beszkárt. There is a fifth category, in which all other occupations were included, such as intellectuals (teachers, doctors, etc.), and pensioners. This is called the miscellaneous category, as they are not significant from the standpoint of this research. Servants were completely excluded as the nature of their occupation is in question even among statisticians and demographers, so it is hard to determine whether they belong to the agricultural or industrial category. 18 This is not clearly marked in the censuses and the archival records either. Servants often worked on the estates of noble landowners, but also often for urban middle class families, as wage earners. 19 Census of the Countries of the Hungarian Crown in 1900, In 1900, the number of wage-earners was 1,698. (Compare this to the population of the time, which was 4,119.) More than 75 percent of the 1,698, 1,312 people were occupied in the agriculture of the village. Of course, this did not only refer to the landholders, but everyone whose work was related in one way or another to farming: farmhands and shepherds. The other categories of occupation add up to less than one fourth of the wage earners, which means the vast majority of the inhabitants depended on agriculture in some form. This demonstrates that Vecsés remained close to the model of a typical 19 th -century agricultural village. Figure 3 illustrates the occupational structure of Vecsés a decade later. What is interesting here is that the numbers working in industry overtook those of agriculture. Of the 2,901 wage earners, only 1,115 were working in agriculture, so almost 200 less than ten years earlier. The population saw a more than 30 percent growth from 4,119 to 7,403, but these newcomers worked in occupations other than agriculture, and the numbers appear to also indicate that existing agricultural workers began looking for employment in areas that paid better. By 1920 the number of wage earners had grown by more than 1,000. Workers in agriculture and industry were growing at a very similar rate, 1,485 and 1,467, respectively. Beside this, the numbers employed in commerce and transport also grew, and the miscellaneous group more than doubled in ten years. workers almost doubled. The figures of the miscellaneous category also rocketed, more than doubling from the 1920 census. This was also a result of rapid modernization. Over the 30 years under analysis, once rare professions, such as teachers, entrepreneurs, or people living off annuities, became much more common, which explains the significant growth in the miscellaneous occupational category. 
These professions are not highlighted here because they do not belong within the four classic categories at the center of the current research.
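The shifts described above can be recomputed directly from the census figures quoted in the text: 1,698 wage earners (1,312 of them in agriculture) out of a population of 4,119 in 1900, and 2,901 wage earners (1,115 in agriculture) out of 7,403 in 1910. A short sketch of that arithmetic, in Python:

# Figures quoted in the text: (population, wage earners, of whom in agriculture)
census = {
    1900: (4119, 1698, 1312),
    1910: (7403, 2901, 1115),
}

for year, (population, earners, agri) in census.items():
    print(f"{year}: {agri / earners:.1%} of {earners:,} wage earners worked in agriculture "
          f"(population {population:,})")

growth = census[1910][0] / census[1900][0] - 1
print(f"Population growth 1900-1910: {growth:.0%}")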
To put the results into context, it is instructive to examine the figures for Pest-Pilis-Solt-Kiskun County. The sources are also the official census records, but the time frame is somewhat broader as the censuses contained occupation data in the counties from earlier, 1870. The figure below illustrates the main trends in the occupational structure of the county from 1900 to 1930 but also provides an interesting glimpse at the previous three decades. The data was gathered from the following sources: Census of the Countries of the Hungarian Crown in 1870; 1881; 1891; 1900; 1910; 1920; 1930.
As can be gleaned from the figure, all four categories increased in numerical size, although in different intensities. For example, in 1870, a little less than 150,000 people were employed in the agricultural sector, a little more than 26 thousand in industry, and 5,394 people in commerce. At this time, so few people were working in transportation that it was not included as a category in the census data. It first appeared in the census of 1880, still together with commerce, and finally in 1890 it became an independent category. In the following decades, the number of people working in transport doubled every ten years. Almost the same phenomenon occurred in industry. As is clear, the number of industrial workers increased from 36,119 to 55,556 in the period between 1890 and 1900, and it rose to 101,244 by 1910.
1920 saw a huge growth in agriculture: the number working in this area was 215,170 in 1910, and 286,603 in 1920. This increase was most significant among women. Their numbers rose by more than 50 thousand in ten years. The reason for this may have been the outbreak of the First World War. Women who were forced to replace their husbands in the workplace no longer referred to themselves as dependents, rather they professed themselves as wage-earners so they were counted as such in the 1920 census. This was not noticeably present in the case of Vecsés.
Proportionally, the vast majority of working people in the county, more than 81 per cent, was employed in agriculture in 1870. This decreased to around 50 per cent by 1930. The other three categories, on the other hand, showed steady and sometimes rapid growth. By 1930, the number of people employed in industry had reached 40 per cent on par with those working in agriculture.
These figures demonstrate how modernization saw industry, commerce, and transport replace agriculture as the primary source of employment particularly in the vicinity of Budapest. The figures also indicate that Vecsés, the population of which had swollen due to domestic migration, transformed from a once typical peasant village into one inhabited by individuals working in the city . 24 Typically, a settlement has to maintain three major functions for its inhabitants. The first is the living function, which means that a settlement provides a place of living for its people. The second is the work function. This means that the settlement provides opportunities to make a living. Finally, the third is the recreation function, meaning that the settlement needs to provide opportunities for its residents to spend their leisure time. Based on these features, the secondary literature 24 The topic is widely discussed in both international and Hungarian literature. One of the bestknown book on this is József Tóth, Általános társadalomföldrajz [General Social Geography] (Budapest: Dialóg-Campus, 2002), 423-425. distinguishes between basic and non-basic settlements. Basic settlements provide only these three functions. Non-basic ones, on the other hand, are capable of functioning on higher levels as its infrastructure is developed enough to do so. 25 The main problem in Vecsés was that during the time period examined above, the village slowly lost its work function. This is demonstrated in the continuous decrease in the number of people who were employed in agriculture. It meant that a large part of the peasantry, who used to make a living from their own land, could no longer do so. In this sense then, modernization forced these people to leave their families' traditional profession, and seek work in factories, transport companies and other sectors of Hungarian industry. The growth in the village's population through domestic migration was not as a result of the fertility of Vecsés' soil, but simply because of its close proximity to the capital where the higher paying jobs were to be found.
Land Ownership and Occupation as Reflected in Archival Records
A more complete picture of society during this period is provided by the archival records. The primary sources include cadastral documents, land registers, 26 feudal court papers, 27 tax books, and other documents. The timeframe here encompasses nearly a century, and is limited by the availability of the archival records.
Research on the cadastral documents of Vecsés began in 2011. Archival records of cadastral documents ideally consist of registers, maps, and personal data sheets. In Hungary, the documents were produced during one of the three cadastral surveys organized by the government land administration. A cadaster is a comprehensive land record in which all the real estate and property of a town are recorded and measured in cadastral jugers. There were several cadastral surveys in Hungary, the first in the 1850s, the second was started in 1875 and lasted 10 years. The result of the latter was the most important and, for a long time, the official land registry for the whole county. 28 There were more surveys conducted in the 20 th century. The records referred to in this paper were produced between 1934 and 1936. As soon became apparent, there were many problems with researching these records. Cadastral documents have always been controversial among historians due to the difficulties in processing the vast amount of data they typically contain and the little-to-no success that can be reached by working with them. 29 Also, on many occasions, the records no longer exist, a countrywide phenomenon . 30 The Vecsés records were no exception as most had been destroyed during the course of the 20 th century. Some were burnt in World War II or the Revolution of 1956, others were damaged beyond repair when the archive building was flooded. According to the archivists of Pest County, there were occasions when some of the documents were "recycled:" the cadastral records were made on very fine quality paper with only one side written on, so it was only logical for some people to write on the reverse side instead of purchasing new sheets of paper, resulting in the disappearance of many documents. 31 Consequently, at the time of the research there were only 281 cadastral records available in the Archives, from the time period [1934][1935][1936]32 instead of the more than 5,000 pieces that should have been there. While these, to some degree, were useful in the first period of the research they proved insufficient to be the basis of the work. In any case, all the data from the records was entered into the MS Access database for further use. The cadastral papers, however few there were, provided invaluable information on several of the inhabitants. This data was compared with the official landholder statistics published by the Hungarian Royal Central Statistical Office. 33 The following tables are an attempt to show the differences between the results from the two sources mentioned. The goal with these is to provide a glimpse into the proportion of the missing data of the cadastral records. As illustrated in Table 1, most privately-owned lands, almost 3,000 properties, fell into the smallest category, those under 1 cadastral juger. These were commonly called "törpebirtok" ("dwarf lands"). This was the result of a process begun in 1786, the year the village was repopulated. The process is referred to as "birtokaprózódás" ("land fragmentation") in the secondary literature. When a serf father died, he usually divided his property among his (male) children, who then also left their land divided among their children, and so on. This resulted in the gradual deterioration of the soil. A snapshot of this process can be observed in the data above. 550 pieces of land were between one and five jugers, and 234 were between five and fifty. Two of the lots were between 50 and 100, and another two between 100 and 500 jugers. 
There was only one property larger than 1,000 cadastral jugers. It was more precisely 1,541 cadastral jugers (2,191.4 acres), and it was large enough to be called "nagybirtok" ("large estate"). There was no land greater in size than 3,000 jugers (4,266.3 acres) in the village.
This was the structure of land ownership in Vecsés, in 1935. It would be of interest to examine the same figures based on the extremely sparse cadastral documents. The following table shows the findings of the records in a similar distribution as Table 1. But due to the lack of records, it is not complete. As seen in Table 2, less than 19% of the total number of lots (710 of 3,768) could be recovered compared to the official statistics. Of the 710 pieces of land recovered from the sources, 692 were dwarf lands, 13 were between 1 and 5 cadastral jugers, and five were larger than 5 jugers but smaller than 10. The 710 lots covered 181.56 jugers (258.2 acres), which is extremely low: only 2.3% of the 7,753 cadastral jugers (11,025.5 acres) of land surrounding the village. This data shows best how big the problem is with the cadastral documents, and why they are not suitable for reconstructing how the society of the village looked in the first part of the 20th century.
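The cadastral juger figures above can be cross-checked against the acre equivalents the paper itself supplies: 1,541 jugers = 2,191.4 acres implies roughly 1.42 acres per juger. A quick check, using only that inferred conversion factor:

acres_per_juger = 2191.4 / 1541          # inferred from the figures quoted in the text

for jugers in (1541, 3000, 7753, 181.56):
    print(f"{jugers:>8} cadastral jugers ~ {jugers * acres_per_juger:,.1f} acres")

The same factor reproduces the 4,266.3 acres given for 3,000 jugers and the 258.2 acres given for the 181.56 recovered jugers, and it puts the 7,753-juger total at roughly 11,025 acres.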
A much more useful group of records is the Major Tax Book (Adófőkönyv) from 1935. 36 This comprises six thick volumes and is stored in the Pest County Archives. The data from these volumes was also uploaded into the database and analyzed with SPSS Statistics analysis software. The correlations were then analyzed and visualized in tables and diagrams, some of which are included here.
This part of the research was conducted based on a sample of 3,300 people whose data was derived from the tax books. Of the 3,300 individuals, 2,225 paid taxes. There is more information on these individuals in the records, such as place of living, occupation, religion, etc. The most important of these is the occupational data as it shows in what field these people made a living. For 1,076 individuals there is no data in the records except for their names.
The following section shows a few of the conclusions that could be made based on this data, and how they confirm the claims made above concerning Vecsés and the way the town lost some of its traditional functions. Firstly, the most telling data is the difference between overall taxes and land taxes paid by the people of Vecsés. The following bar chart shows the results. Overall taxes were 163,486.43 pengős. 37 Only 1.92% of this, 3,154 pengős, were land taxes, which, again, demonstrates that agriculture had lost its importance as a means of making a living. The amounts paid were generally small: the 3,154 pengős were collected from a total of 1,289 people. The next table contains some interesting figures concerning land taxes.
Table 3 column headings: Category (pengő), Number of people
Table 3 shows that the vast majority (810) of those paying land taxes paid less than one pengő. 386 people paid between 1.1 and 2 pengős, and 61 paid 2.1 to 5. More than 4 pengős were paid in only 32 cases, of which only one occurred where more than 100 was paid. It is interesting that one person paid almost half of the land taxes of the sample that year, 1,482.12 pengős, and it was not even a person. This outstanding amount was paid by the Magyar Tudományos Akadémia (Hungarian Academy of Sciences) for the lands the institution owned in the surroundings of Vecsés.
There is one more aspect of the Tax Book that is worth looking at. Of the 2,225 taxpayers from the sample, we know the occupation of 60 per cent. The following table shows the occupational structure of the village based on the Tax Book. The four main categories mentioned in the first part of the essay are italicized.
Table column headings: Occupation, Occurrence, %
Similarly to the census data from 1930, the Tax Book sample also shows a majority of people working in industry. If the numbers of those employed in industry (723), transport (171), and commerce (65) are added together, it shows that more than 43 per cent of individuals were working in these fields, while only 92 (4.1%) still worked in agriculture.
Closing Remarks and Future Research
So, what can be taken away from these results? Most importantly, it seems clear that the majority of the village's inhabitants worked in the industrial field. What could be gleaned from the census data in the first part of the essay was confirmed by the tax records. Agriculture was no longer a major source of income for the population of Vecsés. The constant flow of people moving into the settlement found employment in the factories and other companies of Budapest, and not in the economy of Vecsés, which led to the village losing one of the important functions. As a result, Vecsés became a residential village, and the majority of its inhabitants lived there but their work did not tie them there. It should be noted that the sudden fall in agricultural employees between 1930 and 1935 is due to the incompleteness of the records. It is safe to say that the actual proportion of industrial and agricultural workers must have been roughly the same in the years 1930 and 1935. In conclusion, modernization, along with constant domestic labor migration, had a great impact on the economic, social, and demographic structure of the village. The once typical peasant settlement changed beyond measure at the turn of the century and owning a piece of land was marginalized as a source of income. Agriculture soon became ineffective at making a living. This was typical in the surroundings of large cities such as Budapest although not unique to them as it was also extremely hard to do so in the less developed, mostly northern and eastern parts of the country. This was the reason for the enormous emigration occurring in the same period, most importantly to the United States. Vecsés (and Pest County) was not among the areas most affected by emigration; 38 nevertheless, hundreds went to the USA from the county, and Vecsés was not immune either.
The first swarm aims for Pest, trying to find livelihood in the newly built factories. But there is not enough bread there either, so they wander further to find work. This is how some head for the mines and some overseas, to the New World of America. 39 Although several people did migrate to the United States to find better opportunities, international migration did not have the same impact on Vecsés as domestic migration did. But in order to build a detailed picture of the social processes that occurred, it is required as part of the research. This work is already in progress and a detailed analysis on the opportunities for social mobility, the social networks, and the careers of the emigrated "Vecsésians" based on Hungarian and American primary sources will be undertaken as a part of the author's further research.
|
2019-05-21T13:05:17.881Z
|
2018-01-01T00:00:00.000
|
{
"year": 2018,
"sha1": "f00834ed5d3e02f6eeb563a28012caff5c6a7fe4",
"oa_license": null,
"oa_url": "http://publikacio.uni-eszterhazy.hu/2631/1/Rakita.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "69edf9cd000ae1b660ebdf86fd1ee972b52f6a30",
"s2fieldsofstudy": [
"History",
"Sociology"
],
"extfieldsofstudy": [
"Geography",
"Political Science"
]
}
|
15490402
|
pes2o/s2orc
|
v3-fos-license
|
Affine Weyl group approach to Painlevé equations
An overview is given on recent developments in the affine Weyl group approach to Painlevé equations and discrete Painlevé equations, based on the joint work with Y. Yamada and K. Kajiwara.
Introduction
The purpose of this paper is to give a survey on recent developments in the affine Weyl group approach to Painlevé equations and discrete Painlevé equations.
It is known that each of the Painlevé equations from P II through P VI admits the action of an affine Weyl group as a group of Bäcklund transformations (see a series of works [16] by K. Okamoto, for instance). Furthermore, the Bäcklund transformations (or the Schlesinger transformations) for the Painlevé equations can already be thought of as discrete Painlevé equations with respect to the parameters. The main idea of the affine Weyl group approach to (discrete) Painlevé systems is to extend this class of Weyl group actions to general root systems, and to make use of them as the common underlying structure that unifies various types of discrete system ([10]). In this paper, we discuss several aspects of affine Weyl group symmetry in nonlinear systems, based on a series of joint works with Y. Yamada and K. Kajiwara. Before starting the discussion of (discrete) Painlevé equations, we recall some definitions, following the notation of [4]. A (generalized) Cartan matrix is an integer matrix $A = (a_{ij})_{i,j \in I}$ (with a finite indexing set $I$) satisfying the conditions
$$a_{ii} = 2; \qquad a_{ij} \le 0 \ (i \ne j); \qquad a_{ij} = 0 \iff a_{ji} = 0. \qquad (1.1)$$
The Weyl group $W(A)$ associated with $A$ is defined by the generators $s_i$ ($i \in I$), called the simple reflections, and the fundamental relations
$$s_i^2 = 1, \qquad (s_i s_j)^{m_{ij}} = 1 \quad (i \ne j),$$
where $m_{ij} = 2, 3, 4, 6$ or $\infty$, according as $a_{ij}a_{ji} = 0, 1, 2, 3$ or $\ge 4$. When the Cartan matrix $A = (a_{ij})_{i,j=0}^{l}$ is of affine type (of type $A^{(1)}_l$, $B^{(1)}_l$, ..., $D^{(3)}_4$), the corresponding Weyl group is called an affine Weyl group.
We fix some notation for the case of type $A^{(1)}_l$ that will be used throughout this paper. The Cartan matrix $A = (a_{ij})_{i,j=0}^{l}$ of type $A^{(1)}_l$ $(l \ge 2)$ is defined by $a_{ii} = 2$, $a_{ij} = -1$ if $j \equiv i \pm 1 \pmod{l+1}$, and $a_{ij} = 0$ otherwise. The extended affine Weyl group $\widetilde{W}(A^{(1)}_l)$ is obtained from $W(A^{(1)}_l)$ by adjoining a generator $\pi$ (rotation of indices) such that $\pi s_i = s_{i+1}\pi$ for all $i = 0, 1, \ldots, l$; we do not impose the relation $\pi^{l+1} = 1$.
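As a small worked example (supplied here for concreteness; it is not one of the survey's numbered equations), the case $l = 2$ used repeatedly below has the Cartan matrix
$$A = (a_{ij})_{i,j=0}^{2} = \begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix},$$
so that every pair of distinct indices satisfies $a_{ij}a_{ji} = 1$, hence $m_{ij} = 3$, and the affine Weyl group $W(A^{(1)}_2)$ is presented by the relations $s_i^2 = 1$ and $(s_i s_j)^3 = 1$ for $i \ne j$.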
Variations on the theme of P IV
In this section, we present several examples of affine Weyl group actions of type $A^{(1)}_2$ to illustrate the role of affine Weyl group symmetry in (discrete) Painlevé equations and related integrable systems.
Symmetric form of P IV
Consider the following system (2.1) of nonlinear differential equations for three unknown functions $\varphi_j = \varphi_j(t)$ $(j = 0, 1, 2)$, where $' = d/dt$ denotes the derivative with respect to the independent variable $t$ and $\alpha_j$ $(j = 0, 1, 2)$ are parameters. When $\alpha_0 + \alpha_1 + \alpha_2 = 0$, this system provides an integrable deformation of the Lotka-Volterra competition model for three species. When $\alpha_0 + \alpha_1 + \alpha_2 = k \ne 0$, it is essentially the fourth Painlevé equation. In fact, from $(\varphi_0 + \varphi_1 + \varphi_2)' = k$, we have $\varphi_0 + \varphi_1 + \varphi_2 = kt + c$. Under the renormalization $k = 1$, $c = 0$, system (2.1) can be written as a single second-order equation. In view of this fact, we call (2.1) the symmetric form of the fourth Painlevé equation ($N_{\rm IV}$). This type of representation for $P_{\rm IV}$ was introduced by [19], [1] in the context of nonlinear dressing chains, and by [12] in the study of rational solutions of $P_{\rm IV}$.
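In the normalization standard in the Noumi-Yamada literature (quoted here for readability; it is this symmetric system that has the properties just listed), the equations read
$$\varphi_0' = \varphi_0(\varphi_1 - \varphi_2) + \alpha_0, \qquad \varphi_1' = \varphi_1(\varphi_2 - \varphi_0) + \alpha_1, \qquad \varphi_2' = \varphi_2(\varphi_0 - \varphi_1) + \alpha_2,$$
from which the relation $(\varphi_0 + \varphi_1 + \varphi_2)' = \alpha_0 + \alpha_1 + \alpha_2$ used above is immediate.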
The symmetric form $N_{\rm IV}$ provides a convenient framework for describing the discrete symmetry of $P_{\rm IV}$. Let $K = \mathbb{C}(\alpha, \varphi)$ be the field of rational functions in the variables $\alpha = (\alpha_0, \alpha_1, \alpha_2)$ and $\varphi = (\varphi_0, \varphi_1, \varphi_2)$. We define the derivation $' : K \to K$ by using formulas (2.1) together with $\alpha_j' = 0$ $(j = 0, 1, 2)$; we regard the differential field $(K, ')$ as representing the differential system $N_{\rm IV}$. In this setting, we say that an automorphism of $K$ is a Bäcklund transformation for $N_{\rm IV}$ if it commutes with the derivation $'$. (A Bäcklund transformation as defined above means a birational transformation of the phase space that commutes with the flow defined by the nonlinear differential system.) As we will see below, $N_{\rm IV}$ has four fundamental Bäcklund transformations that generate the extended affine Weyl group $\widetilde{W} = \langle s_0, s_1, s_2, \pi \rangle$ of type $A^{(1)}_2$. Identifying the indexing set $\{0, 1, 2\}$ with $\mathbb{Z}/3\mathbb{Z}$, we define the automorphisms $s_i$ $(i = 0, 1, 2)$ and $\pi$ of $K$ by the birational formulas (2.3), whose $\pi$-part reads $\pi(\alpha_j) = \alpha_{j+1}$, $\pi(\varphi_j) = \varphi_{j+1}$ $(j = 0, 1, 2)$.
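In the conventions customary in the Noumi-Yamada papers (reproduced here so that the surrounding discussion can be followed; the normalization is the usual one), the $s_i$ act by
$$s_i(\alpha_j) = \alpha_j - a_{ij}\,\alpha_i, \qquad s_i(\varphi_j) = \varphi_j + \frac{\alpha_i}{\varphi_i}\,u_{ij} \qquad (i, j = 0, 1, 2),$$
where $u_{i,i+1} = -u_{i+1,i} = 1$ and $u_{ij} = 0$ otherwise, with indices read modulo 3; explicitly, $s_i(\alpha_i) = -\alpha_i$, $s_i(\alpha_{i \pm 1}) = \alpha_{i \pm 1} + \alpha_i$, $s_i(\varphi_i) = \varphi_i$, and $s_i(\varphi_{i \pm 1}) = \varphi_{i \pm 1} \pm \alpha_i/\varphi_i$.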
Here $A = (a_{ij})_{i,j=0}^{2}$ stands for the Cartan matrix of type $A^{(1)}_2$ and $U = (u_{ij})_{i,j=0}^{2}$ for the orientation matrix of the Dynkin diagram (triangle) in the positive direction. These automorphisms $s_i$ and $\pi$ commute with the derivation $'$, and satisfy the fundamental relations for the generators of $\widetilde{W}(A^{(1)}_2)$. Hence we obtain a realization of the extended affine Weyl group $\widetilde{W}(A^{(1)}_2)$ as a group of Bäcklund transformations for $N_{\rm IV}$. Notice that the action of the affine Weyl group $W = \langle s_0, s_1, s_2 \rangle$ on the $\alpha$-variables is identical to its canonical action on the simple roots.
We remark that the affine Weyl group symmetry is deeply related to the structure of special solutions of $P_{\rm IV}$ (with the parameters $\alpha_j$ as in $N_{\rm IV}$). Along each reflection hyperplane $\alpha_j = n$ $(j = 0, 1, 2;\ n \in \mathbb{Z})$ in the parameter space, $P_{\rm IV}$ has a one-parameter family of classical solutions expressed in terms of Toeplitz determinants of Hermite-Weber functions; each solution of this class is obtained by Bäcklund transformations from a seed solution at $\alpha_j = 0$ which satisfies a Riccati equation. Also, at each point of the $W$-orbit of the barycenter $(\alpha_0, \alpha_1, \alpha_2) = (\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3})$ of the fundamental alcove, it has a rational solution expressed in terms of Jacobi-Trudi determinants of Hermite polynomials.
q-Difference analogue of P IV
We now introduce a multiplicative analogue of the birational realization (2.3) of the extended affine Weyl group $\widetilde{W} = \langle s_0, s_1, s_2, \pi \rangle$ ([5]). Taking the field of rational functions $L = \mathbb{C}(a, f)$ in the variables $a = (a_0, a_1, a_2)$ and $f = (f_0, f_1, f_2)$, we define the automorphisms $s_0, s_1, s_2, \pi$ of $L$ by the formulas (2.6), where the $a_j$ are the multiplicative parameters corresponding to the simple roots $\alpha_j$. These automorphisms again satisfy the fundamental relations for the generators of $\widetilde{W}$. In the following, the $W$-invariant $a_0 a_1 a_2 = q$ plays the role of the base for $q$-difference equations. If one parameterizes $a_j$ and $f_j$ as in (2.7), with a small parameter $\varepsilon$, one can recover the original formulas (2.3) from (2.6) by taking the limit $\varepsilon \to 0$. A $q$-difference analogue of (the symmetric form of) $P_{\rm IV}$ is given by (2.8), where $T$ stands for the discrete time evolution ([5]). If we consider the $f_j$ as functions of $t$, (2.8) implies that the discrete time evolution $T$ is identified with the $q$-shift operator $t \mapsto qt$, so that $Tf_j(t) = f_j(qt)$. In this sense, formula (2.8) defines a system of nonlinear $q$-difference equations, which we call the fourth $q$-Painlevé equation ($qP_{\rm IV}$).
The time evolution $T$, regarded as an automorphism of $L$, commutes with the action of $\widetilde{W}$ that we already described above. Namely, the $q$-difference system $qP_{\rm IV}$ admits the action of the extended affine Weyl group $\widetilde{W}$ as a group of Bäcklund transformations. Again, by taking the limit as $\varepsilon \to 0$ under the parametrization (2.7), one can show that the $q$-difference system $qP_{\rm IV}$, as well as its affine Weyl group symmetry, reproduces the differential system $N_{\rm IV}$. It is known that $qP_{\rm IV}$ defined above shares many characteristic properties with the original $P_{\rm IV}$. For example, it has classical solutions expressed in terms of continuous $q$-Hermite-Weber functions, and rational solutions expressed in terms of continuous $q$-Hermite polynomials, analogously to the case of $P_{\rm IV}$ ([5], [7]). We also remark that, when $a_0 a_1 a_2 = 1$, one can regard $qP_{\rm IV}$ as a discrete integrable system which generalizes a discrete version of the Lotka-Volterra equation.
Ultra-discretization of P IV
It should be noticed that the discrete time evolution of $qP_{\rm IV}$ is defined in terms of a subtraction-free birational transformation; we say that a rational function is subtraction-free if it can be expressed as a ratio of two polynomials with real positive coefficients. Recall that there is a standard procedure, called ultra-discretization, of passing from subtraction-free rational functions to piecewise linear functions ([18], [2], see also [15]). Roughly, it is the procedure of replacing the operations $(\times, \div, +)$ by $(+, -, \max)$. Introducing the variables $A_j, F_j$ $(j = 0, 1, 2)$ corresponding to $a_j, f_j$, from $qP_{\rm IV}$ we obtain a system of piecewise linear difference equations by ultra-discretization, which we call the fourth ultra-discrete Painlevé equation ($uP_{\rm IV}$). Simultaneously, the affine Weyl group symmetry of $qP_{\rm IV}$ is ultra-discretized into piecewise linear form (2.11). This time, the extended affine Weyl group $\widetilde{W}$ is realized as a group of piecewise linear transformations on the affine space with coordinates $(A, F)$. We also remark that, when $A_0 + A_1 + A_2 = Q = 0$, $uP_{\rm IV}$ gives rise to an ultra-discrete integrable system. It would be an interesting problem to analyze special solutions of the ultra-discrete system $uP_{\rm IV}$.
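A small numerical sketch may make the limiting procedure concrete; the script and variable names below are ours and only demonstrate the general substitution $x = e^{X/\varepsilon}$, $\varepsilon \to 0^{+}$, not the specific equations of $uP_{\rm IV}$:

import math

def ud(value, eps):
    # The ultra-discrete scaling: eps * log(value).
    return eps * math.log(value)

# Under x = exp(X/eps), y = exp(Y/eps) and eps -> 0+, ultra-discretization sends
# (x*y, x/y, x+y) to (X+Y, X-Y, max(X, Y)).
X, Y = 2.0, 3.0
for eps in (1.0, 0.1, 0.01):
    x, y = math.exp(X / eps), math.exp(Y / eps)
    print(eps, ud(x * y, eps), ud(x / y, eps), ud(x + y, eps))
# The first two columns equal X + Y = 5.0 and X - Y = -1.0 exactly for every eps,
# while the last column converges to max(X, Y) = 3.0 as eps -> 0+.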
Discrete symmetry of Painlevé equations
In this section, we propose a uniform description of discrete symmetry of the Painlevé equations P J for J = II, IV, V, VI. We also give some remarks on a generalization of this class of birational Weyl group action to arbitrary root systems.
For each J = II, III, ..., VI, it is known that the parameter space for $H_J$ is identified with the Cartan subalgebra of a semisimple Lie algebra, and that an extension of the corresponding affine Weyl group acts on $K$ as a group of Bäcklund transformations ([16]). A table of fundamental Bäcklund transformations for $H_J$ can be found in [9].
If the Hamiltonian $H$ is chosen appropriately, the affine Weyl group symmetry of $H_J$ for J = II, IV, V, VI can be described in a universal way in terms of root systems. With the notation of [4], the type of the affine root system is specified as follows: $A^{(1)}_1$ for $H_{\rm II}$, $A^{(1)}_2$ for $H_{\rm IV}$, $A^{(1)}_3$ for $H_{\rm V}$, and $D^{(1)}_4$ for $H_{\rm VI}$. 1
1 Other realizations of the affine root system can also be used for describing the same group of Bäcklund transformations. It seems natural to expect that the principle to be discussed below should apply to $H_{\rm III}$ as well, but we have not completely understood the case of $H_{\rm III}$ yet.
Discrete symmetry of H J
Our main observation concerning the discrete symmetry of H J can be summarized as follows.
(2) For each $i = 0, 1, \ldots, l$, define $s_i$ to be the unique automorphism of $K = \mathbb{C}(q, p, t, \alpha)$ determined by the (locally nilpotent) adjoint action $\mathrm{ad}_{\{\,\}}(\varphi_i)$ described below. Then these $s_i$ are canonical Bäcklund transformations for $H_J$. Furthermore, the subgroup $W = \langle s_0, s_1, \ldots, s_l \rangle$ of $\mathrm{Aut}(K)$ is isomorphic to the affine Weyl group $W(X_l)$.
Note that, for each $\psi \in R$, $s_i(\psi)$ is determined as a finite sum, since the action of $\mathrm{ad}_{\{\,\}}(\varphi_i)$ on $R$ is locally nilpotent. A choice of the generators $\varphi_i$ $(i = 0, 1, 2, \ldots, l)$ with the properties of Theorem 1 is given in (3.9). We remark that, in the case of $H_J$ (J = II, IV, V) of type $A^{(1)}_l$ $(l = 1, 2, 3)$, we also have the Bäcklund transformation $\pi$ corresponding to the diagram rotation; its action is given simply by $\pi(\alpha_j) = \alpha_{j+1}$, $\pi(\varphi_j) = \varphi_{j+1}$.
If we use the polynomials $\varphi_j$ as dependent variables, the Hamiltonian system $H_{\rm IV}$, for example, is rewritten, with the convention $\varphi_{j+3} = \varphi_j$, in a form from which we obtain the symmetric form $N_{\rm IV}$ by a simple rescaling of the variables. We remark that the polynomials $\varphi_j$ are the factors of the "leading term" of the Hamiltonian $H$. Also, in the context of irreducibility of Painlevé equations, the polynomials $\varphi_j$ are the fundamental invariant divisors along the reflection hyperplanes $\alpha_j = 0$ (see [8], [17], for instance). When $\alpha_j = 0$, the specialization of $H_J$ by $\varphi_j = 0$ gives rise to a Riccati equation that reduces to a linear equation of hypergeometric type; for J = II, IV, V and VI, the differential equations of Airy, Hermite-Weber, Kummer and Gauss appear in this way, respectively. Apart from differential equations, this class of birational realization of Weyl groups as in Theorem 1 can be formulated for an arbitrary Cartan matrix by means of Poisson algebras (see [13] for the details). In this sense, Bäcklund transformations for Painlevé equations $P_J$ (J = II, IV, V, VI) have a universal nature with respect to root systems. In the case where $A$ is of affine type, such a birational realization of the affine Weyl group appears as the symmetry of systems of nonlinear partial differential equations of Painlevé type, obtained by similarity reduction from the principal Drinfeld-Sokolov hierarchy (of modified type). The case of type $A^{(1)}_l$ will be mentioned in the next section. As for the original Painlevé equations, $P_{\rm II}$, $P_{\rm IV}$ and $P_{\rm V}$ are in fact obtained by similarity reduction from the $(l + 1)$-reduced modified KP hierarchy for $l = 1, 2, 3$, respectively. For $P_{\rm VI}$, an $8 \times 8$ Lax pair is constructed in [14] in the framework of the affine Lie algebra $\widehat{\mathfrak{so}}(8)$. This Lax pair is compatible with the affine Weyl group symmetry of Theorem 1. It is not clear, however, how this construction should be understood in relation to the Drinfeld-Sokolov hierarchy of type $D^{(1)}_4$.
Painlevé systems with $W(A^{(1)}_{N-1})$ symmetry
In this section, we introduce Painlevé systems and q-Painlevé systems with affine Weyl group symmetry of type A; this part can be regarded as a generalization, to higher rank cases, of the variations of P IV discussed in Section 2. In the following, we fix two positive integers M, N , and consider a Painlevé system, as well as its q-version, attached to (M, N ).
Painlevé system of type (M, N )
We investigate the compatibility condition for a system of linear differential equations (4.1), where $\psi = (\psi_1, \ldots, \psi_N)^t$ is the column vector of unknown functions, and $A$, $B_m$ are $N \times N$ matrices, both depending on $(z, t) = (z, t_1, \ldots, t_M)$. We assume suitable normalization conditions for the coefficients of the $B$ matrices. We define the Painlevé system of type $(M, N)$ to be the system of nonlinear partial differential equations (4.3) with the similarity constraint (4.6). We remark that, when $(M, N) = (3, 2), (2, 3), (2, 4)$, this system reduces essentially to the Painlevé equations $P_{\rm II}$, $P_{\rm IV}$, $P_{\rm V}$, respectively. When $(M, N) = (2, N)$ $(N \ge 2)$, it corresponds to the higher order Painlevé equation of type $A^{(1)}_{N-1}$ discussed in [11]. Note that the linear problem (4.1) defines a monodromy preserving deformation of a linear ordinary differential system of order $N$ on $\mathbb{P}^1$, with one regular singularity at $z = 0$ and one irregular singularity at $z = \infty$.
The Painlevé system of type $(M, N)$ admits the action of the affine Weyl group $W = \langle s_0, s_1, \ldots, s_{N-1} \rangle$ of type $A^{(1)}_{N-1}$ as a group of Bäcklund transformations. Expressing the matrix $A$ in terms of coordinates $\varphi$ for the matrix $-A$, the ring $R$ of polynomials in the $\varphi$-variables has a natural structure of Poisson algebra. In these coordinates, the Bäcklund transformation $s_i$ is determined by the same universal adjoint-action formula as in Section 3.
Turning to the $q$-difference case, we regard the compatibility condition of the associated linear $q$-difference system (4.10) as the Zakharov-Shabat equation for the $N$-reduced modified $q$-KP hierarchy; in this formulation, all the time variables $t_1, \ldots, t_M$ are treated equally. The Euler operator is $T = T_1 \cdots T_M$. In the linear $q$-difference system (4.10), the matrix $A$ is chosen in the specific form (4.14); the compatibility condition is then equivalent to a homogeneity condition. In general, the resulting rational functions $F^{(m,n)}_i(t, u)$ are complicated. It turns out, however, that by introducing new variables $x^i_j$ the time evolution of the $q$-Painlevé system can be described explicitly by means of a birational affine Weyl group action on the $x$-variables.
A birational Weyl group action on the matrix space
For describing the time evolution $T_i$ of the $q$-Painlevé system, we introduce a birational action of the direct product $\widetilde{W}_M \times \widetilde{W}_N$ of the two extended affine Weyl groups of types $A^{(1)}_{M-1}$ and $A^{(1)}_{N-1}$. Introducing two parameters $q, p$, we take $\mathbb{C}(q, p)$ as the ground field. Let $K = \mathbb{C}(q, p)(x)$ be the field of rational functions in the $MN$ variables $x^i_j$ $(1 \le i \le M;\ 1 \le j \le N)$; we regard the $x$-variables as the canonical coordinates of the affine space of $M \times N$ matrices. For convenience, we extend the indices $i, j$ of $x^i_j$ to $\mathbb{Z}$ by setting $x^{i+M}_j = q\,x^i_j$, $x^i_{j+N} = p\,x^i_j$. We define the automorphisms $r_k$ $(k \in \mathbb{Z}/M\mathbb{Z})$, $\omega$, $s_l$ $(l \in \mathbb{Z}/N\mathbb{Z})$, $\pi$ of $K$ by explicit subtraction-free formulas; note that all these automorphisms represent subtraction-free birational transformations on the affine space of $M \times N$ matrices.
Theorem 2 The automorphisms $r_0, \ldots, r_{M-1}, \omega$ and $s_0, \ldots, s_{N-1}, \pi$ of $K$ defined as above give a realization of the product $\widetilde{W}_M \times \widetilde{W}_N$ of extended affine Weyl groups.
By using this birational action of the extended affine Weyl group $\widetilde{W}_M = \langle r_0, \ldots, r_{M-1}, \omega \rangle$, we define $\gamma_1, \ldots, \gamma_M$ by
$$\gamma_k = r_{k-1} \cdots r_1\, \omega\, r_{M-1} \cdots r_k \qquad (k = 1, \ldots, M). \tag{4.22}$$
We remark that these elements $\gamma_k$ generate a free abelian subgroup $L \simeq \mathbb{Z}^M$, and that the extended affine Weyl group $\widetilde{W}_M$ decomposes into the semidirect product $L \rtimes \mathfrak{S}_M$ of $L$ and the symmetric group of degree $M$, which acts on $L$ by permuting the indices of the $\gamma_k$.
Theorem 3 In terms of the variables $x^i_j$ defined by (4.18), the time evolution of the $q$-Painlevé system of type $(M, N)$ is described by the action of the commuting elements $\gamma_1, \ldots, \gamma_M \in \widetilde{W}_M$; the remaining factor $\widetilde{W}_N$ of type $A^{(1)}_{N-1}$ then acts as a group of Bäcklund transformations. One can show that the fourth $q$-Painlevé equation $qP_{\rm IV}$ discussed in Section 2 arises from the $q$-Painlevé system of type $(M, N) = (2, 3)$, consistently with the differential case.
Finally we give some remarks on the ultra-discretization. From the birational action of $\widetilde{W}_M \times \widetilde{W}_N$ with two parameters $q, p$, we obtain a piecewise linear action of the same group on the space of $M \times N$ matrices, with two parameters $Q, P$ corresponding to $q, p$. When $P = 0$, the commuting piecewise linear flows $\gamma_k \in \widetilde{W}_M$ may be called the ultra-discrete Painlevé system of type $(M, N)$. When $P = Q = 0$, it specializes to an integrable ultra-discrete system; it gives rise to an $M$-periodic version of the box-ball system. This class of piecewise linear actions is closely related to the combinatorics of crystal bases. The coordinates of the $M \times N$ matrix space can be identified with the coordinates for the tensor product $B^{\otimes M}$ of $M$ copies of the crystal basis $B$ for the symmetric tensor representation of $\mathrm{GL}_N$. Under this identification, it turns out that the piecewise linear transformations $r_k$ and $s_l$ with $P = Q = 0$ represent the action of the combinatorial $R$ matrices and Kashiwara's Weyl group action on $B^{\otimes M}$, respectively (see [15]).
Patient satisfaction with anaesthesia services and associated factors at the University of Gondar Hospital, 2013: a cross-sectional study
Patient satisfaction is the degree to which patients' expectations are fulfilled; it is an important component and quality indicator of anaesthesia services. It can be affected by anaesthetist-patient interaction, perioperative anaesthetic management, and postoperative follow-up. No previous study had been conducted in our setup. The aim was to assess patient satisfaction with anaesthesia services and associated factors. An institution-based cross-sectional study was conducted from April 15-30, 2013 at the University of Gondar referral and teaching hospital. All patients operated upon under either general or regional anaesthesia during the study period were included. A standardized questionnaire was used for the postoperative patient interview. Data were entered and analyzed using Statistical Package for Social Sciences (SPSS) Windows version 20. The chi-square test was used to assess the association between each factor and the overall satisfaction of patients. The proportion of patients who said they were satisfied with anaesthesia services is presented as a percentage. A total of 200 patients were operated upon under anaesthesia during the study period. Of these, a total of 156 patients were included in this study, with a response rate of 78 %. The overall proportion of patients who said they were satisfied with anaesthesia services was 90.4 %. Factors that affected patient satisfaction negatively (dissatisfaction level and p value) were general anaesthesia (12.6 %, P = 0.046), intraoperative awareness (50 %, P < 0.001), pain during operation (61.1 %, P < 0.001), and pain immediately after operation (25 %, P < 0.001), respectively. Patient satisfaction with anaesthesia services was low in our setup compared with many previous studies. The factors that affected patient satisfaction negatively may be preventable or better treated. Awareness of the current problem should be created and training should be given to anaesthetists.
Background
Measuring the degree of patient satisfaction can be achieved with a variety of tools, such as postoperative visits and questionnaires [1,2]. Many factors contribute to patient satisfaction, including accessibility and convenience of services, which depend upon institutional structures, interpersonal relationships, competence of health professionals, and patient expectations and preferences [3].
Poor quality of anaesthesia services may discourage patients from using available services, because health concerns are among the most important of human concerns [4,5]. Patient satisfaction is the balance between expectations and perceptions of what was received. Where there are concerns, staff must continue to identify, monitor, and modify factors that may improve satisfaction [6]. Patient satisfaction with anaesthesia services remains the best way to assess the outcome from the patients' point of view [7]. But it is difficult to measure satisfaction, especially during the preoperative and intraoperative periods [8][9][10][11][12].
There is inconsistency in reports of patient satisfaction, which may be attributed to differences in institutional structures, interpersonal relationships, competence of health professionals, patient expectations and preferences, and variations in the tools used for data collection.
Of patients operated upon under general anaesthesia and local anaesthesia, 87 % were satisfied, 0.5 % were dissatisfied, and 12.5 % had no opinion [13]. Another study revealed that 54 % of patients achieved an overall satisfaction of less than 85 %, and that female, educated, and American Society of Anesthesiologists physical status (ASA) 1-2 patients were less satisfied [14].
In a study from Australia, patient satisfaction and other predetermined outcomes such as nausea, vomiting, pain, and complications were assessed. The overall level of satisfaction was high (96.8 %); 2.3 % of patients were somewhat dissatisfied and 0.9 % were dissatisfied with their anaesthetic care [15].
In a study of Japanese patients, 3.9 % were dissatisfied with anaesthesia. The rates of dissatisfaction were higher in women than in men and in spinal anaesthesia than in general anaesthesia, and were observed mostly in patients aged 20 to 39 years [16]. In a British study of patients' perception of the anaesthetist, patient satisfaction with continuity of personal care by the anaesthetist was significantly increased by the introduction of a single postoperative visit by the anaesthetist compared with no visit at all [17].
A study conducted in Taiwan showed higher satisfaction scores with anaesthesia (including waiting time for surgery in the operating room, attitude of the anaesthetic staff, and management of complications at the postoperative visit) among patients who were offered a short teaching video than among patients who received a traditional preoperative visit [18]. The clothing worn by the anaesthetist did not affect patients' satisfaction [19]. Studies showed that 62 % of patients recalled being very anxious and afraid of the anaesthetic as well as the operation, and 40 % said that their recall before the operation was of a mask covering their face [20].
Patient involvement in decision making increases satisfaction with anaesthesia services [21]. A study conducted in Kashihara showed that the most undesirable postoperative outcomes were vomiting, nausea, sore throat, postoperative pain, and memory of extubation [22]. A study in Greece showed that overall patient satisfaction with anaesthesia services was in the range of 96.3-98.6 % [23].
Patient satisfaction is one of the indicators of the quality of health care provision. The department of anaesthesia at the University of Gondar referral and teaching hospital provides health services for millions of patients. As far as our search and knowledge are concerned, no previous research had been done in the study area that might reveal the level of patient satisfaction. Therefore, the aim of this study was to determine the level of patient satisfaction with anaesthesia services and associated factors.
Methods
Study design A quantitative cross-sectional study design was used.
Study area and period The University of Gondar referral and teaching hospital is found in the North Gondar Administrative Zone, Amhara Regional State, about 727 km northwest of Addis Ababa (the capital city of Ethiopia). It is one of the largest hospitals in the country and provides health services for more than five million people in its catchment area. The hospital has 500 beds, seven operation theatres, and one medical intensive care unit. According to the annual report of the anaesthesia department, 5695 patients were operated upon under anaesthesia in 2013. The study was conducted from April 15-30, 2013.
Study population All emergency and elective patients (minor, major, surgery, gynecology, obstetric, and ophthalmology) who were operated upon under anaesthesia during the study period were included. Patients less than 18 years old, patients who were unconscious 24 h after the operation, patients who were discharged before 24 h, and patients who refused to participate in the study were excluded.
Variables of the study
Dependent variables Proportion of patients who would say they were satisfied.
Related variables Socio-demographic variables: sex, age, educational Status.
Factors related with surgery, anaesthesia and disease status Type of surgery, type of anaesthesia and ASA status.
Factors related with preoperative anaesthesia evaluation Self-introduction of the anaesthetist, way of approach, giving the patient a chance to choose the type of anaesthesia, giving the patient a chance to ask questions, and giving adequate information about the type of anaesthesia.
Factors related with reception in the operation theatre and intra operative management Reception, pain on induction, privacy, awareness, pain during operation and immediately after operation.
Factors related with postoperative anaesthetic revisit Anaesthetist revisit and number of visits.
Factors related with anaesthetic related discomforts and complications Sore throat, discomfort, depression, pain, nausea, vomiting and shivering.
Operational definitions Patient satisfaction
Patient satisfaction with anaesthesia services at the University of Gondar hospital would be classified as satisfactory if the proportion of patients who said they were satisfied was >95 % (the average from previous studies).
Major operation
A major operation was defined as any invasive operative procedure in which a more extensive resection is performed, e.g., a body cavity is entered, organs are removed, or normal anatomy is altered; in general, a mesenchymal barrier is opened (pleural cavity, peritoneum, meninges).
Minor operation
A minor operation was defined as any invasive operative procedure in which only skin or mucus membranes and connective tissue are resected, e.g., vascular cutdown for catheter placement or implanting pumps in subcutaneous tissue.
Intraoperative awareness
A recalled event occurring during anaesthesia/surgery that was confirmed (or otherwise) by the attending personnel present in the operating room was considered as awareness [24].
Postoperative depression
Defined as having unexplained feelings of sadness and hopelessness after a surgery.
Postoperative nausea and vomiting
When the patient experiences at least one or more episode of nausea or vomiting and/or both within 24 h of operation.
Postoperative sore throat
When the patient experiences pain during swallowing (liquid or solid food) or develops hoarseness of voice within 24 h after the operation, among those patients who were managed with general anaesthesia with tracheal intubation.
Sample size and sampling procedure All elective and emergency patients (minor, major, surgery, gynecology, obstetric, ophthalmology) who were operated upon under anaesthesia during the study period were included.
Data collection procedure A pre-tested Amharic-version questionnaire and checklist were used for data collection. The questionnaire was first developed in Amharic and translated into English by English language experts; the English version was then translated back into Amharic by Amharic language experts. A pre-test was done on twenty patients in another hospital, and corrections were made before the data collection. Data were collected on socio-demographic characteristics and on factors associated with the preoperative, intraoperative, and postoperative anaesthesia management of patients 24 h after the operation, using face-to-face interviews. Two BSc-holding anaesthetists were involved in data collection after training.
Data analysis and interpretation SPSS Windows version 20 was used for data entry and analysis. The chi-square test was used to assess the association between each factor and the overall satisfaction of patients. The findings are presented as percentages, and tables are used for presentation of the descriptive statistics.
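As a rough illustration of the association test just described, the same computation can be reproduced outside SPSS; the minimal sketch below uses Python with SciPy, and the 2 x 2 table is assembled from counts quoted later in the text (22 patients with intraoperative awareness, 11 of them dissatisfied, and roughly 15 dissatisfied patients overall), so it is illustrative rather than the authors' exact data.

from scipy.stats import chi2_contingency

# Rows: intraoperative awareness vs. no awareness; columns: satisfied, dissatisfied.
# Counts are illustrative, assembled from figures quoted in this paper.
table = [[11, 11],    # awareness: 22 patients, 11 dissatisfied
         [130, 4]]    # no awareness: remaining patients (approximate split)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")
# p < 0.05 would indicate an association between intraoperative awareness and
# dissatisfaction, in line with the P < 0.001 reported in the Results section.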
Ethical consideration Ethical approval was obtained from the School of Medicine (Gondar College of Medicine and Health Sciences) ethical review committee since our department is under school of medicine. There is no other independent ethical approval body other than from the respective schools, institutes and Vice President for Research and Community Services ethical approval committees where all of them are parts of the University of Gondar. Oral informed consent was obtained from the study subjects and the aim of the research was explained to the patients and those who were not voluntary were excluded. Confidentiality was ensured by avoiding personal identifications, keeping questionnaires and checklists locked. Those patients with pain during induction of anaesthesia as well as the intraoperative and postoperative periods, postoperative nausea and vomiting and intraoperative awareness were reassured and their responsible anaesthetists, nurses and physicians were informed about the problem.
Sociodemographic characteristics of the study participants
A total of 200 patients were operated upon under anaesthesia during the study period. Of these, 4 (2 %) refused, 20 (10 %) were <18 years old, 10 (5 %) were discharged in the first 24 h after surgery, and 10 (5 %) were unconscious postoperatively.
A total of 156 patients were included in this study, with a response rate of 78 %. The majority of the study subjects, 81 (51.9 %), were female and the remaining 75 (48.1 %) were male. Most of the respondents, 83 (53.2 %), were in the age group of 18-29 years, 47 (30.1 %) were in the age group of 30-49 years, 17 (10.9 %) were in the age group of 50-65 years, and 9 (5.8 %) were in the age group of >65 years, respectively. The male to female ratio was 0.9:1, and the mean age of the study subjects was 33 years (SD = 14.8), with minimum and maximum values of 18 and 81 years, respectively. Of the participants, 62 (39.7 %) were unable to read and write, 45 (28.8 %) were able to read and write, 35 (22.4 %) had attended grades 9-12, and 14 (9 %) had studied above grade 12.
Preoperative anaesthetic evaluation
Of the 156 patients, the anaesthetist introduced him/herself to 80 (51.3 %); 153 (98.1 %) of the participants stated that the approach of the anaesthetists was good, 79 (50.6 %) of the respondents were not given adequate information about anaesthesia, 114 (73.1 %) were not given a chance to choose the type of anaesthesia, and 119 (76.3 %) were not given a chance to ask questions about anaesthesia (Table 2).
Reception in the operating theatre and intraoperative patient management
Out of 156 patients, 154 (98.7 %) stated that the reception by the anaesthetist was good, the privacy of 151 (96.8 %) respondents was kept by the anaesthetists, 147 (94.2 %) of the participants did not feel pain during induction of anaesthesia, and 89 (57.1 %) of the respondents did not recall anything intraoperatively. One hundred and thirty-eight (88.5 %) and 104 (66.7 %) patients had no pain during and immediately after the operation, respectively (Table 3).
Postoperative anaesthetist revisit
Out of 156 patients, 125 (80.1 %) were not visited postoperatively, while 19 (12.2 %) and 12 (7.7 %) were visited once and twice or more, respectively. All patients who were revisited postoperatively were treated for their complaints (Table 4).
Patient satisfaction with anaesthesia services
The overall proportion of patients who said they were satisfied with anaesthesia services in this study was 90.4 %. Twenty-two out of 111 patients operated upon under general anaesthesia experienced intraoperative awareness, and 11 (50 %) of these patients were dissatisfied (satisfied vs. dissatisfied, P < 0.001). Eighteen patients experienced pain during the operation, which dissatisfied 11 (61.1 %) of them (satisfied vs. dissatisfied, P < 0.001). In addition, 52 patients experienced pain immediately after the operation; of these, 13 (25 %) were dissatisfied (satisfied vs. dissatisfied, P < 0.001) (Table 6).
Discussion
This study showed that the overall proportion of patients who said they were satisfied with anaesthesia services at the University of Gondar teaching hospital was 90.4 %. This finding was low compared with other studies [2,4,15,20,22]. This could be due to poor preoperative evaluation and preparation and poor intraoperative and postoperative patient management in our setup. In this study, the level of satisfaction of males was 88 % and that of females was 93.6 %, but a study done in India showed that females were less satisfied [1]. This difference could be because only short-stay surgical inpatients were included in the Indian study.
In our study, those who were unable to read and write had a lower level of satisfaction (87 %), but a study conducted in Saudi Arabia showed that the educated were less satisfied [14]. This discrepancy might be due to a difference in study design: we interviewed patients after the operation, which could allow the problem to be explored properly, whereas a self-administered questionnaire was used in Saudi Arabia.
As age increased, the patient satisfaction rate decreased, whereas in the study conducted in Japan the patient dissatisfaction rate was high at the age of 20-39 years. This difference might be because a small number of aged patients were included in our study, whereas in Japan a large number of aged patients were included [16]. In this study, more patients operated upon under regional anaesthesia were satisfied than those operated upon under general anaesthesia (general vs. regional, P < 0.001), but in the study conducted in Japan patients who were operated upon with regional anaesthesia were dissatisfied [16]. This discrepancy might be because a small number of patients were given regional anaesthesia in our setup and, of these, the majority were managed under spinal anaesthesia. On the other hand, in the Japanese study most patients were operated upon under peripheral block, which might cause pain during the block and may lead to dissatisfaction.
Patients who had a postoperative visit (100 %) were more satisfied than those who did not get a visit at all (visit vs. no visit, P = 0.488). This finding was similar to that of a study conducted in Austria [17]. This could be because of reassurance of the patients, and because the patients might get treatment for their complaints at the right time.
Perioperative pain relief is one of the key reflections of the level of patient satisfaction with the anaesthesia service provided in hospitals. Eighteen (11.5 %) patients had pain during the operation; of these, 11 (61.1 %) were dissatisfied (pain vs. no pain, P < 0.001). On the other hand, 52 out of 156 (33.3 %) patients had pain immediately after the operation; of these, 13 (25 %) were dissatisfied (pain vs. no pain, P < 0.001). This could be due to the substandard perioperative pain management practice in our hospital, which is consistent with another study [20].
Only 42 (26.9 %) patients had a chance to choose the anaesthesia; of these, 4 (9.5 %) were dissatisfied (chance vs. no chance, P = 0.981). On the other hand, 114 (73.1 %) patients did not get a chance to choose the anaesthesia; of these, 45 (36.5 %) were dissatisfied. Out of 156 participants, 37 (23.7 %) got a chance to ask questions; of these, 1 (2.7 %) was dissatisfied. In contrast, 119 (76.3 %) did not get a chance to ask questions, and of these, 14 (11.8 %) patients were dissatisfied (chance vs. no chance, P = 0.102), which might be due to anxiety. These findings show that patient involvement in decision making seems to have a positive impact on patient satisfaction with the anaesthesia service, which is in accordance with another study [21].
Twenty-two out of 111 patients operated upon under general anaesthesia experienced intraoperative awareness, and 11 (50 %) of these patients were dissatisfied (awareness vs. no awareness, P < 0.001). Our finding was high compared with most previous studies [24][25][26][27]. This may be attributed to a lack of optimal perioperative patient management, including the pain management experience of professionals, in our setup compared with developed countries.
Limitation of the study
The limitations of the study were the small sample size due to time constraints, the short time elapsed between anaesthesia and the assessment of satisfaction, and the use of a questionnaire with dichotomous answers.
Conclusion
The overall proportion of patients who said they were satisfied with anaesthesia services was low at the University of Gondar referral and teaching hospital.
General anaesthesia, pain during operation, intraoperative awareness, and pain immediately after operation were the determinant factors for patient dissatisfaction with anaesthesia services in our set up. These factors may be preventable or better treated.
Recommendation
The problem should be given emphasis, and awareness should be created among all anaesthetists. Anaesthetists need to give patients a chance to ask questions and to choose the type of anaesthesia, and should explain any possible side effects and potential complications related to anaesthesia. Further studies should be conducted with a larger sample size.
A Positive Regulatory Feedback Loop between EKLF/KLF1 and TAL1/SCL Sustaining the Erythropoiesis
The erythroid Krüppel-like factor EKLF/KLF1 is a hematopoietic transcription factor binding to the CACCC DNA motif and participating in the regulation of erythroid differentiation. With combined use of microarray-based gene expression profiling and the promoter-based ChIP-chip assay of E14.5 fetal liver cells from wild type (WT) and EKLF-knockout (Eklf−/−) mouse embryos, we identified the pathways and direct target genes activated or repressed by EKLF. This genome-wide study, together with the molecular/cellular analysis of mouse erythroleukemic (MEL) cells, indicates that among the downstream direct target genes of EKLF is Tal1/Scl. Tal1/Scl encodes another DNA-binding hematopoietic transcription factor, TAL1/SCL, known to be an Eklf activator and essential for definitive erythroid differentiation. Further identification of the authentic Tal1 gene promoter, in combination with the in vivo genomic footprinting approach and DNA reporter assay, demonstrates that EKLF activates the Tal1 gene through binding to a specific CACCC motif located in its promoter. These data establish the existence of a previously unknown positive regulatory feedback loop between two DNA-binding hematopoietic transcription factors, which sustains mammalian erythropoiesis.
Introduction
Erythropoiesis is a dynamic process sustained throughout the whole lifetime of vertebrates for the generation of red blood cells from pluripotent hematopoietic stem cells (HSCs). In the ontogeny of mouse erythropoiesis, the major locations of HSCs change several times in an orderly manner, converting from the embryonic yolk sac to the fetal liver, and then to the spleen and bone marrow in adult mice [1]. In each of these tissues, the multistep differentiation process of erythropoiesis is accompanied by a series of lineage-specific activations and restrictions of gene expression, as mediated by several erythroid-specific/erythroid-enriched transcription factors, including GATA1, TAL1/SCL, NF-E2, and EKLF [2][3][4][5]. Among the factors regulating erythropoiesis is the erythroid Krüppel-like factor (EKLF/KLF1), a pivotal regulator that functions in erythroid differentiation and fate decision through the bipotential megakaryocyte-erythroid progenitors (MEPs) [6][7][8][9][10], as well as in the homeostasis of HSCs [11]. Eklf is the first identified member of the KLF family of genes; it is expressed in erythroid cells, mast cells, and their precursors [12,13], as well as in some other types of hematopoietic cells, but at low levels [6,14] (BioGPS). The critical function of Eklf in erythropoiesis was initially demonstrated through gene ablation studies, with the Eklf-knockout mice (Eklf −/− ) displaying severe anemia and dying in utero at around embryonic (E) day 14.5 (E14.5) [15,16]. The impairment of definitive erythropoietic differentiation is a major cause of embryonic lethality in Eklf −/− mice, in addition to β-thalassemia [17,18].
EKLF regulates its downstream genes, including the adult β-globin genes, through binding of its C-terminal C2H2 zinc finger domain to the canonical binding sequence CCNCNCCC located in the promoters or enhancers [18][19][20] and through the recruitment of co-activators, e.g., CBP/p300 [21] and the SWI/SNF-related chromatin remodeling complex [22,23], or corepressors [7,18]. Moreover, clinical associations exist between the Eklf gene and different human hematopoietic phenotypes or diseases [10,24,25]. In erythroid progenitors, e.g., CFU-E and Pro-E, EKLF is mainly located in the cytoplasm. Upon differentiation of Pro-E to Baso-E, EKLF is imported into the nucleus and forms distinct nuclear bodies [20,26]. Genome-wide analysis of the global functions of mouse EKLF through the identification of its direct transcription target genes has been conducted using ChIP-Seq in combination with gene expression profiling [27,28]. The results from these studies suggest that EKLF functions mainly as a transcription activator in cooperation with TAL1/SCL and/or GATA1 to target genes including those required for terminal erythroid differentiation [27,29]. However, much remains to be reconciled between the two studies with respect to the diversity of the genomic EKLF-binding locations and the deduced EKLF regulatory networks.
Besides EKLF, there are several other factors that have been shown to regulate erythropoiesis [2,5]. In particular, the T-cell Acute Lymphocytic Leukemia 1 (TAL1) protein, also known as the Stem Cell Leukemia (SCL) protein, plays a central role in erythroid differentiation as well. The role of Tal1/Scl in primitive erythropoiesis has been demonstrated by the lethality of Tal1 −/− mice at E9.5 [30,31]. Studies using erythroid cell lines [32] or adult-stage conditional Tal1 gene knockout mice [33,34] have shown the requirement of Tal1 in definitive erythropoiesis. Another transcription factor known to play important roles in erythropoiesis is the zinc-finger DNA-binding protein GATA1, which recognizes the consensus binding box ((T/A)GATA(A/G)) [35][36][37]. The cooperative functioning of TAL1 and GATA-1 in the regulation of erythropoiesis is closely associated with their physical associations at thousands of genomic loci [38]. Interestingly, Eklf appears to be a downstream target gene of the TAL1 factor. Whole-genome ChIP-seq analysis has identified the binding of the TAL1 protein on the Eklf promoter in primary fetal liver erythroid cells [39]. Furthermore, in the Eklf gene promoter, the composite sequence GATA-E box-GATA exists, which is a potential binding site of the GATA1-TAL1 protein complex required for the expression of the Eklf gene in a transgenic mouse system [40].
In the study reported below, we combined a promoter-based ChIP-chip technique using a high-specificity anti-EKLF antibody and microarray-based gene expression profiling to provide a genome-wide overview of the genes targeted by EKLF in the E14.5 mouse fetal liver cells. Remarkably, Tal1 has turned out to be a direct target gene of EKLF, indicating the existence of a positive feedback loop between Eklf and Tal1 for the regulation of erythropoiesis in mammals.
Genome-Wide Identification of EKLF Target Genes by Microarray Hybridization and Promoter-Based ChIP-Chip Analyses
We first carried out a gene profiling analysis to identify the genes regulated by EKLF. The microarrays were hybridized with cDNAs derived from the E14.5 fetal liver RNAs of four wild-type (WT) and four Eklf knockout (KO or Eklf −/− ) embryos ( Figure 1A). Overall, there were 6975 genes with differential expressions levels between the WT and KO mice ( Figure 1B-D). Notably, the confidence index of the microarray hybridization analysis was approximately 70%, as the upregulation/downregulation of 8 out of 12 genes could be validated by semi-quantitation RT-PCR analysis (data not shown). We then performed a ChIP-chip analysis ( Figure 1C) using a promoter-based microarray and the high-specificity polyclonal anti-mouse EKLF antibody (anti-AEK) [19,26]; (Figure 1). The probes on the ChIP-chip array were grouped into 21,536 sequence IDs (SEQ_ID). The promoter of each annotated gene was defined as the region from −3.75 kb upstream to 0.75 kb downstream of the transcription start site (TSS). The SEQ_IDs with at least one significant peak were defined as the potential target binding sites of EKLF. Overall, enriched EKLF-binding was present in 5323 SEQ_IDs, corresponding to 4578 promoters ( Figure 1C,D). We also validated the ChIP-chip data by ChIP-qPCR; 9 of 13 promoters on the ChIP-chip list were bound with EKLF, as shown by this assay (data not shown).
The data from the ChIP-chip and the microarray gene profiling experiments were then combined to identify the putative EKLF target genes. After matching between the 11,549 differentially expressed probe sets from the microarray data and the 5323 significant SEQ_IDs from the ChIP-chip array data, 2391 SEQ_IDs (11.1%) from the ChIP-chip data and 3467 probe sets (7.7%) from the microarray hybridization data remained. In the end, the combination of the two data sets resulted in 2644 distinct genes ( Figure 1C and Table S1A). This gene list included only genes with an altered expression level in the Eklf −/− fetal liver at the p < 0.05 level, and with at least one statistically significant EKLF-binding site (p < 0.0017) in the promoter region, without considering the fold change of expression and enrichment of binding.
Upon filtering with the effect sizes of the ChIP-chip data (>0.25) and microarray profiling data (>0.5), the number of EKLF-bound and regulated targets was reduced from 2644 to 1866, among which 1156 were down-regulated and 710 were up-regulated in the E14.5 WT fetal liver ( Figure 1C,D, Table S1B,C). The above data support the scenario that the promoter-bound EKLFs could function as either a repressors or activators in vivo. Notably, the promoters bound with and regulated by EKLF were distributed throughout the mouse genome, with no obvious preference for any chromosome (Figure 2A,B).
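A minimal sketch of this intersection-and-filtering step is given below; it assumes hypothetical input files and column names (gene symbol, p-values, effect sizes) rather than the authors' actual pipeline, and uses Python/pandas only for illustration.

import pandas as pd

# Hypothetical inputs: one table of differential-expression results (WT vs Eklf-/-)
# and one table of promoter ChIP-chip enrichment results, both keyed by gene symbol.
expr = pd.read_csv("microarray_results.csv")   # columns: gene, expr_p, expr_effect
chip = pd.read_csv("chipchip_results.csv")     # columns: gene, chip_p, chip_effect

merged = expr.merge(chip, on="gene", how="inner")

# Thresholds mirror those quoted above: expression p < 0.05, binding p < 0.0017,
# absolute expression effect size > 0.5, ChIP effect size > 0.25.
targets = merged[
    (merged.expr_p < 0.05)
    & (merged.chip_p < 0.0017)
    & (merged.expr_effect.abs() > 0.5)
    & (merged.chip_effect > 0.25)
]

# Split by direction of the expression change; which sign corresponds to
# "down-regulated in WT" depends on how the WT-vs-knockout contrast is coded.
up = targets[targets.expr_effect > 0]
down = targets[targets.expr_effect < 0]
print(len(targets), len(up), len(down))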
Functional and Pathway Analysis of EKLF Target Genes
Previous reports by others using the Ingenuity Pathway analysis (IPA) and GeneGo MetaCore analysis platform showed that EKLF target genes were associated with a variety of cellular activities or pathways, including general cellular metabolism, cell maintenance, cell cycle control, DNA replication, general cell development, and the development of a hematologic system [28,41]. To gain further insight into the potential biological roles and functions of EKLF, we applied the IPA software for the analysis of the putative 1866 direct target genes of EKLF. The analysis identified the top five over-represented networks of the down-regulated EKLF targets and the up-regulated EKLF targets, respectively (Table S2). The significance of the relevant networks was strengthened with use of the higher cut-off score of 25 to ensure that reliable functional networks built by IPA were eligible (Tables S2 and S3A,B). Our network analysis confirmed the previously established association of the hematological system development/function with the up-regulated EKLF targets [28]. Notably, the top network associated with either the down-regulated EKLF targets or up-regulated EKLF targets was related to the metabolism and small molecule biochemistry (Table S2). Additionally, the significant networks/functions associated with the down-regulated EKLF targets were more broad than those with the up-regulated EKLF targets (Table S2). Overall, our analysis was consistent with previous studies [27,28], in that the promoter occupancy by EKLF also played important roles in the developmental processes, other than erythropeisis and hematological development. We also used IPA software to group the number of EKLF targets according to their respective biological functions, regulatory pathways, and physiological functions (Tables S1 and S4A,B). Of the top five molecular and cellular functions, cell death/survival, cellular assembly/organization, and cellular function/maintenance were overrepresented in both the upregulated and down-regulated EKLF targets. Further enrichment analysis of the canonical pathways using the IPA software revealed significant overrepresented pathways across the same two genes lists (Tables S3 and S5A,B). The prominent enrichment of the up-regulated EKLF targets was related to cell cycle control of the chromosomal replication, EIF2 signaling, mitochondrial dysfunction, hypusine biosynthesis, and tryptophan degradation III (Eukaryotic; Supplementary Figure S3). The enrichment of the down-regulated EKLF targets was related to insulin receptor signaling, chondroitin sulfate degradation (Metazoa), gap junction signaling, nitric oxide signaling in the cardiovascular system, and PDGF signaling (Supplementary Figure S4). Together, the above further established the specific functions and associated biological pathways associated with the EKLF target genes.
Identification of Potential Transcription Factors Co-Regulating the EKLF Targets
As co-occurrences of specific transcription factor-binding motifs in the promoters would suggest the cooperation of these factors in transcriptional regulation [42,43], we searched factor-binding motifs across the EKLF-bound promoters, as described in the Material and Methods. Specifically, the consensus transcription factor-binding motifs were ranked based on how often a particular motif occurred within the sequence ID. This observed frequency was applied to all the consensus transcription factor-binding motifs identified within the EKLF-bound regions on each sequence ID (Tables S4 and S6). As expected, the most abundant transcription factor-binding motif in the EKLF-bound and regulated promoters was the consensus EKLF-binding sequence CACCC, which was present a total of 2390 times in 2391 of the EKLF target sequence IDs, corresponding to 2143 times in 2644 distinct gene promoters. Consistent with Tallack et al. [27], the binding motifs of known transcription factors functionally interacting with EKLF, such as TAL1 and GATA1 [27], were also identified, which were present at least once in 2390 (2143 distinct gene promoters) and 2384 (2139 distinct gene promoters) of the EKLF target sequence IDs, respectively. In addition, the binding motifs of a number of other transcription factors possibly interacting with EKLF functionally, such as PEA3, LVa, H4TF-1, and XREbf were also identified in this way (Tables S4 and S6).
To investigate the functional cooperation between TAL1 and EKLF or between GATA1 and EKLF, we further analyzed the distance between the binding motifs of GATA1 or TAL1 and that of EKLF. Indeed, the distribution of the TAL1 binding motifs had the highest frequencies between +100 bp and −100 bp from the EKLF binding motifs, indicating a functional cooperation between TAL1 (or possibly Ldb1 complex) and EKLF. Moreover, this cooperation likely acts through the binding of TAL1 upstream of the EKLF protein ( Figure 2C). The distribution pattern of the GATA1 binding motifs also supported the cooperation between this factor and EKLF ( Figure 2D), although there was no obvious upstream/downstream preference between these two factors.
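The distance analysis sketched above can be mimicked with a few lines of code; the sketch below uses hypothetical motif coordinates (positions in bp relative to the TSS) rather than the actual ChIP-chip sequence IDs, and simply records, for each EKLF CACCC site, the signed offset of the nearest TAL1 E-box.

import numpy as np

def nearest_signed_offsets(eklf_sites, tal1_sites):
    """For each EKLF site, the signed offset (TAL1 position minus EKLF position)
    of the nearest TAL1 motif on the same promoter."""
    return [min((t - e for t in tal1_sites), key=abs) for e in eklf_sites]

# Toy example for a single promoter (positions in bp relative to the TSS).
eklf = [-185, -710]
tal1 = [-230, -640, 40]
offsets = nearest_signed_offsets(eklf, tal1)
hist, edges = np.histogram(offsets, bins=range(-500, 501, 100))
print(offsets, hist)
# Pooling such offsets over all bound promoters and histogramming them would
# reproduce the +/-100 bp co-occurrence peak described in the text.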
Likelihood of Tal1 Gene as a Regulatory Target of EKLF in E14.5 Fetal Liver Cells
Our motif analysis across the EKLF-bound promoters revealed that the binding motifs of the transcriptional factor TAL1 had the highest frequency of co-occupancy with the binding motifs of EKLF (Table S4). Interestingly, in the hematopoietic system, the transcription factor duet EKLF-GATA1 or TAL1-GATA1 served as part of the specific activation complex(es) in the erythroid cells [43][44][45], and both the Eklf gene [46,47] and the Tal1 gene [48,49] were activated by the GATA1 factor [28,29]. We thus further investigated whether there was also an epistatic relationship between Eklf and Tal1.
Gene expression profiling by microarray hybridization revealed a 2.5-fold (effect size = 1.3139) down-regulation of the Tal1 transcript in the E14.5 Eklf −/− fetal liver in comparison with the wild-type E14.5 fetal liver (Table S1A). These microarray data were validated by RT-qPCR. As shown, the level of Tal1 mRNA in the Eklf −/− fetal livers was decreased significantly, down to 47% of the level detected in the wild-type fetal liver (left histograph, Figure 3A). In parallel, the TAL1 protein level in the Eklf −/− fetal liver cells was also down-regulated by 70% when compared with the wild type (right panels and histograph, Figure 3A). Thus, not only could the TAL1 factor activate Eklf gene transcription [39,40], but the Tal1 gene might also be a regulatory target of EKLF. As Eklf −/− mice and Tal1 −/− mice both exhibited a deficit of erythroid-lineage cells after the stage of basophilic erythroblasts [33,46], we suspected that the promotion of erythroid terminal differentiation from pro-erythroblasts to basophilic erythroblasts very likely required EKLF-dependent activation of Tal1 gene transcription. (Figure 3 legend: gel band signals were normalized to actin; * p < 0.05 and *** p < 0.001 by Student's t test; error bars, SD.)
EKLF As an Activator of Tal1 Gene Expression during Erythroid Differentiation
To further examine whether EKLF was an activator of Tal1 gene transcription in erythroid cells, we first analyzed the expression level of Tal1 mRNA in cultured mouse erythroid leukemic (MEL) cells during DMSO induced erythroid differentiation. Similar to βmaj mRNA, the Tal1 mRNA was expressed in un-induced MEL at a basal level, which was increased by 2-3 fold upon DMSO differentiation (top, Figure 3B). Consistent with this, the protein level of TAL1 was also up-regulated during a 48 h period of DMSOinduced differentiation, but was down-regulated subsequently (bottom, Figure 3B). The upregulation of the Tal1 gene supported the scenario that a sustained higher expression of the Tal1 gene was required for erythroid differentiation. The biphasic expression profile of the TAL1 protein further suggested that the requirement of TAL1 for MEL cell differentiation was up to 48 h after DMSO-induction, which corresponded to the basophilic/polychromatic stages of erythroid differentiation.
We then analyzed the Tal1 mRNA levels in two independent MEL cell-derived stable clones, 4D7 and 2M12. As shown in Figure 3C, the knock-down of Eklf mRNA by the doxycycline-induced shRNAs led to a significant reduction in the Tal1 mRNA under the condition of DMSO-induced erythroid differentiation, but not in the un-induced MEL cells. The latter result further supported that EKLF was not part of the regulatory program of Tal1 gene transcription in MEL cells prior to their differentiation. The data of Figure 3C indicate that EKLF was required for the activation of Tal1 gene transcription during the DMSOinduced erythroid differentiation of MEL cells. Together with the loss-of-function of Eklf studied in the mouse fetal liver ( Figure 3A), we conclude that while TAL1 is a known activator of Eklf gene transcription, EKLF also positively regulates Tal1 gene transcription during erythroid differentiation from CFU-E/ pro-erythroblasts to the basophilic/polychromatic erythroid cells.
Binding In Vivo of EKLF to the Upstream Promoter of Tal1 Gene
How would EKLF activate the Tal1 gene transcription during erythroid differentiation? It could either directly activate the Tal1 gene through DNA-binding in the regulatory regions of the gene, e.g., its promoter or enhancer, or indirectly through other transcriptional cascades. In an interesting association with the above data of Tal1 expression in the presence and absence of EKLF, the ChIP-chip analysis identified two regions with significant reads of EKLF-binding, one of which (region I, 114,551,700-114,552,900 on chromosome 4, NCBI 36/mm8) was located around the Tal1 gene in the E14.5 fetal liver cells ( Figure 4A, Tables S2 and S1A). In the mouse erythroid cells, the Tal1 gene encodes a Tal1 mRNA isoform A ( Figure 4B) consisting of five exons, with the most upstream exon1 located at 115,056,426-115,056,469 (NCBI 36/mm10). However, no CCAAT box or TATA box or CACCC box could be found 300 bp upstream of this exon 1. Instead, we found these motifs in a region~860 bp upstream of isoform A exon 1 ( Figure 4B,C; see also sequence in Figure 5A). We thus suspected that exon 1 of the Tal1 gene might be longer than currently documented in the database. Alternatively, there might be another exon upstream of the exon 1 of isoform A. To solve the issue, we carried out a semi-quantitative RT-PCR analysis of the MEL cell RNAs using different sets of primers. As shown in the bottom panels of Figure 4B, the use of the forward primer PF-3 with any one of four different reverse primes (AR-1, AR-2, AR-3, and AR-4) would not generate a RT-PCR band on the gel. On the other hand, the use of the forward primer PF-1 or PF-2 together with the four reverse primers generated RT-PCR bands, the lengths of which were consistent with the existence of an exon (115,055,766-115,056,469, NCBI 36/mm10) consisting of the previously known isoform A exon 1 at its 3 region (diagram, Figure 4B). Based on these RT-PCR data and the common distance (25-27 bp) between the promoter TATA box and transcription start site(s) of the polymerase II-dependent genes, we suggest a map of the promoter region of Tal1 gene upstream of the newly identified exon1, which contains the TATA box at −28, two CCAAT boxes at −133 and −57, and three CACCC boxes (−788, −710, and −185) upstream of the transcription start site or TSS (Figures 4C and 5A).
To validate the in vivo binding of EKLF in the newly identified Tal1 promoter, we carried out a ChIP-qPCR analysis. As shown in Figure 4C, the use of four different sets of primers spanning regions upstream and downstream of the Tal1 transcription start site (TSS) indicated EKLF binding to region b, containing the distal CACCC boxes E1 at −788 and E2 at −710, and to region c, containing the proximal CACCC box E3 at −185.
Binding In Vivo of EKLF to the Proximal CACCC Box of Tal1 Promoter-Genomic Footprinting Analysis
In order to examine whether EKLF indeed bound to the proximal CACCC box of the Tal1 promoter in differentiated erythroid cells, we next carried out a genomic footprinting assay of the Tal1 promoter in MEL cells before and after DMSO induction (Figure 5). As shown, upon DMSO induction of the MEL cells, genomic footprints appeared at the distal CACCC box E1 and, more prominently, at the proximal CACCC box E3 at −185 (Figure 5). On the other hand, the distal CACCC box E2 at −710 was not protected in MEL cells with or without DMSO induction. Notably, the intensities of gel bands at −133, −132, −57, and −56 appeared to be enhanced upon DMSO induction, suggesting a binding of factor(s) at the two CCAAT boxes as well (Figure 5). These genomic footprinting data support the scenario that EKLF positively regulates the Tal1 promoter activity through binding mainly to the proximal promoter CACCC box E3, which would facilitate the recruitment of other factors, including the CCAAT box-binding protein(s), to the Tal1 promoter.
Requirement of the Proximal CACCC Motif for Transcriptional Activation of the Tal1 Promoter by EKLF
To investigate whether EKLF was indeed an activator of Tal1 gene transcription through binding to the proximal CACCC promoter box, we constructed a reporter plasmid pTal1-Luc, in which the Tal1 promoter region from −900 to −1 was cloned upstream of the luciferase reporter. Three mutant reporter plasmids, pTal1(Mut E1)-Luc, pTal1(Mut E2)-Luc, and pTal1(Mut E3)-Luc, were also constructed, in which the CACCC box E1, E2, or E3 was mutated (Figure 6A). Human 293T cells were then co-transfected with one of these four reporter plasmids plus the expression plasmid pFlag-EKLF. As shown in Figure 6B, the luciferase reporter activity in cells co-transfected with pTal1-Luc, pTal1(Mut E1)-Luc, or pTal1(Mut E2)-Luc increased in a Flag-EKLF dose-dependent manner. However, mutation of the E3 box in the reporter plasmid pTal1(Mut E3)-Luc abolished this increase. This result, in combination with the genomic footprinting data of Figure 5, demonstrates that the binding of EKLF to the proximal CACCC box E3, but not the distal E1 or E2 box, in differentiated erythroid cells is required for the transcriptional activation of the Tal1 promoter.
Discussion
A well-coordinated group of transcription factors regulates similar or distinct sets of target genes, which build up the diverse functional networks and biological pathways governing the process of erythropoiesis. Among these factors are GATA1, FOG1, FLI1, PU.1, TAL1/SCL, and EKLF [2][3][4][10]. Previously, global analyses by gene expression profiling with the use of microarrays suggested the potential target genes and genetic pathways that function in erythropoiesis, as regulated by GATA1, TAL1/SCL, and EKLF [17,18,29,36,39,49]. Later, ChIP analysis in combination with next-generation sequencing and microarray hybridization further provided lists of genes that could be regulated directly, through DNA-binding, by these factors [27,28,38,39,50]. EKLF is among the factors whose potential regulatory targets have been studied globally. In particular, two sets of ChIP-Seq analyses have each provided a set of direct target genes of EKLF in mouse fetal liver cells [27,28]. The change in EKLF binding to its potential gene targets during differentiation from erythroid progenitors to erythroblasts in the E13.5 fetal liver has also been analyzed [28]. However, these two studies have reported divergent data with respect to the identities of genes directly regulated by EKLF.
In this study, we analyzed the regulatory functions of EKLF in E14.5 mouse fetal liver cells through the combined use of genome-wide expression profiling and a promoter ChIP-chip assay. Unexpectedly, the number of direct gene targets (1866), as defined by the occupancy of EKLF within −3.75 kb to +0.75 kb relative to the TSS (1.2-fold enrichment) and a >1.4-fold change in expression levels upon depletion of Eklf in the gene-knockout mice, is significantly higher than those derived from Tallack et al. [27] and Pilon et al. [28]. As shown in Figure S5A, of the 1866 EKLF target genes that we identified, 257 genes (13.7%) overlapped with the data set from Tallack et al. [27], and 231 genes (12.3%) overlapped with the data set from Pilon et al. [28]. Furthermore, the number of overlapping genes between those two data sets was only 199. Moreover, among the direct targets identified in the three studies, only 55 (2.9% of 1866) were in common (Figure S5A). The inconsistencies among the three groups with respect to the direct target genes of EKLF likely resulted from the use of different antibodies, different approaches, different developmental stages of the embryos analyzed, different mouse strains, different cell types, and, finally, different peak-calling methods. Moreover, we used 1.4-fold as the cutoff, rather than the 2-fold chosen by the other two groups [17,28,29], when comparing the WT and Eklf −/− expression profiles. This lower cutoff may have allowed us to find more candidate targets that display subtle expression differences but have prominent functional significance. Notably, the use of a higher cutoff, i.e., 2-fold instead of 1.4-fold, led to a similar conclusion (Supplementary Figure S5B).
One surprising outcome of our genome-wide study is the existence of a positive feedback loop between the two well-known erythroid-enriched transcription factors, EKLF and TAL1, in early erythroid differentiation. By loss-of-function analysis, we show that EKLF also positively regulates the expression of Tal1 during erythroid differentiation ( Figure 3). In particular, the induced depletion of Eklf drastically lowers the expression level of Tal1 in DMSO-induced MEL cells ( Figure 3C). The combined data from the ChIP-chip, genomic footprinting, and transient reporter assays further indicate that EKLF activates the Tal1 gene transcription through binding to the proximal CACCC box in the newly identified Tal1 promoter (Figures 4 and 5). Consistent with this scenario of mutual activations of Tal1 and Eklf, the mRNAs of Tal1 and Eklf were both progressively up-regulated during erythroid differentiation of the primary mouse fetal liver cells [51]. Thus, our finding of the positive regulation of the Tal1 gene by EKLF demonstrates the existence of a Tal1-Eklf positive feedback loop that promotes the mammalian erythroid differentiation in a tightly regulated time window, from the transition of Pro-E to Baso-E of the erythroid lineage.
We propose the following scenario for the mutual activation of Tal1 and Eklf, as well as the functional consequences of this positive feedback loop during erythroid differentiation. In the erythroid lineage at the BFU-E/CFU-E/Pro-E stages, the two factors are already expressed at basal levels. EKLF is retained by FOE in the cytosol [26], while the TAL1 protein positively regulates the expression of Eklf [38,39]. When the cells enter the Baso-E stage, the EKLF protein is released from its physical interaction with FOE in the cytoplasm and is imported into the nucleus [26]. The imported EKLF binds to the E3 box of the Tal1 promoter to enhance the promoter activity of Tal1 (Figures 5 and 6). This positive feedback loop rapidly amplifies both factors during erythroid terminal differentiation. As a result, the EKLF-mediated activation of Tal1 may act as a valve that facilitates the commitment of the erythroid lineage from MEP through promoting the differentiation transition from Pro-E to Baso-E, thus sustaining the process after Baso-E. Furthermore, there is a high frequency of co-occupancy of EKLF and TAL1 in a number of promoters that are active in erythroid cell lines or erythroid tissues (Table S4) [27][28][29]. Thus, the Eklf/Tal1 loop would irreversibly promote erythroid terminal differentiation through the up-regulation of not only Eklf and Tal1, but also their mutual downstream targets that are crucial for erythroid differentiation. For the latter process, the EKLF and TAL1 proteins may work within the same transcriptional complex(es) that bind to the composite CACCC box-E box in the promoters of these downstream targets [45]. The positive feedback regulatory loop between EKLF and TAL1, as identified in this study, provides a mechanism ensuring the commitment to erythroid differentiation among the multiple lineages of the hematopoietic system.
Generation of Eklf −/− Mice
As described elsewhere [11], the generation of B6 mouse lines with homozygous knockout of the Eklf gene, Eklf −/− , was carried out in the Transgenic Core Facility (TCF) of IMB, Academia Sinica, following the standard protocols with the use of the BAC construct containing genetically engineered Eklf locus and E2A-Cre mice.
Gene Expression Profiling by Affymetrix Array Hybridization
The E14.5 mouse fetal livers from WT and Eklf −/− mouse fetuses were homogenized by repeated pipetting in phosphate-buffered saline (PBS) (10 mM phosphate, 0.15 M NaCl [pH 7.4]). The total RNAs were then isolated with Trizol reagent (Invitrogen, Carlsbad, CA, USA) and were subjected to genome-scale gene expression profiling using the Mouse Genome Array 430A 2.0 (Affymetrix, Inc., Santa Clara, CA, USA). The standard MAS5.0 method was applied to normalize the gene expression data, and the gene expression values were log-transformed for later comparative analysis. Statistical analysis was carried out using the R language, version 3.0.2 (R Development Core Team, 2013, http://www.R-project.org, accessed on 21 July 2021).
Identification of Differentially Expressed Genes
The genes with differential expression patterns between the E14.5 fetal livers of WT and Eklf −/− mice were first identified using a two-sided, two-sample t-test with the significance level set at 0.05. As the set of probes for each annotated gene should all exhibit the same direction of change when comparing the WT and Eklf −/− samples, this consistency check was used to remove 257 ambiguous genes from the gene list. After filtering by the p-value threshold, a subset containing 12,277 statistically significant probe sets was obtained.
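As a rough illustration of this procedure, the following Python sketch runs the probe-level two-sample t-test and the gene-level sign-consistency filter. It is a minimal sketch, not the study's analysis code (which was run in R on MAS5.0-normalized data): the expression matrix layout, the replicate column names, and the probe-to-gene mapping are hypothetical.

```python
# Minimal sketch of the probe-level t-test and the gene-level sign-consistency
# filter described above. Column names, replicate layout, and the probe-to-gene
# mapping are hypothetical; the published analysis was carried out in R.
import pandas as pd
from scipy import stats

def find_consistent_degs(expr: pd.DataFrame, probe2gene: pd.Series,
                         wt_cols, ko_cols, alpha=0.05):
    """expr: log-transformed expression values, rows = probes, columns = samples."""
    records = []
    for probe, row in expr.iterrows():
        wt, ko = row[wt_cols].values, row[ko_cols].values
        _, p = stats.ttest_ind(wt, ko)                 # two-sided two-sample t-test
        delta = ko.mean() - wt.mean()                  # direction of change (KO - WT)
        records.append((probe, probe2gene.get(probe, probe), delta, p))
    res = pd.DataFrame(records, columns=["probe", "gene", "delta", "p"])
    res = res[res["p"] < alpha]                        # keep significant probes only
    # Consistency check: every probe of a gene must change in the same direction.
    same_sign = res.groupby("gene")["delta"].transform(
        lambda d: bool((d > 0).all() or (d < 0).all()))
    return res[same_sign.astype(bool)]
```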
Identification of EKLF-Bound Targets by Using NimbleGen ChIP-Chip Array Hybridization
The E14.5 mouse fetal liver cells were cross-linked and sheared, and the EKLF-bound chromatin complexes were immunoprecipitated (ChIP) with the AEK antibody [20] and rabbit IgG, respectively. The DNA was then purified from the immunoprecipitated chromatin samples using a QIAquick PCR purification kit (Qiagen, Hilden, Germany) and amplified with the Sigma GenomePlex WGA kit for hybridization to the Roche NimbleGen Mouse ChIP-chip 385 K RefSeq promoter arrays.
There were 768,217 probes on the NimbleGen 385 K ChIP-chip array. These probes were grouped into 21,536 sequence IDs, each of which contained 5 to 320 probes that ranged from 49 bp to 74 bp in length. The distances between the probes in the same sequence ID ranged from 100 bp to 3700 bp. In general, these sequence IDs are located in the promoter regions of the genes, roughly from −3.75 kb to +0.75 kb, relative to the transcription start site (TSS). The sequence ID was assigned a gene name when the gene's coding sequence overlapped with the region from 10 kb upstream to 10 kb downstream of the sequence ID. In this way, 652 sequence IDs were found to be located in the intergenic regions and 20,884 sequence IDs were near the coding regions of the genes.
To identify the binding targets of EKLF, a moving window of size 5 was used to test the hypothesis of a positive mean value with a one-sided t-test, with the significance level set at 0.0017. This smaller cut-off value was chosen to account for multiple comparisons: with 35 probes in a sequence ID, the adjusted p-value was derived as (1 − (1 − 0.0017)^30) ≈ 0.05. The moving window was applied to each sequence ID separately. Finally, the results were summarized at the sequence ID level, and a sequence ID was defined as a target site of EKLF if it contained a significant peak.
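The sketch below illustrates the moving-window test on a single sequence ID, assuming a hypothetical array of probe-level log-ratios (ChIP over IgG); it is not the pipeline actually used, and the per-window one-sided p-value is derived manually from the two-sided statistic.

```python
# Sketch of the moving-window peak test described above: within one sequence ID,
# every window of 5 consecutive probe log-ratios is tested for a positive mean
# with a one-sided t-test at the per-window level of 0.0017. The input values
# (log2 ChIP/IgG per probe) are invented for illustration.
import numpy as np
from scipy import stats

def has_eklf_peak(probe_log_ratios, window=5, per_window_alpha=0.0017):
    x = np.asarray(probe_log_ratios, dtype=float)
    for start in range(len(x) - window + 1):
        w = x[start:start + window]
        t, p_two_sided = stats.ttest_1samp(w, popmean=0.0)
        p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
        if p_one_sided < per_window_alpha:
            return True                # one significant window marks the sequence ID
    return False

# Family-wise level over roughly 30 windows per sequence ID:
print(1 - (1 - 0.0017) ** 30)          # ~0.0497, i.e., close to 0.05
print(has_eklf_peak([0.1, 0.0, 0.2, 1.8, 2.1, 2.3, 1.9, 2.0, 0.1, 0.0]))
```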
Matching between Affymetrix Probes and NimbleGen ChIP-Chip Probes
The E14.5 fetal liver gene expression data obtained from the Affymetrix array hybridization analysis allowed us to further reduce the false positives from the ChIP-chip dataset. To do this, the annotation strategy used for the sequence IDs in the ChIP-chip array was adopted to match the probes from the two platforms by gene symbols. After this procedure, there were a total of 78,634 matched pairs between the ChIP-chip sequence IDs and Affymetrix probes.
Co-Occurrence of Binding Motifs and Relative Distance Distribution
First, 226 known transcription factor-binding motifs were extracted from a previous report [28]. For each of these binding motifs, the number of sequences in the mouse genome bearing the motif was counted and sorted. The binding motifs most frequently co-existing with the EKLF motif were then further investigated. The relative distance between a co-existing motif and the EKLF-binding motif was calculated for each sequence ID. When multiple binding motifs existed in the same sequence ID, multiple distances were generated; in that case, the shortest distance was selected as the representative distance for that sequence ID. The relative distance distribution was then plotted to inspect potential localization biases. The existence of a localization bias provided a further indication that two transcription factors might interact in a certain way to regulate the particular target gene(s).
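A minimal sketch of the representative-distance rule is given below; the motif positions are invented for illustration, and signed distances are kept so that upstream and downstream placements can be distinguished in the plotted distribution.

```python
# Sketch of the representative-distance rule: for one sequence ID, take every
# signed distance between a co-existing motif hit and an EKLF motif hit and
# keep the one with the smallest absolute value. Positions are invented.
def representative_distance(eklf_positions, other_positions):
    if not eklf_positions or not other_positions:
        return None                                    # motif pair absent in this sequence ID
    distances = [o - e for e in eklf_positions for o in other_positions]
    return min(distances, key=abs)                     # shortest (signed) distance

print(representative_distance([120, 480], [95, 600]))  # -> -25 (25 bp upstream of the EKLF site)
```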
Functional Enrichment Analysis
An analysis was carried out with the use of IPA (Ingenuity® Systems, www.ingenuity.com, accessed on 21 July 2021) to identify the genes significantly associated with specific biological functions and/or diseases in the Ingenuity Knowledge Base. A right-tailed Fisher's exact test was used to calculate the p-value, determining the probability that each biological function and/or disease assigned to the data set was due to chance alone. The list of genes with significant EKLF-binding enrichment that were also differentially expressed between WT and KO mouse fetal livers was imported into IPA. The up-regulated and down-regulated EKLF targets were first mapped to the functional networks available in the IPA database, and then ranked by scores computed with the right-tailed Fisher's exact test mentioned above. As listed in Table S3A,B, this analysis identified significantly over-represented molecular and cellular functions (p value < 0.05) associated with the imported up-regulated and down-regulated EKLF targets that were eligible (score > 25), with significance scores of 26 and 34, respectively (Figures S1 and S2).
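The following sketch shows the form of the right-tailed Fisher's exact test underlying such an enrichment score; the 2 × 2 counts are invented for illustration and do not correspond to any category in Table S3.

```python
# Sketch of the right-tailed (one-sided) Fisher's exact test behind the
# enrichment p-values: the 2x2 table counts EKLF targets versus background
# genes inside and outside one functional category. Counts are invented.
from scipy.stats import fisher_exact

table = [[40, 1826],      # EKLF targets: in category, not in category
         [120, 18000]]    # background genes: in category, not in category

odds_ratio, p_right_tailed = fisher_exact(table, alternative="greater")
print(round(odds_ratio, 2), p_right_tailed)
```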
ChIP-qPCR
The ChIP-qPCR analysis followed the procedures of Daftari et al. [52]. The sonicated cell extracts from formaldehyde cross-linked E14.5 mouse fetal liver cells were immunoprecipitated with anti-EKLF and purified rabbit IgG, respectively. The precipitated chromatin DNA was purified and analyzed by quantitative PCR (qPCR) on a Roche LightCycler Nano real-time system. The sequences of the primers used for qPCR, designed by our lab, are listed in Table S7. Each target gene was amplified with one set of primers flanking the putative EKLF-binding CACCC motif(s) and two sets of nonspecific primers bracketing the regions located upstream and downstream of the CACCC motif(s), respectively.
Plasmid Construction
Mouse Eklf cDNA was derived by RT-PCR of RNA from DMSO-induced MEL cells and was cloned into the vector pCMV-Flag (Invitrogen), resulting in pFlag-EKLF. Plasmids for the luciferase reporter assay were constructed in the following way: the Tal1 promoter region from −1 to −900, relative to the transcription start site of the newly identified Tal1 exon 1, was amplified by PCR of mouse genomic DNA, with the addition of an XhoI site at the 5′ end and a HindIII site at the 3′ end, and cloned into the XhoI and HindIII sites of the psiCHECK™-2 Vector (Promega, Madison, WI, USA), resulting in the plasmid pTal1-Luc. Tal1 promoter DNA fragments with the putative EKLF-binding CACCC box(es) mutated were generated by fusion PCR, using the endogenous Tal1 promoter as the template. The sequences of the three mutated CACCC boxes and their flanking regions in these fragments were: E1 box, 5′-CAGGCAAAACCAGGGACCAcatatTTAAAAATGATTCCCCTTCTCAAG-3′; E2 box, 5′-CAATAGCTCTTCAGTTAGCGGTGAAGGCTCATGAAcatatCCAC-3′; and E3 box, 5′-GAGTTATTGACACAGCCCTGTcatatCCTCCCCCCACTG-3′. The inserts of all of the plasmids were verified by DNA sequencing before use.
Cell Culture, Differentiation, DNA Transfection, and Knockdown of Gene Expression
The murine erythroleukemia cell line (MEL) was cultured in Dulbecco's modified Eagle medium containing 20% fetal bovine serum (Gibco, Carlsbad, CA, USA), 50 units/mL of penicillin, and 50 µg/mL of streptomycin (Invitrogen). For the induction of differentiation, the cells at a density of 5 × 10 5 /mL were supplemented with 2% dimethyl sulfoxide (DMSO; Merck) and the culturing was continued for another 24 to 72 h. DNA transfection of the MEL cells and K562 cells was carried out using the TurboFect transfection reagent (Thermo Scientific, Waltham, MA, USA) and Lipofectamine ® 2000 transfection reagent (Life Technologies, Carlsbad, CA, USA) respectively.
For knockdown of Eklf gene expression, the MEL cell line-derived clones 4D7 and 2M12 [7] were maintained in 20 µg/mL of blasticidin (Invitrogen) and 1 mg/mL of G418 (Gibco). Differentiation of the 4D7 and 2M12 cells was induced with 2% dimethyl sulfoxide (DMSO; Merck) for 48 h. Expression of the shRNA targeting Eklf, and thus the knockdown of Eklf, was induced by the addition of 2 µg/mL of doxycycline (Clontech, Kusatsu, Japan) for 96 h, as described in Bouilloux et al. [7].
RNA Analysis
The total RNA from the MEL cells and fetal liver suspension cells was extracted with the TRIzol reagent (Invitrogen). cDNAs were synthesized using SuperScript II Reverse Transcriptase (RT) (Invitrogen) and an oligo-dT primer (Invitrogen). Taq DNA polymerase was used for semi-quantitative RT-PCR analysis of the cDNAs. Quantitative real-time PCR (qPCR) analysis of the cDNAs was carried out using the LightCycler® 480 SYBR Green I Master (Roche Life Science, Penzberg, Germany), and the products were detected by a Roche LightCycler LC480 Real-Time PCR instrument. The primers used for the qPCR analysis were designed following previous reports or taken from the online database PrimerBank (http://pga.mgh.harvard.edu/primerbank, accessed on 21 July 2021). The primers used for validating the microarray data and for Tal1 exon 1 identification by RT-PCR were designed by our lab. The sequences of the DNA primers used in semi-quantitative RT-PCR and real-time RT-qPCR are available upon request.
How accurate is visual estimation of perioperative blood loss in adolescent idiopathic scoliosis surgery?
Objective The aim of this study was to assess whether the visual estimation method for perioperative blood loss is accurate in adolescent idiopathic scoliosis surgery. Methods Sixty-five consecutive patients, who were operated on from 2012 to 2015 and had a diagnosis of AIS, were included into the study. Gender, age, preoperative weight and height, preoperative major curve magnitude and T5‒T12 kyphosis angles, the fusion level, and the time of surgery were recorded. Perioperative blood loss was estimated by the same anesthesiologist for all patients. Then, an experienced surgeon estimated the perioperative blood loss by a gravimetric method, and the results were compared. Results Seventeen (26.2%) of the patients were male and 48 (73.8%) were female. The mean age was 15.8 ± 1.9. The mean height of the patients was 162.1 ± 8.9 cm and the mean weight was 52.6 ± 8.9 kg. The mean preoperative major curve magnitude and kyphosis angles were 49.5 ± 9.2 and 47.1 ± 12.7 respectively. The mean estimate of the surgeon was 1009 ± 404.5 ml and the mean estimate of the anesthesiologist was 434 ± 217.6 ml and the difference was statistically significant (p < 0.05). Moreover, if blood loss was high during the operation, the difference between the estimates of the surgeon and anesthesiologist was also higher. Conclusions Even in operations where most of the blood goes into a suction canister, such as for AIS, a visual estimation method is not accurate. A short training regarding optimizing the amount of blood contained in sponges that are not fully soaked may be sufficient to improve this method.
Introduction
Adolescent idiopathic scoliosis (AIS) is the most common type of spinal deformity. It affects individuals between 10 and 20 years of age, and multilevel posterior instrumentation and fusion is the primary surgical option for correction of the deformity. 1 Although AIS surgery is associated with less blood loss than other types of scoliosis surgery, the mean blood loss can reach 1500 ml; this lost blood should be replaced to an adequate level. 2 Underestimation may lead to inadequate fluid and blood replenishment, which may be associated with shock, organ damage, and impaired tissue oxygenation. 3,4 Meanwhile, overestimation may lead to an unnecessary transfusion and, as a result, increased complications and mortality. 5,6 Thus, for adequate replacement of blood loss, a reliable estimation of perioperative blood loss (PBL) is essential.
Although there are several methods for estimating PBL, all have limitations and estimating PBL remains a challenge. It may be especially difficult for long-duration operations and when much bleeding is expected, as in scoliosis surgery. The most commonly used method, as at our institution, is visual estimation by anesthesiologists, although several studies have shown its inadequacies. 7,8 In this method, the anesthesiologist estimates PBL by visually examining blood collected in suction canisters, surgical sponges, drapes, towels, and on other surfaces; it has been reported that large losses are typically underestimated, while smaller losses tend to be overestimated. 9
Other methods, like gravimetric techniques and photometry, are not used widely. Although they are more objective, they are not always practical and are also time-consuming. As a result, there is no reliable and routinely applicable method for estimating PBL.
One reason for inaccurate estimates using the visual estimation method (VEM) is the difficulty of determining the exact amount of blood that is not in the suction canister but instead in sponges, or on drapes and other surfaces. 10 Most of the studies in the literature about the inaccuracy of VEM relate to obstetric operations, which typically involve bleeding out of the surgical site onto sponges and drapes.
We hypothesized that VEM would be accurate for AIS surgeries. Because of the characteristics of the operation site in scoliosis surgery, which is deep and has a distinct border, like a pool, only a small amount of blood leaks away, and most of the blood is suctioned into the canister. Thus, this should result in an accurate estimation. In summary, this study was designed to assess whether estimates of PBL were reliable in spinal fusion surgery when compared with the more objective gravimetric method.
Materials and methods
This was a prospective clinical study approved by the Institutional Ethics Committee/Review Board. Sixty-five consecutive patients, who were operated on from 2012 to 2015 and were diagnosed with AIS between the ages of 10 and 20 years, were included in the study. The surgical indication was a deformity with a Cobb angle >40°. Patients with abnormal preoperative laboratory findings, a history of spinal surgery, or congenital anomalies on preoperative spinal magnetic resonance imaging (MRI) were excluded. Patients' gender, age, and preoperative weight and height were recorded. Preoperative major curve magnitude and T5-T12 kyphosis angles were measured according to the Cobb method. The fusion level and time of surgery were also recorded. PBL was estimated by the same anesthesiologist for all patients. Then, an experienced surgeon estimated the PBL independently by a gravimetric method and the estimates were compared.
Posterior instrumentation and fusion was performed for all patients. All of the surgeries were performed by the same senior spine surgeon from beginning to end. Patients were placed in the prone position on a radiolucent table. After a standard midline incision, subperiosteal dissection of the posterior soft tissue was carried out, to the tips of the transverse processes, by electrocautery. Surgicel, padding, bone wax, and electrocautery were used to maintain hemostasis as required. Pedicle screws were placed bilaterally and parallel at each level using a free-hand technique. The posterior release was performed with partial facetectomies at all instrumented levels using an osteotome and hammer. There was no need to perform a major osteotomy in any patient. Titanium rods, 6.0 mm in diameter, were contoured to correct the deformities. The rods were attached to the screws, initially at the top of the construct, bilaterally. Deformities were corrected using a direct derotation technique. Then, fluoroscopic control of the coronal and sagittal alignment was performed, and compression, distraction, and in situ bending maneuvers were added if necessary. The laminae and transverse processes were thoroughly decorticated with a rongeur to facilitate fusion. Allograft bone material was used for fusion. Two hemovac drains were used without activation and were removed on the second day after surgery. All patients began ambulation within the first day after surgery. Stressful activities were avoided for at least 2 months after surgery.
The anesthesiologist was informed about the study and estimated the PBL clinically, i.e., by VEM, and maintained normovolemia by replacing the lost blood with appropriate crystalloids, colloids, or blood products. To estimate the blood loss, the anesthesiologist multiplied the number of blood-soaked gauze pieces by 20 cc and the number of mopping pads by 100 cc, and then added the blood estimated to be in the suction bottle and around the surgical area. The surgeon estimated the blood loss by weighing all of the soaked gauze pieces and mopping pads postoperatively with a sensitive balance, and adding that to the amount of the blood and irrigation solution mixture in the suction bottle. Then, the total dry weight of the items and the weight of the irrigation solution in the suction canister (which was recorded by a nurse) were subtracted from the total stained weight of the items plus the canister mixture, and the difference in weight was noted. This method is called the gravimetric method. As a result, the blood loss estimated by the anesthesiologist and the surgeon could be compared.
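The arithmetic behind the two estimates can be summarized as in the sketch below. The item counts and weights are hypothetical, and the gravimetric conversion assumes roughly 1 g of blood per 1 ml, which is a simplification rather than the exact procedure used in the operating room.

```python
# Numeric sketch of the two estimates compared in this study. The visual
# estimate sums fixed per-item volumes plus the reading of the canister and
# surgical field; the gravimetric estimate subtracts the dry weights and the
# irrigation volume from the total stained weight, assuming ~1 g of blood per
# 1 ml. All counts and weights below are hypothetical.
def visual_estimate_ml(n_gauze, n_pads, canister_and_field_ml):
    return n_gauze * 20 + n_pads * 100 + canister_and_field_ml

def gravimetric_estimate_ml(stained_weight_g, dry_weight_g, irrigation_ml,
                            blood_density_g_per_ml=1.0):
    return (stained_weight_g - dry_weight_g - irrigation_ml) / blood_density_g_per_ml

print(visual_estimate_ml(n_gauze=15, n_pads=3, canister_and_field_ml=250))      # 850
print(gravimetric_estimate_ml(stained_weight_g=2100, dry_weight_g=400,
                              irrigation_ml=700))                               # 1000.0
```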
Descriptive statistics were used to describe continuous variables. Spearman's rho correlation analysis was used to analyze the relationship between two continuous variables with non-normal distributions, and Pearson correlation analysis was used for two continuous variables with normal distributions. Student's t-test was used to compare two independent and normally distributed variables, and the Mann–Whitney U test was used to compare two independent variables with non-normal distributions. Statistical significance was set at p < 0.05. Analyses were performed using MedCalc software (ver. 12.7.7; MedCalc Software bvba, Ostend, Belgium).
Regarding the surgeon's estimates, preoperative major curve magnitude was the only variable that correlated with blood loss. Preoperative kyphosis magnitude, patient height and weight, total operative time, and fusion levels showed no correlation with blood loss. Regarding the anesthesiologist's estimates, in contrast to those of the surgeon, the fusion level was the only variable that correlated with blood loss, while total operative time, magnitude of Cobb angle, preoperative magnitude of kyphosis, and patient height and weight showed no association with blood loss. Age and gender also showed no association with the amount of blood loss in either the surgeon or anesthesiologist's data. There were only three operations in which the anesthesiologist estimated greater blood loss than did the surgeons. Moreover, if blood loss was high during the operation, the difference between the estimates of the surgeon and the anesthesiologist was greater.
Discussion
Estimating PBL is an important issue for both anesthesiologists and surgeons, so as to be able to replace lost blood adequately, and there is often disagreement between surgeons and anesthesiologists about the degree of blood loss. Especially in long operations and operations with significant bleeding, estimating blood loss may be a challenge. The most widely used method is visual estimation by anesthesiologists, but its reliability is controversial. 7,8 However, there are several other methods that, although being more objective and better able to measure the exact amount of blood loss, are not always practical; they are also time-consuming and thus are not used routinely. While it is not particularly accurate, VEM remains the most widely used method for PBL estimation.
Because of the frequency of potentially fatal complications related to PBL, most studies on the relevance of VEM concern obstetric operations. Moreover, hemorrhage continues to be a leading cause of maternal mortality in the United States, and perioperative blood replacement is important. 11 To achieve that, accurate estimation of blood loss is needed. According to Prasertcharoensuk et al, postpartum hemorrhage was underestimated by visual estimation versus direct measurements. 12 In contrast, Razvi et al reported that estimated blood loss was 20% greater than measured blood loss after vaginal births. 13 Moreover, Brant et al suggested that when the actual amount of blood loss increased, the incidence of underestimation also increased. 14 The differences in these studies are understandable, because most of the blood goes into sponges, and the most important component of VEM is estimation of the amount of blood in sponges and towels. Guidelines for these estimations state that each fully soaked 10.16 cm (4 in) × 10.16 cm surgical sponge holds ~10 ml of blood, and each fully soaked 30.48 cm (12 in) × 30.48 cm gauze holds ~100–150 ml of blood. 15 If these items are only partially saturated, the anesthesiologist must estimate how much blood they contain. Furthermore, the surgeon typically uses irrigation solutions that dilute the blood and increase the volume of liquid in the items. For that reason, if most of the blood is outside the suction canister, the measurement will be more subjective, whereas if most of the blood goes into the suction canister, rather than the sponges, more appropriate estimations may result.
Estimation of PBL in spinal surgery is an important issue, as in other surgeries. There are studies in the literature on preoperative estimation and minimization of blood loss during spinal surgery, but if the perioperative measurement is not accurate, adequate replacement of lost blood will be impossible. Thus, an accurate method is needed to evaluate blood loss. Mooney et al conducted a study on the validity of estimated PBL during pediatric spinal surgery. 8 They compared the blood loss estimates of surgeons and anesthesia providers. Anesthesia providers made estimates by VEM, while the surgeons' estimates were based on the volume of blood in products processed by a Cell-Saver device. Their results also suggested the inaccuracy of VEM. However, congenital and neuromuscular scoliosis patients were included in their study. These patients mostly required osteotomies for correction of abnormalities, where PBL would be expected to be higher than for AIS patients. There was also a patient who needed both anterior and posterior approaches. All of these factors increase PBL and lead to inaccuracy in VEM. In AIS surgery, the posterior approach alone is typically sufficient, and osteotomies are rarely needed. Thus, nearly all of the blood is lost from the paravertebral muscle during the surgical approach, and then from the corpus while opening the screw hole. As a result, most of the blood goes into the suction canister as soon as it leaves the body. Moreover, if it accumulates at the surgical site, because of the pool-like characteristics of the paravertebral region, it mostly goes into the suction canister thereafter. Thus, in contrast to other reports about the inconsistencies of the visual estimation method, it may be accurate for AIS surgeries.
However, the results of our study suggest that VEM is not accurate even for AIS surgery. The anesthesiologist's blood loss estimates were approximately half as large as those of the surgeon. We believe that the underestimation resulted from the high number of sponges that were not fully soaked. The senior surgeon in this study used a given sponge only once, after which, if needed, another was used; thus, there were many non-fully soaked sponges after the operation. As mentioned above, non-fully soaked sponges are an important factor in the subjectivity of this method. Estimation of the amount of blood in a non-fully soaked sponge depends on the anesthesiologist; although the anesthesiologist in this study was experienced, estimates may differ between anesthesiologists. Indeed, it has been demonstrated that the accuracy of PBL estimation is unrelated to seniority or experience. 16–20 Zuckerwise et al reported that it was possible to improve PBL estimates using visual cards. 21 Dildy III et al also suggested the possibility of improving PBL measurements with a 20-min presentation that focused on estimating the amount of blood in non-fully soaked items. 22 In conclusion, we suggest that, even in operations where most of the blood goes into the suction canister, like AIS surgeries, VEM is not accurate. A training session on estimating the amount of blood contained in sponges that are not fully soaked may be sufficient to improve this method.
Funding
No funding was received for this work from any agency.
Toxicity Study and Quantitative Evaluation of Polyethylene Microplastics in ICR Mice
The production, use, and waste of plastics have increased worldwide, resulting in environmental pollution and a growing public health problem. In particular, microplastics have the potential to accumulate in humans and mammals through the food chain. However, the toxicity of microplastics is not well understood. In this study, we investigated the toxicity of 10–50 μm polyethylene microplastics following single and 28-day repeated oral administration (three doses of 500, 1000, and 2000 mg/kg/day) in ICR mice. For the investigation, the microplastics were administered orally as single doses and as repeated doses for 28 days. Histological and clinical pathology evaluations of the rodents were then performed to assess toxicity, and Raman spectroscopy was used to directly confirm the presence of polyethylene microplastics. In the single oral dose toxicity experiments, there were no changes in body weight or necropsy findings in the microplastics-treated groups compared with controls. However, histopathological evaluation revealed foreign-body inflammation in lung tissue from the 28-day repeated oral dose toxicity groups. Moreover, polyethylene microplastics were detected in the lung, stomach, duodenum, ileum, and serum by Raman spectroscopy. Our results corroborated the findings of lung inflammation after repeated oral administration of polyethylene microplastics. This study provides evidence of microplastic-induced toxicity following repeated exposure in mice.
Introduction
The production and use of plastics increased worldwide [1,2]. Approximately 6.6 billion tons of plastic waste was generated globally from 1950 to 2015 [3]. Some of these plastics are recycled but the remainder are dumped into the ocean and landfills, acting as an environmental pollutant. Plastics dumped into the ocean continue to accumulate and spread widely because they are not readily degradable, and hence, are considered an environmental problem. Plastics with a size of less than 5 mm are referred to as microplastics and are classified into primary and secondary microplastics. Primary microplastics are industrially produced and secondary microplastics are small fragments formed when plastics are crushed after exposure to environmental conditions [4]. To date, studies on
Raman Spectroscopy
Polyethylene microplastics were analyzed using a Raman microscope (RAMANtouch, Nanophoton, Japan) equipped with a laser diode (785 nm). After navigating the morphology of the microplastics with a 20× objective lens (Nikon LU Plan Fluor 20×/0.45), Raman spectra were collected in the 160–3000 cm−1 range using a 300 lines per mm grating with a 50 µm slit width. The spectrum was measured over a 16-bit dynamic range with Peltier-cooled charge-coupled device (CCD) detectors. The acquisition time and number of accumulations were adjusted for each scan to obtain sufficient signal for a library search. The spectrometer was calibrated with silicon at the 520.7 cm−1 line prior to spectral acquisition. Raw Raman spectra underwent noise reduction by polynomial baseline correction and vector normalization to improve spectral quality (LabSpec 6 software, Horiba Scientific). The Raman spectra were compared with the SLOPP Library of Microplastics and the spectral library of KnowItAll software (Bio-Rad Laboratories, Inc., Hercules, CA, USA). Matches with a Hit Quality Index above 80 were considered satisfactory.
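A simplified sketch of this pre-processing and matching step is shown below. The polynomial baseline fit, vector normalization, and the correlation-based Hit Quality Index are illustrative approximations; the commercial software's exact baseline algorithm and HQI definition may differ.

```python
# Illustrative sketch of the spectral pre-processing and library matching:
# polynomial baseline subtraction, vector normalization, and a Hit Quality
# Index approximated here as 100 x the squared correlation with a reference
# spectrum. The commercial software's exact algorithms may differ.
import numpy as np

def preprocess(shifts, intensities, baseline_degree=3):
    shifts = np.asarray(shifts, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    baseline = np.polyval(np.polyfit(shifts, intensities, baseline_degree), shifts)
    corrected = intensities - baseline                 # crude baseline removal
    return corrected / np.linalg.norm(corrected)       # vector normalization

def hit_quality_index(query, reference):
    r = np.corrcoef(query, reference)[0, 1]
    return 100.0 * r ** 2                              # match accepted when >= 80
```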
Animal Treatment and Experimental Conditions
Five-week-old male and female ICR mice (68 per gender; KOATECH Inc., Pyeongtaek, Gyunggi-do, Korea) were acclimatized for 1 week. The animals were then divided into groups for a single-dose toxicity study (12 animals per gender, 3 animals per group), a 28-day repeated-dose toxicity study (40 animals per gender, 10 animals per group), and a quantitative evaluation test (16 animals per gender, 4 animals per group). During the experimental period, the animals were maintained in ventilated IVC cages (395 W × 346 D × 213 H) at a temperature of 22 ± 1 °C, relative humidity of 50 ± 10%, a ventilation time of 10–15 h, light for 12 h per day, and an illumination of 150–300 lux. For the single-dose toxicity study, animals in the control group were administered corn oil (Daijung Chemicals Inc, Daejeon, Korea), whereas animals in the test groups received polyethylene microplastics in corn oil orally at doses of 500, 1000, and 2000 mg/kg (low-, middle-, and high-dose groups) at a dosing volume of 10 mL/kg. For the repeated-dose toxicity study and the quantitative evaluation test, each group of animals was treated once a day for 28 days in the same manner as the single-dose toxicity study. The two toxicity tests were conducted based on OECD guidelines (408, 423) and the Korea Food and Drug Administration's Toxic Test Standards Guide (No. 2017-71).
Clinical Observations
Animal observation, the presence of moribund or dead animals, and the measurement of animal weight were conducted once a day, twice a day, and once a week, respectively, for the single-dose and 28-day repeated-dose toxicity studies. Additionally, food and drinking water consumption were measured once a week for the four-week repeated-dose toxicity study.
Necropsy
At the end of the two-week observation period of the single-dose toxicity study, all animals were anesthetized with CO 2 and exsanguinated through the abdominal aorta. Complete gross postmortem examination was performed on all of the animals. For the 28-day repeated-dose toxicity study, blood from all of the animals was collected from the abdominal aorta under isoflurane (Hana Pharm, Co., Ltd., Seoul, Korea) anesthesia. A complete gross postmortem examination was performed on all animals and tissues (adrenal gland, brain, cecum, colon, duodenum, epididymis, esophagus, heart, ileum, jejunum, kidney, liver, lungs, ovary, pancreas, parathyroid gland, pituitary gland, rectum, spinal cord, spleen, stomach, testis, thymus, thyroid gland, trachea, and uterus) were harvested from male and female mice. After organ extraction, organ weight was measured for the brain, spleen, heart, kidney, liver, testis, epididymis, and ovary. In the quantitative evaluation test, blood was collected from the abdominal aorta under isoflurane anesthesia and tissues (heart, lungs, spleen, liver, kidney, stomach, duodenum, and ileum) were harvested and the organs were weighed.
Clinical Pathology Analysis
Blood samples collected in the 28-day repeated-dose toxicity study were analyzed using a blood cell analyzer (ADVIA 2120i, SIEMENS, Muenchen, Germany) and a serum biochemistry analyzer (TBA 120-FR; Toshiba, JP).
Histopathological Analysis
Tissues harvested from animals in the 28-day repeated-dose toxicity study were fixed in 10% neutral buffered formalin (BBC Biochemicals, Mount Vernon, WA, USA), except for the testes, which were fixed in Davidson's fixative followed by storage in 10% neutral buffered formalin. For histopathological evaluation, a tissue processor (Thermo Fisher Scientific, Inc., Runcorn, UK) was used to prepare the organs and tissues from the formalin-fixed samples for analysis by fixing, staining, and dehydrating. The paraffin-embedded tissue blocks were cut at a 4-µm thickness and mounted onto glass slides. Staining was performed with hematoxylin and eosin using an autostainer (Dako Coverstainer; Agilent, Santa Clara, CA, USA). The histopathological evaluation of all samples from all animals was conducted in a blinded manner.
Quantitative Evaluation of Polyethylene Microplastics in Blood and Tissues
To quantitatively evaluate the number of polyethylene microplastics in biological samples, the serum and organs were pretreated. After pooling the serum and organ samples, a 10 wt% aqueous KOH solution (20 times the sample weight) was added. The pooled samples were homogenized and then incubated at 37 °C for 48 h with shaking at 250 rpm. The samples lysed in KOH solution were filtered stepwise using a stainless-steel filter (47 mm disc, 45 µm pore size) and a silicon filter (1 cm × 1 cm, 1 µm pore size) provided by Nanophoton. The number of PE microplastics retained on the silicon filter was counted using the Raman microscope as described above. Briefly, PE microplastics in the biological samples were scanned in the automated Raman point-by-point mapping mode in both the x and y directions over an area of 500 × 375 µm². The total number of frames per filtered biological sample was about 534, and the number of PE microplastics per frame was counted automatically.
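The frame-by-frame tally can be summarized as in the sketch below; the per-frame particle counts are invented, and only the frame area (500 × 375 µm²) and the approximate number of frames (534) are taken from the protocol above.

```python
# Sketch of the automated point-mapping tally: each filtered sample is scanned
# frame by frame (frame area 500 x 375 um^2, about 534 frames per sample) and
# the particles identified as polyethylene are summed. Per-frame counts are
# invented for illustration.
FRAME_AREA_UM2 = 500 * 375

def summarize(frame_counts):
    return {"particles": sum(frame_counts),
            "frames": len(frame_counts),
            "scanned_area_mm2": round(len(frame_counts) * FRAME_AREA_UM2 / 1e6, 1)}

lung_frames = [0] * 530 + [3, 2, 2, 1]    # hypothetical: 8 particles across 534 frames
print(summarize(lung_frames))             # {'particles': 8, 'frames': 534, 'scanned_area_mm2': 100.1}
```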
Statistical Analysis
The hematology, serum biochemistry, and body and organ weight data are presented as the mean ± standard deviation (SD). The statistical significance of the differences between the treated groups and the control group was evaluated by Student's t-test and one-way analysis of variance using the SAS program (version 9.4, SAS Institute Inc., Cary, NC, USA).
Characterization of Polyethylene Microplastics
According to PSA analysis, the average size of the polyethylene microplastics was 27.0 ± 10.9 µm (Figure 1a). Confocal analysis revealed that the surface of the microplastics was irregular (Figure 1b). After filtering polyethylene dispersed in ethanol, the representative Raman spectrum obtained from the filtered microparticles was identified as polyethylene based on the peaks observed in the region of 1000 cm−1 to 1600 cm−1, presenting C-C symmetric and asymmetric stretch peaks at 1063 cm−1 and 1130 cm−1, respectively (Figure 1c). In addition, the methyl CH2 groups in polyethylene are further confirmed by peaks in the region of 2600 cm−1 to 3000 cm−1, attributed to the CH2 and CH3 stretching modes [35].
Single Oral Dose Toxicity Study of the Polyethylene Microplastics
During the two-week observation period following a single oral administration of polyethylene microplastics, no specific clinical signs or significant changes in weight were observed in males or females (Figure 2a,b). At the end of the observation period, at necropsy, no changes from the administration of polyethylene microplastics were observed. Therefore, the lethal dose of polyethylene microplastics was determined to be more than 2000 mg/kg.
28-Day Repeated Oral Dose Toxicity Study of Polyethylene Microplastics
During the observation period of the four-week, repeated oral administration of polyethylene microplastics, no significant changes were observed with respect to clinical signs (Table S1) (Table 1) in male and female mice. Additionally, no changes were observed in absolute or relative organ weight (data not shown). Histopathological evaluation revealed granulomatous inflammation with mixed inflammatory cells (lymphocytes and mononuclear cells) in the alveolar space of the lungs from two females in the low-dose group (500 mg/kg), two males and two females of the middle-dose group (1000 mg/kg), and two males and two females of the high-dose group (2000 mg/kg) (Figures 4b-g and 5b-g, Table 2). Granulomatous inflammation is a cellular response to agents that are difficult to eradicate, such as foreign bodies. These findings in the lungs are thought to represent changes caused by foreign bodies, presumed to be polyethylene microplastics. Therefore, we confirmed that the histopathological findings are caused by the administration of polyethylene microplastics and that an inflammatory reaction occurred because of a toxic reaction to foreign substances. Therefore, in the four-week, repeated oral administration toxicity study of polyethylene microplastics, the no-observed-adverse-effect-level (NOAEL) was estimated to be less than 1000 mg/kg in males and 500 mg/kg in females.
Quantitative Evaluation of Polyethylene Microplastics
After pretreatment, the harvested organs were analyzed by Raman spectroscopy. No particles were observed in the low- and middle-dose groups (500 and 1000 mg/kg) (data not shown). A total of 14 particles in the lung (8 for males, 6 for females), 1 particle in the serum (1 for a female), 9 particles in the stomach (2 for males, 7 for females), 5 particles in the duodenum (2 for males, 3 for females), and 4 particles in the ileum (2 for males, 2 for females) were observed in the high-dose group (2000 mg/kg) (Figure 6a-f). No particles were detected in the liver, spleen, kidney, or heart of the high-dose group.
Discussion
As the production of plastic products increases, a concomitant increase in plastic waste is inevitable. Plastic waste collects in the ocean and microplastics are formed by weathering and environmental exposure. Microplastics are environmental pollutants and recently attracted significant interest in wider society. In the marine environment, microplastics have the potential to be ingested by aquatic organisms leading to human exposure through the food chain. One study documented the presence and types of microplastics in human feces [36]. Accordingly, there was increased interest in studying the prevalence and effects of environmental microplastics [37][38][39][40][41]. The impact of microplastics was evaluated using aquatic [42], rodent [41] and human [43] cells. Nevertheless, little is known about the toxicity of microplastics. In this study, using standard toxicity evaluation methods (OECD guideline 408, 423), three concentrations of microplastics (500, 1000, and 2000 mg/kg/day) were administered to ICR mice at single and repeated doses for 28 days to evaluate toxicity. In addition, we determined whether the administered microplastics were present in tissues and organs using Raman spectroscopy.
Polyethylene microplastics (PE-MPs) were pulverized to a size of 10–50 µm to increase their similarity to microplastics found in the environment. The fabricated microplastics were atypical and exhibited a fragmented shape (Figure 1). There have been many studies using spheroid-shaped microplastics [44][45][46]; however, we considered that microplastics existing in the environment are atypical and have various shapes. Therefore, microplastics that reflect these characteristics were prepared and used in the present study. To confirm the median lethal dose (LD50) of PE-MPs, a single oral dose toxicity study was performed in three groups of mice (500, 1000, and 2000 mg/kg). Clinical signs, body weight, mortality, and gross postmortem evaluation at necropsy showed no significant differences between the treated and untreated groups (Figure 2). Therefore, as a result of the single oral dose toxicity study of PE-MPs, we established that the LD50 was greater than 2000 mg/kg. These data provide insight into the response of mammals to short-term microplastic exposure. Based on the single oral dose toxicity study, a rationale for observing the in vivo effects of repeated administration of microplastics was evident. Three treatment groups (500, 1000, and 2000 mg/kg) were established, and a 28-day repeated oral dose toxicity study was conducted to evaluate the effects of microplastics in mice. No specific changes were observed in the treated groups compared with the control group with respect to body weight, food and water consumption, absolute and relative organ weight, and clinical pathology features. There were no animal deaths in this 28-day, repeated-dose toxicity study (Figure 3, Tables 1 and 2).
When evaluating toxicity in laboratory animals, spontaneous findings should be distinguished from those caused by the administered substance. For example, prolapse of the penis was observed in one subject of the middle-dose group (500 mg/kg), but it was not a dose-dependent finding. Additionally, no specific findings were observed by necropsy or histopathological analysis of the male reproductive organs compared with the control group. Therefore, this symptom was considered to be a spontaneous finding. According to a previous study [32], polystyrene microplastics can induce male reproductive toxicity. However, we evaluated the toxicity of PE-MPs, thus the applied substance was different compared with that of the previous study [32]. Nonetheless, the evaluation of male reproductive toxicity by each type of microplastic should be the subject of a future study. With respect to wounds on the dorsal skin at high doses in males (Table S1), these findings may result from a conflict among the mice during housing. Individuals with wounds on the dorsal skin were separated into single housing to prevent additional injury [47,48]. According to previous studies, the administration of PE-MPs to mice induces higher anxiety [49]. Therefore, it is important to document that PE-MP exposure can induce behavioral changes. This may provide a clue as to whether there is a correlation between the nervous system and microplastics. Although our study focused on the toxicity of microplastics, it is worthwhile to conduct additional studies on behavioral and neurophysiology.
According to the histopathological results, granulomatous inflammation caused by foreign bodies was observed in the lungs of the PE-MP-treated groups in both males and females (Figures 4 and 5). Inflammation of the lungs was the major finding after repeated administration of PE-MPs for 4 weeks. These results suggest that repeated exposure to PE-MPs causes damage to the lungs. To determine whether the foreign bodies observed in the lungs were PE-MPs, Raman spectroscopy was performed on the lung tissue, and PE-MPs were detected (Figure 6). Using Raman spectroscopy, the shape and number of PE-MPs were confirmed. In addition, measurements of PE-MPs in specific regions revealed that microplastics circulate throughout the body following absorption from the gastrointestinal tract. Raman spectroscopy is a useful method for detecting microplastics, and several reports have demonstrated the detection of microplastics in living organisms, the environment, and food using this method [50][51][52][53][54][55][56]. However, to our knowledge, no previous report has directly demonstrated the detection of microplastics in rodents using Raman spectroscopy; this was done for the first time in our study.
Single and 28-day repeated oral dose toxicity studies were conducted using PE-MPs. Our study included clinical pathology and histopathological evaluations and confirmed the presence of PE-MPs in specific tissues. We established that the median lethal dose (LD50) was greater than 2000 mg/kg in the single-dose toxicity study using PE-MPs with a particle size of 10-50 µm. The NOAEL value was less than 500 mg/kg in females and 1000 mg/kg in males in the 28-day repeated oral dose toxicity study. Most importantly, in the case of repeated administration for 28 days, PE-MPs were detected in the lungs and inflammation was observed. Raman spectroscopy revealed that PE-MPs were present in the lung, gastrointestinal system, and serum. This suggests that microplastics can accumulate in the body and disseminate to specific organs and serum in vertebrates following exposure. This study provides new insights to improve our understanding of the toxicological effects of PE-MPs and the biological safety of microplastics for human health following exposure.
To overcome some of the limitations of this study, it will be necessary to evaluate the toxicity of repeated microplastic administration for longer than 28 days. In addition, the toxicological mechanisms of microplastics should be studied concurrently. Because humans and other organisms are continuously exposed to microplastics through food intake, the toxicity and effects of long-term exposure to microplastics should be evaluated in future studies.
Conclusions
We hypothesized that polyethylene microplastics (PE-MPs) accumulate in the body and cause damage during long-term exposure. The lethal dose of the polyethylene microplastics was determined to be more than 2000 mg/kg in a single-dose toxicity study, and the no-observed-adverse-effect level (NOAEL) was less than 1000 mg/kg and 500 mg/kg in male and female mice, respectively. Our results indicate that damage occurred in the lungs when PE-MPs were administered repeatedly for 28 days, and microplastics were directly detected in specific tissues and serum from treated mice, which confirms our hypothesis. Further studies will be necessary to identify the molecular mechanisms of toxicity and the effects of long-term exposure to various types of microplastics.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.
The Invisible Discrimination: Biases in the Clinical Approach Regarding Migrants: A Study to Help Ethnopsychology Services and Clinicians
The complexity of migration flows across the world has led to a redefinition of psychological and social services users. The access of migrants from different cultural backgrounds to clinical services or social health services has diversified the demand for concomitant help. Biases and misinterpretations have been created by unaccustomed professionals in this field, which could lead to serious consequences and invalidate diagnostic and treatment procedures. The purpose of this study is to summarize the evidence about errors or prejudices observed in clinical practices regarding the provision of social health services to people from different cultural backgrounds. Results show three main types of biases: racial stereotype activation, ethnocentrism and micro-aggressions. Some implications on the clinical setting were discussed, as being aware of these biases can help mental health professionals manage communication more consciously with users.
Introduction
Despite the fact that the history of humanity has been characterized by innumerable migrations over time, in the last several years, globalization has contributed to the creation of a network that necessitates more intensive interactions between individuals and communities with regard to migration [1,2]. More immediately, recent wars in Syria, Afghanistan, and Ukraine are leading to an increase in the number of refugees across the world who are likely to seek psychological (and not only material) help.
In light of these phenomena, everyday life experiences in the current world are becoming richer and more diversified [3][4][5]. There is an urgent need to develop procedures of interpretation to read and understand these new processes. Moreover, the increasing number of second- and third-generation immigrants worldwide renders critical the definition of identity constructed only on the basis of ethnicity. Words such as "immigrant", "foreigner" and "culture" are no longer sufficient to capture the complexity of this process, which is produced by the continuous increase in migratory flows. Specifically, the use of these labels to refer to a wide range of situations, together with the human cognitive need to process the unknown as if it were already known [6][7][8][9], can have serious consequences in the context of clinical services provided to migrants. In the abovementioned context, the access of migrants or culturally different individuals to public health services is an issue that has provoked an increasing number of studies concerning cultural differences and the development of ad hoc intervention programs [10]. Since the 1970s, scholars have focused on the impact that patient variables such as race can have on mental health professionals' clinical judgments, often adopting an archival method to investigate epidemiologic results about diagnosis rate [11][12][13] and type of treatment [14,15] among patients deemed as having a different cultural background. These studies aimed to clarify the influence of the variable named "culture" within the mental health field, thus initiating the emergence of ethnopsychology [16]. Since then, many studies have investigated how culture may impact the role and work of mental health professionals and patients' concomitant responses. First, some authors have highlighted a clear difference in identifying mental health symptoms according to the cultural background of a patient [17]. Other scholars have noted a lack of validity of related tests and assessment instruments [18,19], some errors of judgment made by mental health professionals [20][21][22] and bias based on bullying [23]. Some researchers have also investigated the tendency of clinicians and their services to interpret behaviors specific to certain cultures using the diagnostic symptoms of Western psychiatry [24], which amounts to the implementation of a form of ethnocentrism [25]. With regard to patients, some authors have highlighted poor use of and certain inequalities in public mental health services [26,27], in addition to a clearly identifiable distrust toward Western therapy [28]. All of these can have a direct impact on the well-being of many people who come to clinicians. Moreover, as discussed above, in many countries, migration flows have raised new issues involving services and clinicians, who need a more precise and error-free key that will help them understand their responsibilities. In fact, these social changes may also affect the way clinicians usually operate [29] or the kinds of training they need to avoid compromising their operational work in shifting circumstances [30]. It is essential to focus on the necessary skills and knowledge that clinicians must acquire to avoid the aforementioned errors. In 2017, the American Psychological Association [31] (p. 7) published a set of guidelines meant to "provide psychologists with a framework from which to consider evolving parameters for the provision of multiculturally competent services". In recent years, many studies have
focused on clarifying the role of racial stereotypes and biases in the medical field [32][33][34]. According to these studies, the variable of culture can indeed affect a medical diagnosis; thus, the necessity to explore cultural processes in the field of psychological services arises. Crucially, the clinical encounter can be affected by difficulties and criticalities that result in the adoption of cognitive shortcuts, which may, in turn, lead to assessment errors [35]. The use of these shortcuts is usually aimed at simplifying the clinical relationship between patients and their mental health professionals; however, they can lead to biases. Our study is, therefore, guided by the following research questions: In the literature, what are the main biases that can be detected among mental health professionals when they are confronted with people from different cultural backgrounds? How do the relevant studies describe the implications of these biases? Thus, our systematic review aims to summarize the main biases identified in the literature as occurring during the clinical encounter between migrants or culturally different patients and mental health professionals. Our study adopts an interactionist epistemological framework [36,37]. According to this position, reality is not given, but it is a process that develops via the process of communication between individuals (interactions). This position differs from the mechanistic (or ontological) paradigm, which postulates reality as external to the observer, governed by empirical laws within ontological entities, such as causality. Sure enough, when we refer to "culture", we are not discussing a defined and static entity but something adaptive and changeable precisely because it is the result of social negotiation [38].
Method
To answer our research questions, we conducted a systematic review. Our methodology was chosen for its strengths. According to Grant and Booth [39], the systematic review method seeks to systematically search for, appraise and synthesize research evidence, often adhering to guidelines on the conduct of a review. Unfortunately, pertinent studies in the intercultural field lack methodological rigor in terms of the search strategy of materials. For this reason, our study adopted a methodology based on a rigorous analysis and an accurate evaluation of the available literature.
Our primary objective was to provide the reader with an overview of the so-called "primary sources" [40], summarizing the main biases in the literature identified as occurring during the clinical encounters between migrants or culturally different patients and mental health professionals.
On our chosen topic, there are many studies that have not been formalized in scientific publications. Moreover, related public opinion has produced a vast number of online posts and communications. For this reason, we chose to conduct a systematic review of this topic in order to provide an overview of the main scientific results obtained regarding our study topic [41,42]. In turn, we found a considerable increase in the amount of research in this area over time; we intend to provide reliable accounts of the extant research in this review [43]. We systematized and summarized the available data using a systematic review method and in accordance with the PRISMA (Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols) protocol.
Inclusion and Exclusion Criteria
The inclusion criteria were the following: (1) full-length articles published in peer-reviewed journals in the English language from 1980 to 2022; (2) empirical articles (quantitative or qualitative) related to the biases shown by professionals toward/against patients with different cultural backgrounds. The following types of articles were excluded: (1) articles that were not peer-reviewed, such as theses, book chapters and conference papers; (2) articles addressing different topics or not specifically including biases regarding patients with different cultural backgrounds; (3) studies on rural-to-urban migrant workers.
Information Sources and Search Strategy
To collect data concerning our aim, we used Scopus as a database. This was because we found Scopus to be the largest and most thorough international database of peer-reviewed literature, including publications offered by other databases. Compared with other databases, Scopus seemed more pertinent to our review. The search strategy was executed by combining keywords pertaining to three different subject areas:
- Mental health: psychology, psychotherapy, psychiatry, counseling;
- Culture and cultural background: cultural, culture, multicultural, minority group, racial, ethnic, race, racism;
- Bias and judgment errors: bias, mistake, stereotyping, prejudice.
Regarding the use of different keywords, our aim was to reach sources discussing the same issue by referring to it differently. We were aware that errors, bias and mistakes are words with different meanings. Nevertheless, since defining them theoretically is beyond the purpose of our study, in this article, we use them as synonymous words.
Data Screening and Extraction
We excluded articles written before 1980. After an initial analysis, we included the word "clinical" in each keyword set since most documents addressed the issue in a nonclinical manner. In turn, we found 629 articles. Afterward, we added the following search strings: "AND NOT (family or couple or marriage)" and "AND NOT (kids or adolescent)" to exclude family therapy, couple's therapy and therapy with children. The searches addressing these areas were, in fact, beyond the scope of this study. The 377 documents we subsequently selected also excluded literature reviews and books (secondary sources), as we limited the search according to the "document type" of "article". Thereafter, the total number of remaining articles was 316. These were reviewed in terms of "title" and "abstract" so we could exclude the non-pertinent articles. More precisely, our exclusion criteria concerned the type of treatment (e.g., group psychotherapy) and the research area (e.g., assessment instruments and usage of mental health public services). After excluding 283 articles, the final number of the selected documents became 33. Details of this process are provided below and displayed in the flow chart in Figure 1. The 33 articles covered a period of 39 years, from 1980 to 2022. They were equally divided among the three main outcomes. All the articles we found were written in English, and the majority (99.9%) of the studies were conducted in the United States of America (USA). All the selected articles are displayed in Table 1.
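The keyword sets and exclusion strings described above can be combined into a single Boolean search string. The following sketch shows one hypothetical way such a Scopus-style query could be composed; the field codes (e.g., TITLE-ABS-KEY, DOCTYPE, PUBYEAR) and the final string are illustrative assumptions, not the exact query used in this review.

```python
# Hypothetical sketch of composing a Scopus-style Boolean query from the
# three keyword sets and the exclusion strings described in the Method section.
mental_health = ["psychology", "psychotherapy", "psychiatry", "counseling"]
culture = ["cultural", "culture", "multicultural", "minority group",
           "racial", "ethnic", "race", "racism"]
bias = ["bias", "mistake", "stereotyping", "prejudice"]

def or_block(terms):
    """Join a keyword set into a parenthesised OR block, quoting phrases."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

# TITLE-ABS-KEY, PUBYEAR and DOCTYPE are assumed Scopus field codes, used here
# only to illustrate how the inclusion and exclusion criteria could be encoded.
query = (
    "TITLE-ABS-KEY(" + " AND ".join(
        [or_block(mental_health), or_block(culture), or_block(bias), "clinical"]
    ) + ")"
    " AND NOT (family OR couple OR marriage)"
    " AND NOT (kids OR adolescent)"
    " AND PUBYEAR > 1979 AND DOCTYPE(ar)"
)
print(query)
```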
Prisma Statement:
The reporting of this systematic review was guided by the standards of the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) statement.
It is essential to mention that most of the articles we selected examined the study topic using two methodologies. One methodology involved presenting a clinical study in a video, audio, or textual format to several participants (psychologists, therapists, counselors or psychiatrists). Subsequently, the participants answered a questionnaire about global evaluation, clinical judgments, their severity and the perception of factors that can emerge during therapy.
The other modality involved analyzing the epidemiological characteristics of a specific sample in order to figure out the reason why mental health professionals choose one diagnosis over another. These two modalities were aimed at investigating if and how culture is a variable that can influence the clinical judgment or the perception of a case's severity. Only two articles addressed the above issue using a different methodology. One of them [56] was a phenomenological inquiry conducted by distributing questionnaires to 108 members of a multicultural counseling and psychotherapy training organization; the questionnaires asked them about their experience regarding the ways in which the issues of race and culture affect counseling, psychotherapy and psychological intervention. The other [50] tried to offer a dynamic point of view about the relationship between a mental health professional and a migrant patient by considering the construct of counter-transference.
From our analysis, we found that the presence of judgment biases and distortions in such a situation was controversial. In fact, four of the studies [11,44,52,53] did not provide evidence of any variation in variables such as diagnosis, global evaluation or the quality of the therapeutic relationship that was influenced by cultural variables. This result was interpreted as evidence of the absence of biases and prejudices toward migrant patients. However, the use of clinical vignettes assumed that the diagnosis was an individual cognitive process that could be isolated from the context, with ethnicity deemed an independent variable [53]. Hence, we did not consider the possible contingencies that can play an influential role in a real clinical situation. Therefore, one must remain cautious in interpreting these results.
Results
All the studies highlighted the presence of biases that could be classified into three macro-categories: the activation of racial stereotypes, ethnocentrism and micro-aggressions. Each of these three categories is discussed below.
Racial Stereotypes
Some of the selected studies highlighted how stereotypes and cultural prejudices can affect the therapeutic pathway. These judgmental errors are based on the so-called representativeness heuristic [69], which leads to the categorization of one stimulus based on its similarity with another stimulus or a set of stimuli. This mental operation results in an improper generalization of judgments and evaluations used within a specific set of stimuli to easily categorize a new stimulus.
As per our review, an ethnic minority patient might be seen as a member of the category "Afro-American", "Hispanic", or simply "different" or "other".Subsequently, mental health professionals may use characteristics and peculiarities attributed to a specific category throughout the process of knowing their patients.
The contents of these prejudices, as individuated by the studies, can be organized into five categories. The first type of prejudice regards the dangerousness and the violent nature of patients belonging to specific ethnic groups, particularly Afro-Americans. It was found that this type of prejudice may lead mental health professionals to use tranquilizers, isolation and restraints in a more consistent way and decide not to use recreational and occupational therapies [14]. According to a study by Lawson, Hepler, Holladay and Cuffel [54], this kind of cognitive bias may be one of the reasons behind the higher number of hospitalizations among black mental patients compared with Caucasian mental patients. Such a type of prejudice may also influence a diagnosis: Afro-American patients were found to be more likely to receive a diagnosis consistent with violence issues, aggressiveness and suspiciousness compared with patients of other ethnicities [21,51].
The second type of prejudice concerns work issues. Jones and Grey's [15] study found that most mental health professionals consider the symptoms shown by black patients as issues concerning their working situation and tend to exclude the possibility of mood disorders. Three other studies [58,60,61] found that in a counseling setting, culture might affect the evaluation of a patient's future work-related potential. The authors interpreted these results as evidence for the argument that among mental health professionals, stereotypes involved in evaluating the clinical situation of patients may lead to divergent conclusions. The third type of prejudice concerns the abuse of alcohol by Afro-American people. The study by Luepnitz, Randolph and Gutsh [48] showed that in a group of white and black patients with the same symptoms, the latter were more likely to receive a diagnosis of alcoholism owing to the widespread common-sense belief that black people drink more than white people.
The fourth and fifth types of prejudice could possibly lead mental health professionals to evaluate black and white patients' origins in different ways based on some outcomes from the scientific literature: black patients were assessed to have lower levels of verbal skills [12] and a higher rate of schizophrenia [47] compared with white people. In the study by Bell and Mehta [47], a phenomenon called misdiagnosis emerged: most black patients were more likely to receive an improper diagnosis of schizophrenia, even if their symptoms were more consistent with a diagnosis of manic depressive disorder. In fact, many studies underlined the issue of the overdiagnosis of schizophrenia [17,20,49,59].
Related to stereotypical representations, understood as strong and rigid opinions not acquired through experience and independent of the evaluation of a single case, we need to consider mental health professionals' possible perception that cultural differences make it difficult to manage current or future clinical situations [14,15,42,46]. This kind of judgmental error can be seen as a prediction, and such a mental operation assumes that an empirical bond exists between two entities [70]: considering the ontological existence of the entity-cause (in our case, cultural diversity), it is possible to foresee the entity-effect (in our case, difficulty and stress). In our review, this prediction influenced how mental health professionals presented themselves to and acted toward migrants [71]; clinicians evaluated the cross-cultural clinical situation as more challenging and harder even before meeting their patients.
A common issue in several of the selected studies concerned these prejudices' implicit or explicit nature. Three of the studies [57,64,66] used different methods (presentation of prime words, Implicit Association Test) to investigate implicit stereotypes toward immigrant patients. Two of these three studies [57,64] provided evidence of implicit biases. Katz and Hoyt [66] found that the variance of an outcome expectations scale among clinicians was fully explained by the score concerning the explicit nature of their prejudices. Nevertheless, they suggested that social desirability may be a confounding variable among mental health professionals.
Ethnocentrism
Another error emerging from the studies was ethnocentrism. This word refers to the interpretation of a behavior or a sentence acted or pronounced by the patient and detected by the mental health professional based on psychological theories and practices derived from the standards and attitudes normalized by the professional's native culture. As per our review, this judgmental error assumes that the cognitive categories, behaviors and symptoms shown by the patients are the same as those associated with Western cultural scripts. Indeed, in an illustrative study, Li-Repac [45] showed that mental health professionals tended to evaluate Asian patients as more depressed and inhibited than their Caucasian counterparts. According to the researcher, these differences could stem from a "Western" interpretation of the behavior of Chinese clients. In fact, "the value placed on being frank and open by American culture contrasts with the Chinese tendency to be quiet, to listen to, and to be cautious about one's effect on others" [65, p. 338]. Thus, the difference in interpersonal style might be seen by a white therapist as a sign of social introversion and even diagnosed as depression.
Similarly, in another study, Arroyo [55] underlined that differences in pronunciation and accent in Hispanic patients were interpreted as signs of reduced emotional expressiveness or blunted affect. In this study, the cultural background influenced the interpretation of patients' speech; thus, linguistic differences were confused with emotional withdrawal.
Two other studies [28,62] focused on ethnocentrism regarding assessment instruments: specific answers in diagnostic interviews were immediately linked to a certain diagnosis. However, the diagnostic system's norms did not adequately consider variations among different cultures. Thus, this phenomenon led mental health professionals to describe the patients in front of them with labels that were deemed as lacking validity.
Ridley [50] tried to provide a psychoanalytic point of view on the above question by proposing the paradigm of pseudo-transference, a phenomenon reportedly occurring during interracial therapy. According to the author, some defensive reactions of black clients were triggered by behaviors and stereotyped attitudes among white therapists. Consequently, the therapist might have misinterpreted and labeled their patient's response as pathological even though the reaction was justified.
Microaggression
Another form of bias that emerged from our systematic review concerned microaggressions. This type of bias includes a wide range of judgmental errors that, in some cases, could be classified as belonging to the categories we proposed above. However, because of the focus received by this error and its peculiar form, it was necessary to diversify this category from those previously examined. In 1978, the first author to use the term microaggression was Chester Pierce [72], who defined it as a subtle, stunning, often automatic and non-verbal exchange that berates a person. In 2007, Sue et al. [73] (p. 273) reused this term by specifying its meaning: "brief, everyday exchanges that send denigrating messages to people of color because they belong to a racial minority group". In any case, microaggressions consist of a multiplicity of attitudes and communications, intentional and unintentional, which depict a lack of sensibility, respect and attention toward some aspects of a culture that is different from someone's own. Sue et al. [73] described three different types of microaggressions. First, Microassaults are severe offenses, always explicit and intentional, involving denigrations of an individual's racial group (e.g., referring to someone as "colored"). Second, Microinsults are subtler and more unconscious communications that put down an individual's racial group (e.g., asking a person of color, "how did you get this job?"). Third, Microinvalidations are communications that tend to negate or deny the thoughts, feelings or experiences of a person of color (e.g., telling a person of color, "I don't see color"). Sue et al. [73] showed that the last two are quite common in the field of counseling.
Further, Constantine [63] identified 12 categories of racial microaggressions that can occur in a counseling context: (a) colorblindness, (b) overidentification, (c) denial of personal or individual racism, (d) minimization of racial-cultural issues, (e) assignment of a unique or special status based on race or ethnicity, (f) stereotypical assumptions about members of a racial or ethnic group, (g) accusations of hypersensitivity regarding racial or cultural issues, (h) the meritocracy myth, (i) culturally insensitive treatment considerations or recommendations, (j) acceptance of less-than-optimal behaviors based on racial or cultural group membership, (k) idealization and (l) dysfunctional helping/patronization. These categories were converted into a 12-item, three-point Likert-type questionnaire, the Racial Microaggressions in Counseling Scale (RMCS), measuring respondents' perceptions of racial microaggressions in counseling and the perceived impact of these microaggressions on them.
According to researchers, many of the racial microaggressions committed in counseling rooms have less to do with the counselor saying or doing something offensive than with minimizing the importance of cultural issues or communicating defensiveness or discomfort with regard to being reminded about one's biases or prejudices [67,74]. Moreover, the working alliance appeared to moderate the impact of perceived microaggressions on clients' psychological well-being [65]. Clients who perceived microaggressions with their therapists but maintained a high alliance still reported improvements: a strong alliance could temper the negative impact of ruptures caused by microaggressions.
The nature of microaggressions as subtle and unconscious phenomena could prevent therapists from recognizing that their clients experience such offenses. Furthermore, therapists are deemed reluctant and uncomfortable when addressing issues of race and ethnicity. In fact, client and therapist dyads that discussed the microaggression experiences had higher-quality alliances than those that did not discuss them, being deemed similar to the dyads in which there was no perceived microaggression. According to some researchers, this result revealed the benefit of addressing the missteps that can occur during therapy [67].
Hook and colleagues [68] aimed to examine how the perception of cultural humility is associated with the frequency and impact of microaggressions. The researchers defined cultural humility as "the ability to maintain an interpersonal stance that is other-oriented (or open to the other) in relation to aspects of cultural identity that are most important to the client" [68] (p. 2). Higher rates of perceived cultural humility in therapists were found to be associated with a lowered frequency and impact of racial microaggressions. These findings were consistent with the hypothesis concerning the importance of this characteristic during therapy involving ethnically diverse clients. Counselors who were perceived by their patients as having high levels of cultural humility were deemed less likely to commit racial microaggressions; when they did so, they were able to acknowledge and admit their limitations and mistakes regarding cultural issues.
Discussion
This review was aimed at evaluating the presence and typology of cognitive errors committed by mental health professionals toward those patients who are perceived as culturally different. Although some of the studies denied the existence of these errors, most of them contributed to confirming the hypothesis regarding the presence of cognitive mistakes and heuristics in mental health services; these mistakes could be divided into three main categories.
The first category concerns stereotypes. In the clinical encounter with the immigrant patient, mental health professionals can be influenced by the information concerning a patient's cultural background. This information can contribute to the formulation of prejudgments based on ideas and beliefs that are commonly accepted or theories that belong to the observers' native cultural scripts, along with notions of common sense, none of which are valid in a scientific way. The results of our review also showed that stereotypes could be divided into five different areas:
- Violent nature, which concerns the unjustified attribution of characteristics to a supposed violent nature of the patient. This attribution leads to cognitive distortions concerning diagnostic evaluation and treatment indications;
- Work issues, which deem the consideration of problems in the patient's work-related sphere as central to diagnostic evaluation and related to their future potential;
- Alcohol abuse, which concerns the belief, affecting the diagnostic evaluation, that problems concerning the abuse of alcoholic substances are diffused and common among people of specific ethnicities;
- Lower verbal skills, which concern a prejudicial negative evaluation regarding the patient's verbal capacity that leads to distortion during diagnostic and treatment processes;
- Higher rates of schizophrenia, which concern the act of basing an individual diagnosis on the epidemiological frequency of a diagnosis within a specific ethnic group; this leads to the phenomenon of misdiagnosis.
A further critical aspect regarding the judgmental errors related to stereotyping concerns foreseeing a higher degree of difficulty and stress in the management of culturally distant patients. This prediction is epistemologically unfounded since it is based on applying a modality of knowledge that is typical of a mechanistic paradigm. The latter postulates the existence of empirical relations within ontological entities [75]. This error may have significant effects on the interaction between therapists and patients during the clinical pathway regarding diagnostic judgments and evaluations of the displayed symptoms.
The second category of errors concerns the phenomenon of ethnocentrism. In this case, mental health professionals take for granted that the cognitive categories they use to define their experiences are identical to the ones used by their patients. This error in the interpretation of some behaviors shown by the patients and in the use of assessment instruments that are valid for Western cultural scripts may have substantial consequences for the culturally different patients because their behaviors are classified using diagnostic labels that lack validity, as they do not account for cultural variations.
The third category of judgmental error concerns the phenomenon of microaggressions, a subtle bias appearing in the communicative exchange between professionals and immigrant patients. According to the studies we reviewed, in clinical situations, verbal and non-verbal exchanges often contain offensive messages toward patients whose cultural backgrounds are different from those of mental health professionals. Moreover, microaggression can involve mental health professionals using interactive modalities that minimize the emphasis on cultural issues and expressing discomfort at becoming aware of bias, stereotypes and prejudices shown toward some patients. Thus, microaggressions affect the perceived therapeutic alliance and can provoke therapy drop-outs.
It is essential to underline that the errors found in our analysis of the literature may be both implicit and explicit. Indeed, several of the selected studies, characterized by specific methodologies, emphasized that the reasons leading to decisional processes (diagnostic evaluation, symptom interpretation and treatment) could be unconscious.
Limitations and Future Research
First, this review is limited by the scarcity of specific studies involving patients from different cultural backgrounds. It focuses on research conducted in the Western academic world; when discussing ethnic minorities or migrants from unfamiliar areas, there is, thus, the risk of taking many concepts for granted. It is important to underline that most of the studies we examined were conducted in the USA; some of its ethnic minorities are indigenous. This factor may have guided the results and analyses, and therefore, one must tread with caution in generalizing the results. Furthermore, there was considerable variability among the studies included, such as the number of participants, survey methods and evaluation of the interventions. Future studies should aim to investigate the topic under review in different contexts, e.g., the European context.
Conclusions
According to World Bank and UN data, it is possible to anticipate that in the next 35 years, the total population of the world will increase by 50%, especially in the poorest parts of the world, in addition to a population decrease in most European countries. This would probably intensify migration flows from the poorest and most populous parts of the world to the richest and least populous areas. In this scenario, reflections regarding the interactive modalities aimed at welcoming communities and migrants appear to be imperative. Even in the psychological and clinical field, it is vital to confront this theme since the number of migrants who seek help will increase not only in public health services but also in private facilities. Results from the available studies clarify that during the clinical encounter and the interpretative process, mental health professionals are likely to resort to biases and distortions. The clinical encounter with migrant patients is experienced as a problem to which the majority of mental health professionals respond by adopting heuristics in order to reduce the feeling of disorientation. Instead, it is crucial to adopt an intercultural perspective according to which the culturally different person is not seen as "other" or "diverse" but as an actor in a co-participated process of community building. That is why it is particularly important to help mental health professionals recognize their own biases and cognitive issues and to put them in interaction with those of the users in a co-participatory process that can renew the identity of psychological work and the mental health service itself. Only in this way can alterity be valued and perhaps self-knowledge be continued over time.
Figure 1. PRISMA flow diagram illustrating the processes of literature searches and screening.
Table 1. Selection of studies (columns: Author, Country, Purpose of Study, Type of Study and Method(s)).
1. "Racial attribution effects on clinical judgment: A failure to replicate among white clinicians"; Bloch, Weitz, Abramowitz, 1980 [44]; USA; to investigate the variation in the countertransference phenomenon according to the patient's ethnicity.
Multistatic Sensing of Passive Targets Using 6G Cellular Infrastructure
Sensing using cellular infrastructure may be one of the defining features of sixth generation (6G) wireless systems. Wideband 6G communication channels operating at higher frequency bands (upper mmWave bands) are better modeled using clustered geometric channel models. In this paper, we propose methods for detecting passive targets and estimating their position using a communication deployment without any assistance from the target. A novel AI architecture called CsiSenseNet is developed for this purpose. We analyze the resolution, coverage and position uncertainty for practical indoor deployments. Using the proposed method, we show that a human-sized target can be sensed with high accuracy and sub-meter positioning errors in a practical indoor deployment scenario.
I. INTRODUCTION
The sixth generation (6G) wireless systems will continue to evolve towards higher frequency bands and wider bandwidths [1]. Typical 6G deployment will be spread over low, mid and higher frequency bands to enhance coverage and capacity [2]. The increase in operating frequency could result in communication bands operating closer to traditional radar bands. We see this trend already in fifth generation (5G) mmWave communication bands merging with K band and Ka band (26.5 GHz−40 GHz) and this trend will continue in 6G. High frequency operation of 6G enables transceivers to employ massive antenna arrays. This coupled with wider bandwidth can aid in high resolution sensing solutions with fine range, Doppler and angular resolutions [3], [4].
As visualized in Fig. 1, sensing of targets (also referred to as passive objects) involves target detection and, if targets are deemed to be present, estimation of their parameters [5]. Passive sensing refers to sensing of targets that have no communication capabilities and do not aid the sensing process in any form. Employing communication infrastructure for passive sensing of objects can enable several new use cases, such as optimizing energy consumption by controlling internet of things (IoT) devices, intruder detection, and tracking of equipment, among others [6]. In these systems, sensing can piggyback on ubiquitous communication infrastructure, thereby reducing the cost of realizing these use cases. Sensing using communication signals can also ensure privacy and security compared with existing methods, which typically employ cameras to sense passive targets in indoor environments [7]. Methods for sensing passive objects from the reflected signal using radars, along with other onboard sensors, are commonly employed in automotive use cases [8]. These methods cannot be directly extended towards passive sensing using communication infrastructure, since the sensors needed are typically not available and mimicking a traditional radar with these systems requires full-duplex operation to harness the reflected signals from the environment [4]. In [9]-[12], the authors propose methods which use wireless signals for passive sensing. These methods extract features like received signal strength indication (RSSI), channel state information (CSI) or micro-Doppler shifts from the communication signal for passive sensing, based on mid-band (2-10 GHz) carriers. High-frequency 6G channels exhibit clustered multi-paths, with each cluster pertaining to a highly reflective surface in the environment. These channels are generally represented through environment-specific ray-tracing channel models. To ensure that the conclusions drawn from the work are applicable to many environments, stochastic geometric models, such as the Saleh-Valenzuela (SV) channel model [13], [14], are more appropriate. To the best of our knowledge, this model has not been adopted towards indoor passive sensing. The passive target localization problem is also treated in the literature under the umbrella of device-free localization, where the focus is only on localization and not on target detection [15], [16]. Typically, these works use non-cellular channel models, and the proposed artificial intelligence (AI) methods do not exploit the correlation in the angular domain across multiple links. In parallel, there have been works on using radio tomographic imaging (RTI) for position estimation [17]. In these methods, a high-resolution attenuation image caused by the presence of the object is exploited by an image estimator to arrive at the position. These methods require many communication links to obtain a high-resolution attenuation image for accurate position estimation and are not suitable for practical indoor cellular deployment.
In this paper, we develop methods that exploit 6G infrastructure capabilities towards sensing of passive targets. The main contributions of this paper are summarized as follows. (i) An AI method that exploits the multiple-input multiple-output (MIMO) CSI from multiple links between transmitter and receiver towards target sensing via perturbations in the geometric channel model. The method naturally exploits the angular dimension of the CSI, using the rich beamforming capability of the large MIMO array, towards target sensing and parameter estimation. (ii) Analysis of the resolution (i.e., size of the target that can be sensed), coverage (i.e., probability of detection of a fixed-size target at different spatial locations), and position estimation accuracy, using practical indoor cellular deployments. (iii) Comparison of the proposed position estimation method with an angle-based method to demonstrate the utility of the proposed AI-based solution.
II. SYSTEM MODEL
In the following, we describe the system model for target sensing in the indoor environment. We assume that the deployment has multiple links between transmit and receive devices having beamforming capabilities. We consider a single transmit device creating links towards L receive devices. In a typical indoor deployment, the transmit device could be a fixed anchor UE with an omni-directional antenna and the receive devices could be base stations (BSs) with beamforming capability. In the rest of the paper, we use the terms transmitter and receiver to keep the discussion more general.
A. Channel Model
Channels in 6G systems operating at high frequency bands (> 24 GHz) are sparse. Propagation paths in these channels are primarily due to the highly reflective scatterers in the environment, and they arrive as clusters. Deterministic channel models based on ray tracing are commonly employed at these frequency bands. However, such models are environment-specific and do not generalize well to other environments. To overcome this, and to ensure that the inferences drawn from this work are widely applicable, we adopt a stochastic geometric channel model, the SV channel model [13], [14]. In this model, each cluster is comprised of a discrete set of rays. We consider transmissions from a single low-cost transmitter with an omni-directional antenna pattern and L receivers, each having a uniform linear array (ULA) with N_r elements separated by half a wavelength. Moreover, we consider a communication-centric integrated sensing and communication (ISAC) system, where only a small portion of the 6G bandwidth is used for sensing, resulting in a narrowband channel with only spatial resolution [4].
1) Default Channel without Target: During the default or null state, i.e., when the object is absent, the CSI for the link between the transmit device and the l-th receive device in the indoor environment is

h^null_l = Σ_{v=1}^{N_cl} Σ_{u=1}^{N_rays} β_{l,u,v} G(ψ_{l,u,v}) a(φ_{l,u,v}),   (1)

where h^null_l ∈ C^{N_r}, l ∈ {1, . . . , L}. N_cl is the number of clusters and N_rays indicates the number of rays within each cluster. The u-th ray of the v-th cluster corresponding to the l-th link has a complex gain β_{l,u,v}. Each ray has an angle of departure from the transmit array ψ_{l,u,v} and an angle of arrival at the receive array φ_{l,u,v}. The transmit gain pattern is denoted by G(ψ_{l,u,v}), while the receive array response of the half-wavelength ULA is given by a(φ) = [1, e^{jπ sin φ}, . . . , e^{jπ(N_r−1) sin φ}]^T. All angles are measured in the local coordinate frame of the transmitter or receivers.
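As a rough illustration of how a null-state CSI vector of the form in (1) can be synthesized, the following numpy sketch sums per-ray contributions over clusters for one link. The cluster count, angle spread, gain statistics and normalization are illustrative assumptions, not the SV parameterization used in the paper.

```python
import numpy as np

def ula_response(phi, n_r):
    """Steering vector of a half-wavelength ULA for arrival angle phi (radians)."""
    return np.exp(1j * np.pi * np.arange(n_r) * np.sin(phi))

def null_channel(n_r, n_cl=4, n_rays=10, seed=0):
    """Draw one clustered narrowband CSI vector h_l for a single link.

    Cluster centers and per-ray angle spreads are placeholder choices made
    only to show the structure of the clustered sum in (1)."""
    rng = np.random.default_rng(seed)
    h = np.zeros(n_r, dtype=complex)
    for _ in range(n_cl):
        center_aoa = rng.uniform(-np.pi / 2, np.pi / 2)   # cluster-center arrival angle
        for _ in range(n_rays):
            beta = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)  # complex ray gain
            phi = center_aoa + rng.normal(scale=np.deg2rad(5))      # ray arrival angle
            g_tx = 1.0                                              # omni transmit gain
            h += beta * g_tx * ula_response(phi, n_r)
    return h / np.sqrt(n_cl * n_rays)

h_null = null_channel(n_r=8)
```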
2) Perturbed Channel with Target: The CSI pertaining to each link gets perturbed uniquely when the object is placed in the environment. As shown in Fig. 1, the occlusion angles O^1_tx and O^l_rx are created based on the positions of the target, transmitter and receiver. Due to the high frequency of operation, we assume that the target completely blocks the rays and there is no diffraction. This creates L + 1 convex shadow regions, namely S_tx ⊂ R^2 behind the object as seen from the transmitter and S_rx,l ⊂ R^2 behind the object as seen from receiver l. Then, under the alternate hypothesis, the CSI of the channel is

h_l = Σ_{v=1}^{N_cl} Σ_{u=1}^{N_rays} β_{l,u,v} G(ψ_{l,u,v}) a(φ_{l,u,v}) 1{x(φ_{l,u,v}, ψ_{l,u,v}) ∉ S_tx ∪ S_rx,l} + Σ_{s=1}^{N_s} α_s G(ψ_T) a(φ_s),   (2)

where x(φ_{l,u,v}, ψ_{l,u,v}) ∈ R^2 is the unique location induced by the angle of departure ψ_{l,u,v} from the transmitter and the angle of arrival φ_{l,u,v} at the l-th receiver, and 1{·} is the indicator that removes rays whose induced location falls in a shadow region. The second term of (2) represents the contribution due to the scattering from the target, resulting in N_s rays arriving at the receiver, having complex gains α_s, angles of arrival φ_s and a fixed angle of departure ψ_T. Here, ψ_T denotes the angle of the impinging ray from the transmitter to the center of the target. So far, we have assumed a single target of interest in the scene under the alternate hypothesis. However, when there are multiple targets (T > 1), the perturbed CSI is due to the creation of T(L + 1) shadow regions, together with the new reflection paths reaching the receivers due to the scattering from the T targets. Without loss of generality, the proposed methods can be extended to multi-target scenarios with much richer interaction between the objects and the impinging rays.
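One simple way to realize the shadow-region test behind the indicator in (2) is to check, for a circular target, whether the segment from the transmitter (or receiver) to the ray's induced scatterer location intersects the target disc. The sketch below is our own geometric construction under the paper's circular-target assumption; function names and example coordinates are illustrative.

```python
import numpy as np

def segment_blocked(q, p, c, r):
    """True if the segment from node q to point p passes through the disc
    (center c, radius r) modeling the target, i.e. p lies in the shadow of q."""
    q, p, c = map(np.asarray, (q, p, c))
    d = p - q
    t = np.clip(np.dot(c - q, d) / np.dot(d, d), 0.0, 1.0)  # closest point on the segment
    return np.linalg.norm(q + t * d - c) < r

def ray_survives(x, tx_pos, rx_pos, target_center, target_radius):
    """A ray with induced scatterer location x contributes to (2) only if x is
    outside both the transmitter-side and the receiver-side shadow regions."""
    blocked_tx = segment_blocked(tx_pos, x, target_center, target_radius)
    blocked_rx = segment_blocked(rx_pos, x, target_center, target_radius)
    return not (blocked_tx or blocked_rx)

# Example: scatterer at (4, 3), target disc of radius 0.4 m at (2, 1.5);
# the ray is blocked on the transmitter side, so it prints False.
print(ray_survives([4, 3], [0, 0], [5, 0], [2, 1.5], 0.4))
```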
B. Deployment Model
We consider an indoor deployment in a 25 m^2 area with a transmit device (fixed anchor UE) having an omni-directional antenna (N_t = 1) and multiple receive devices (BSs) having a ULA with N_r = 8 antennas. We place the transmit and receive devices such that the boresight direction is normal to the walls, as shown in Fig. 1. Each receiver has beamforming capability to scan from −π/2 to +π/2 using N_b beams. An illustration of three deployment scenarios with the number of links L ∈ {1, 2, 3} and receivers performing a beam scan using N_b = 7 beams is shown in Fig. 2. During each coherent processing interval (CPI), CSI is captured in all the N_b = 7 angular dimensions synchronously for each link and transferred to an AI agent, where detection and parameter estimation of the passive target are performed.
III. METHODS
The complex relationship between the high-dimensional CSI space and the target detection and parameter estimation outputs can be learned by AI methods directly from data, without explicit modeling. In this section, we discuss the AI methods and the required data pre-processing for the sensing problem.
A. Data Preprocessing and AI Architecture
We represent the CSI for each CPI in the form of a 2D frame, which is fed to an AI-based image processing pipeline consisting of stacked convolutional neural network (CNN) layers to extract relevant features. Similar to AI-based image processing, the pipeline is supervised to learn the relation between the input 2D-CSI space and the output space. The structure of the CSI data is used to tune the hyperparameters of the AI pipeline. Tuning is done in such a way that the network is as shallow as possible while still yielding good performance, so that it can be used on embedded platforms. We call this tuned CNN network CsiSenseNet; it is shown in Fig. 3. Both the target detection and position estimation pipelines share the same network except for the last two layers, shown in the green shaded area for target detection and the blue shaded area for position estimation.
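The shared-trunk, two-head structure described above can be illustrated with a minimal PyTorch sketch. This is not the published CsiSenseNet architecture from Fig. 3; the layer counts, channel widths, kernel sizes and head dimensions are assumptions made only to show the structure.

```python
import torch
import torch.nn as nn

class CsiSenseNetSketch(nn.Module):
    """Illustrative stand-in for CsiSenseNet: a shallow CNN trunk shared by a
    detection head and a position-regression head."""
    def __init__(self, n_beams=7, n_cols=8 * 3):  # input frame of size N_b x (L * N_r)
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),  # 2 channels: Re/Im of the CSI
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        feat = 32 * n_beams * n_cols
        self.detect_head = nn.Sequential(nn.Linear(feat, 64), nn.ReLU(),
                                         nn.Linear(64, 1))    # logit: target present?
        self.position_head = nn.Sequential(nn.Linear(feat, 64), nn.ReLU(),
                                           nn.Linear(64, 2))  # (x, y) estimate

    def forward(self, frame):
        z = self.trunk(frame)
        return self.detect_head(z), self.position_head(z)

# A complex N_b x (L * N_r) frame is fed as a 2-channel real tensor.
net = CsiSenseNetSketch()
logit, pos = net(torch.randn(1, 2, 7, 24))
```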
The CSI for all the L links is concatenated in the horizontal dimension; that is, for a given receiver beamforming angle θ_i applied at all receivers,

h_{θ_i} = [ h^T_{1,θ_i} | h^T_{2,θ_i} | . . . | h^T_{L,θ_i} ],

where | is the concatenation operation, h_{θ_i} is the aggregated CSI in a particular angular direction θ_i and h^T_{l,θ_i} ∈ C^{N_r}, l ∈ {1, . . . , L}, denotes the CSI for the l-th link. The CSI collected from the different angular directions is further concatenated in the vertical dimension, forming a 2D-CSI frame of size N_b × (L N_r). Both pipelines are separately trained for target detection and position estimation, respectively.
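The frame assembly can be written compactly as in the following sketch; the random placeholder CSI stands in for simulator output, and the helper name is our own.

```python
import numpy as np

def build_csi_frame(csi):
    """Assemble the 2D-CSI frame from per-link, per-beam CSI vectors.

    csi[l][i] is the length-N_r CSI vector of link l measured with receiver
    beam angle theta_i; links are concatenated horizontally and beam angles
    stacked vertically, giving an N_b x (L * N_r) complex frame."""
    n_links, n_beams = len(csi), len(csi[0])
    rows = [np.concatenate([csi[l][i] for l in range(n_links)]) for i in range(n_beams)]
    return np.stack(rows)            # shape (N_b, L * N_r)

# Example with L = 3 links, N_b = 7 beams, N_r = 8 antennas (random placeholder CSI)
rng = np.random.default_rng(1)
csi = [[rng.normal(size=8) + 1j * rng.normal(size=8) for _ in range(7)] for _ in range(3)]
frame = build_csi_frame(csi)         # frame.shape == (7, 24)
```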
B. Target Detection
For target detection, several realizations of the channel H are generated using a simulator for both hypotheses (i.e., with and without a target). A labeled training set consisting of M records {(H_i, hyp_i) | i = 1, 2, . . . , M}, with H_i a channel realization and hyp_i the corresponding hypothesis, is used to supervise the target detection (green shaded) part of the AI pipeline shown in Fig. 3. The detection network is trained to minimize the binary cross-entropy loss.
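A training step minimizing the binary cross-entropy loss could look as follows. The sketch reuses the illustrative CsiSenseNetSketch model defined earlier, and the random tensors stand in for simulator-generated (H_i, hyp_i) records; batch size and learning rate are arbitrary.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder labeled set: 64 two-channel CSI frames with binary hypothesis labels.
net = CsiSenseNetSketch()                       # illustrative two-head model from above
frames = torch.randn(64, 2, 7, 24)
labels = torch.randint(0, 2, (64,)).float()     # 1.0 = target present, 0.0 = absent
loader = DataLoader(TensorDataset(frames, labels), batch_size=16, shuffle=True)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for frame, hyp in loader:
    logit, _ = net(frame)                       # the position head is ignored here
    loss = criterion(logit.squeeze(-1), hyp)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```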
C. Position Estimation
The position estimation part of CsiSenseNet, shown in the blue shaded area of Fig. 3, has a two-neuron output (for the X and Y coordinate estimates) with a linear activation. Similar to target detection, a labeled data set consisting of {(H_i, p_i) | i = 1, 2, . . . , M}, with p_i ∈ R^2 representing the position of the target for channel realization H_i, is used to supervise the position estimation network.
We use angle-based position estimation as a baseline against the proposed CsiSenseNet-based position estimator. Since, in the representative deployment scenarios shown in Fig. 2, the receivers employ multiple antennas and are beamforming capable, the baseline method identifies, at each receiver, the angular direction of the beam observing the maximum perturbation (attenuation) and triangulates these directions to obtain the position.
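One way to realize such a baseline is to convert the most-attenuated beam direction at each receiver into a bearing line and intersect the lines in a least-squares sense. The following sketch shows this triangulation step under assumed global bearing angles; the paper's exact baseline implementation may differ.

```python
import numpy as np

def triangulate(rx_positions, bearings):
    """Least-squares intersection of bearing lines from multiple receivers.

    rx_positions: (L, 2) receiver coordinates; bearings: length-L global
    bearing angles (radians) of the most-perturbed beam at each receiver.
    Each bearing defines a line through its receiver; the returned point
    minimizes the sum of squared perpendicular distances to those lines."""
    A, b = [], []
    for (x, y), th in zip(rx_positions, bearings):
        n = np.array([-np.sin(th), np.cos(th)])   # unit normal to the bearing line
        A.append(n)
        b.append(n @ np.array([x, y]))
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p

# Two receivers at (0, 0) and (5, 0) whose perturbed beams point at a target near (2, 3)
est = triangulate(np.array([[0.0, 0.0], [5.0, 0.0]]),
                  [np.arctan2(3, 2), np.arctan2(3, -3)])  # est is approximately [2, 3]
```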
A. Simulation Setting
We use the Matlab for MIMO (MFM) simulator discussed in [18] to create the deployments shown in Fig. 2. Due to the high absorption characteristics of high-frequency 6G channels, we modify the SV channel model to have a single bounce reflection from scatterer to receiver, as detailed in the Appendix. We consider beamforming only in the azimuth direction and assume circular shapes for the target to aid in the analysis. The proposed methods can be easily extended to beamforming in both azimuth and elevation with arbitrarily shaped targets. Although we consider a single target in the simulations, the AI pipeline can be trained with data from a much larger input domain space having many targets of interest at various positions for multi-target sensing. We configure the simulator as shown in Table I. In terms of performance metrics, we first consider the accuracy score P, i.e., the fraction of correctly classified test realizations, which is estimated empirically during testing. Using P, we define resolution as the size of the target that can be sensed with an accuracy score higher than a threshold (i.e., P > γ) and coverage as the variation of P at different spatial points for a fixed-size target. Secondly, we consider the cumulative distribution function (CDF) of the positioning error, i.e., F_E(ε), where ε = ‖p − p̂‖, in which ‖·‖ denotes the L2 norm and p̂ ∈ R^2 is the estimate of the true position p.
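The two performance metrics can be computed as in the sketch below, where the detector outputs and position estimates would come from the trained pipeline; the random arrays are placeholders only.

```python
import numpy as np

def accuracy_score(pred, truth):
    """Empirical accuracy P: fraction of test realizations classified correctly."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    return np.mean(pred == truth)

def position_error_cdf(p_true, p_est):
    """Return sorted L2 position errors and their empirical CDF values F_E(eps)."""
    eps = np.linalg.norm(np.asarray(p_true) - np.asarray(p_est), axis=1)
    eps_sorted = np.sort(eps)
    cdf = np.arange(1, len(eps) + 1) / len(eps)
    return eps_sorted, cdf

# Placeholder example: mean and 90-percentile position error over 700 test drops
rng = np.random.default_rng(2)
errs, cdf = position_error_cdf(rng.random((700, 2)) * 5, rng.random((700, 2)) * 5)
mu_eps, delta90_eps = errs.mean(), np.percentile(errs, 90)
```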
B. Results and Discussion
We now proceed to evaluate the impact of the size of the target and the spatial coverage for different numbers of receivers. Then we evaluate the target positioning performance and compare it to a model-based baseline.
1) Resolution Analysis: We analyzed the size of the target required to create sufficient CSI perturbation to be detected by the AI agent. First, we generate 2000 CSI realizations for each hypothesis and size by placing the object at 1000 random positions within a 25 m^2 indoor area. A 70/30 split is done to train and validate the target detection part of the CsiSenseNet AI pipeline. Then we drop objects of varying size, with diameter σ from 0.2 to 1.2 m, at 700 random positions drawn from the 25 m^2 area to assess the accuracy of the AI prediction. The accuracy score P of the AI detector for the representative deployment scenarios in Fig. 2 is shown in Fig. 4. The performance of the detector improves with L, and for a given deployment, larger targets can be sensed with higher accuracy. A passive object such as a human, with an approximate width of about 0.8 m, can be detected with more than 90 percent accuracy with L > 2.
2) Coverage Analysis: The separation between the distributions of the CSI matrix under the null hypothesis (without a target) and the alternate hypothesis (with a target) depends on the position of the target. Targets closer to the transmitter or receiver nodes create more CSI perturbation under the alternate hypothesis than targets that are farther away. For example, objects in directions near the endfire of an array are less likely to be detected than objects near the broadside. Therefore, we can define the coverage of the sensing method for a given target size in terms of the probability of target detection at various positions. To assess the coverage, we train the target detection part of CsiSenseNet by generating 2000 CSI realizations for both hypotheses, placing the fixed-size object at the center of each quantized bin of 0.0625 m^2 within the 25 m^2 indoor area. The performance of the trained agent is then evaluated using 700 new CSI realizations for both hypotheses at each of the 0.0625 m^2 quantized bins of the 25 m^2 indoor area for the representative deployment scenarios in Fig. 2. The coverage of the proposed sensing method is shown in Fig. 5 for the representative deployment scenarios. The coverage is good at positions closer to the transmit and receive antennas, and also along the beam directions. Comparing Fig. 5(a) and Fig. 5(b), notice that the coverage depends on the target size, with larger targets having better coverage.
3) Position Estimation with CsiSenseNet:
In this section, we present the results for the proposed position estimation of the target using CSI gathered from multiple links and compare its performance with the baseline method. For a fixed target size, 2000 CSI realizations at each quantized bin position of resolution 0.0625 m² are captured, similar to Section IV-B2, and then used to train the position estimation part of CsiSenseNet. We then drop objects of various sizes (σ) at 1000 random positions drawn from the 25 m² area to assess the accuracy of the position estimation. Fig. 6(a) and Fig. 6(b) show the performance in terms of the mean position error µ_ε, the 90-percentile error ∆90_ε, and the CDF of the position error F_E(ε) for different deployment scenarios and target sizes. From Fig. 6(a) and Fig. 6(b), a larger target size and a greater number of links in the deployment reduce the position uncertainty.
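The summary statistics reported in Fig. 6 follow directly from the per-drop errors, as in this small illustrative helper (not part of the original pipeline).

```python
import numpy as np

def position_error_stats(p_true, p_est):
    """Mean error mu_eps and 90-percentile error Delta90_eps from the L2 position errors."""
    eps = np.linalg.norm(np.asarray(p_est) - np.asarray(p_true), axis=1)
    return float(eps.mean()), float(np.percentile(eps, 90))
```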
4) Position Estimation with Baseline Method:
The performance of the baseline method described in Section III-C is shown in Fig. 6(c). The red plot in Fig. 6(c) is the performance of the baseline algorithm using 7 non-overlapping beams to scan the space (−π/2, +π/2) as shown in Fig. 2. The high position uncertainty of this method is due to: (a) the representative deployment scenarios use N_r = 8 antennas at the receiver, which yields an approximate angular resolution of 30 degrees; this is rather coarse and creates greater uncertainty when triangulating the angles into a position; (b) the beams do not overlap, which creates large spatial regions without coverage; (c) due to the geometry of the receiver placements and the target position, the target can block multiple adjacent beams, leading to angular uncertainty and inferior position estimates.
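For reference, the core of an angle-based baseline of this kind is a triangulation step: each receiver reports a bearing toward the strongest perturbed beam, and the position is taken where the bearings intersect. The sketch below is a hypothetical illustration of that step, not the exact estimator used here.

```python
import numpy as np

def triangulate(rx1, ang1, rx2, ang2):
    """Least-squares intersection of two bearings (radians, measured from the x-axis)
    observed at receiver positions rx1 and rx2; returns the estimated 2-D position."""
    d1 = np.array([np.cos(ang1), np.sin(ang1)])
    d2 = np.array([np.cos(ang2), np.sin(ang2)])
    # Solve rx1 + t1*d1 = rx2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack([d1, -d2])
    b = np.asarray(rx2, dtype=float) - np.asarray(rx1, dtype=float)
    (t1, _t2), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.asarray(rx1, dtype=float) + t1 * d1

# Example: target at (2, 3) seen from receivers at (0, 0) and (5, 0)
p_hat = triangulate((0, 0), np.arctan2(3, 2), (5, 0), np.arctan2(3, -3))
```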
To address issue (b) above, we created overlapping beams of 30-degree beamwidth with a stride of one degree spanning (−π/2, +π/2), resulting in 180 overlapping beams. The performance of the angle-based estimator with this modification is shown in the blue plot of Fig. 6(c). This modification to the baseline algorithm reduced the average position error µ_ε from 3.30 [m] to 2.86 [m]. CsiSenseNet outperforms the angle-based methods because the AI agent learns the spatial correlation of the perturbations in the higher-dimensional CSI space across multiple receivers, rather than collapsing each link to a single angle before estimating the position.
V. CONCLUSIONS
Passive sensing of targets using ubiquitous communication infrastructure provides several benefits without compromising privacy and security in the way camera-aided sensing systems do. This paper describes a multistatic indoor sensing system which exploits the perturbation patterns that inserted objects create in the CSI of multiple links for detection and position estimation. A shallow CNN-based AI network called CsiSenseNet is developed to exploit these patterns for target sensing. Results show that larger objects are detected with higher accuracy. The performance of the proposed method in estimating the sensed target's position improves with object size and outperforms angle-based methods. Objects inserted close to the transmitter or a receiver, or along scanned beam directions, are detected more easily than objects at other positions. Increasing the number of links improves detection and position accuracy. Based on the results, the proposed methods can be used for sensing human-sized objects with good accuracy using indoor cellular deployments.
APPENDIX: MODIFIED SV MODEL
In order to modify the SV model to have only single-bounce reflections, we discretize the possible angles of departure into a set of angles pointing to a fine grid of points with 0.0625 m² resolution. The stochastically generated angle of departure ψ_{l,u,v} from the SV model is quantized to the closest discretized angle corresponding to a grid point, as shown in Fig. 7. Using the locations of the receiver and the grid point, the angle of arrival φ_{l,u,v} at the receiver is computed from geometry.
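A sketch of this quantization step and of the implied angle of arrival is given below; the 2-D geometry and the convention of measuring angles from the x-axis are assumptions for illustration, not the simulator's exact implementation.

```python
import numpy as np

def quantize_departure(tx_pos, psi, grid_points):
    """Snap a stochastic departure angle psi (radians) to the departure angle of the
    nearest grid point, returning that angle and the grid point (the bounce location)."""
    tx_pos = np.asarray(tx_pos, dtype=float)
    grid_points = np.asarray(grid_points, dtype=float)
    grid_angles = np.arctan2(grid_points[:, 1] - tx_pos[1], grid_points[:, 0] - tx_pos[0])
    diff = np.angle(np.exp(1j * (grid_angles - psi)))   # wrap-aware angular distance
    k = int(np.argmin(np.abs(diff)))
    return grid_angles[k], grid_points[k]

def arrival_angle(rx_pos, bounce_point):
    """Angle of arrival at the receiver implied by the single-bounce geometry."""
    rx_pos = np.asarray(rx_pos, dtype=float)
    bounce_point = np.asarray(bounce_point, dtype=float)
    return float(np.arctan2(bounce_point[1] - rx_pos[1], bounce_point[0] - rx_pos[0]))
```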
ACKNOWLEDGMENTS
This work has been partly funded by the European Commission through the H2020 project Hexa-X (Grant Agreement no. 101015956). The authors gratefully acknowledge feedback and advice from Robert Baldemair.
The Value of Source Data Verification in a Cancer Clinical Trial
Background Source data verification (SDV) is a resource intensive method of quality assurance frequently used in clinical trials. There is no empirical evidence to suggest that SDV would impact on comparative treatment effect results from a clinical trial. Methods Data discrepancies and comparative treatment effects obtained following 100% SDV were compared to those based on data without SDV. Overall survival (OS) and Progression-free survival (PFS) were compared using Kaplan-Meier curves, log-rank tests and Cox models. Tumour response classifications and comparative treatment Odds Ratios (ORs) for the outcome objective response rate, and number of Serious Adverse Events (SAEs) were compared. OS estimates based on SDV data were compared against estimates obtained from centrally monitored data. Findings Data discrepancies were identified between different monitoring procedures for the majority of variables examined, with some variation in discrepancy rates. There were no systematic patterns to discrepancies and their impact was negligible on OS, the primary outcome of the trial (HR (95% CI): 1.18(0.99 to 1.41), p = 0.064 with 100% SDV; 1.18(0.99 to 1.42), p = 0.068 without SDV; 1.18(0.99 to 1.40), p = 0.073 with central monitoring). Results were similar for PFS. More extreme discrepancies were found for the subjective outcome overall objective response (OR (95% CI): 1.67(1.04 to 2.68), p = 0.03 with 100% SDV; 2.45(1.49 to 4.04), p = 0.0003 without any SDV) which was mostly due to differing CT scans. Interpretation Quality assurance methods used in clinical trials should be informed by empirical evidence. In this empirical comparison, SDV was expensive and identified random errors that made little impact on results and clinical conclusions of the trial. Central monitoring using an external data source was a more efficient approach for the primary outcome of OS. For the subjective outcome objective response, an independent blinded review committee and tracking system to monitor missing scan data could be more efficient than SDV.
Introduction
The International Conference on Harmonisation (ICH) Good Clinical Practice (GCP) guideline [1] defines trial monitoring as ''the act of overseeing the progress of a clinical trial and of ensuring that it is conducted, recorded and reported in accordance with the protocol, Standard Operating Procedures, GCP, and the applicable regulatory requirement(s)''. The primary aim of trial monitoring should be to assure patient safety and data quality. Whilst several approaches exist for monitoring clinical trials, they are generally classified under the headings of on-site monitoring and central monitoring.
On-site monitoring includes a range of different procedures for monitoring, each with a common theme that a member of the clinical trial team is required to visit one or more of the participating sites at one or more time points during the trial.
Procedures performed during on-site visits are numerous and may include checking drug accountability, discussing recruitment and retention figures for the site, and review of screening logs and consent forms. One of the most common procedures undertaken during on-site monitoring is Source Data Verification (SDV), a procedure that is used to check that data recorded within the trial Case Report Form (CRF) match the primary source data which are contained in the relevant source document such as the medical record of the patient.
Central monitoring also includes a range of different procedures but is characterised by using centralised procedures instead of site visits. Procedures may include exploring accumulating data centrally to check for consistency over time and across different data items, statistical techniques to identify unusual data patterns within and across participating sites, and external validation of data items, such as through birth and death registries.
ICH GCP is not specific about the format of monitoring in clinical trials but suggests that ''in general there is a need for onsite monitoring, before, during and after the trial and the use of central monitoring in conjunction with other procedures may be justified in exceptional circumstances'' [1]. Unfortunately the guideline is frequently misinterpreted and clinical trials often routinely include on-site monitoring which can be inefficient, unnecessary and can result in the already limited resources being directed at quality assurance procedures that may be unimportant. Due to a growing concern about the effectiveness and efficiency of monitoring practices, and a lack of empirical evidence to determine which practices best achieve the goals of trial monitoring stated in ICH, the Clinical Trials Transformation Initiative (CTTI) project on effective and efficient monitoring [2] was initiated to identify best practices and provide sensible criteria to help sponsors select the most appropriate monitoring methods for a clinical trial. One recent output from this project is a survey of current practice which highlighted the varied approaches to monitoring and a lack of sufficient empirical evidence to determine which on-site monitoring practices lead to improved patient safety and data quality [3]. The CTTI project recommend building quality in to the trial design and focussing oversight on errors that are most likely to adversely affect trial quality, recognising that data elements vary in their impact on the safety of participants or on the reliability of trial results such that a single-minded focus on checking/ensuring accuracy of every data point is misguided [2].
Bakobaki et al [4] searched the literature recently and did not identify any trials that formally evaluated on-site monitoring techniques or directly compared multiple monitoring strategies against each other. They subsequently undertook a retrospective review of a selected sample of on-site monitoring reports from a large HIV prevention trial, concluding that 95% of the on-site monitoring findings reviewed could be identified using central monitoring strategies. Furthermore, Buyse et al [5], Baigent et al [6] and others have proposed that central monitoring is a more efficient approach for identification of fraud and anomalies of the data that are most likely to impact on results. Baigent et al [6] highlight an example from the Second European Stroke Prevention Study, in which fabricated data on 438 patients from one site was first detected by central monitoring methods which on-site monitoring had failed to identify.
The financial, human, and time resource required for on-site monitoring is greater than for central monitoring and this is likely to be a significant factor in the choice of approach used by commercial or non-commercial clinical trials. Results from a survey of Swedish pharmaceutical companies in 2005 suggested that fifty percent of the cost of GCP-related activities in phase III trials was due to SDV, with an estimated actual cost of SDV for a phase III program estimated as 90 million US Dollars [7]. In a different study, on-site monitoring was estimated to represent approximately 25 to 30% of costs in phase III cardiovascular clinical trials [8]. In 2000, Favalli et al. [9] evaluated the average cost per site visit in an oncology trial to be 1500 US Dollars not including the salaries lost through time taken from regular duties. Due to the high costs associated with site visits and SDV, and uncertainty about the effectiveness of these approaches, there is an urgent need to investigate the added value of on-site monitoring in terms of improving data quality and patient safety. The choice of monitoring practice should, as far as possible, be based on empirical evidence which is currently lacking in this important area.
Data are available from a parallel, open-label, multicentre (United Kingdom), phase III, superiority RCT comparing control with experimental treatments in patients with advanced cancer. The trial was designed and initiated before the introduction of the United Kingdom (UK) Clinical Regulations in May 2004 (Statutory Instrument 2004 Number 1031) and originally included a degree of central monitoring for missing and unusual data identified at each interim analysis and a planned blinded review of all response data. During the stages of final data collection, towards the end of the study, the Trial Management Group agreed to undertake 100% SDV through on-site visits to strengthen conclusions from the trial by assuring data quality. This paper describes an empirical comparison of the 100% source verified data against the corresponding unverified data and explores the value of SDV for this trial. We also explore the value of centralised procedures in this setting.
Methods
Between May 2002 and January 2005 the trial recruited 533 patients from 75 secondary and tertiary care centres across the UK, all of which had research experience but variable in amount. Patient follow-up and death data were collected and entered on the trial database up until March 2006. During the conduct of the trial, all data were collected on paper CRFs and entered onto a central database. The prospectively planned quality assurance activities undertaken throughout included a programmed database designed to minimise input errors (e.g. drop down list rather than manual input, date checks in relation to dates of entry and treatments), planned interim analyses (3 interim analyses undertaken) which included statistical data cleaning of key variables, and blinded review of response data. This data set will be referred to as the original data.
After the trial had closed to recruitment but with some patients in active follow-up, a retrospective monitoring plan was developed to include 100% SDV of all important identified data items for all patients to verify that data in the CRF were consistent, complete and correct when compared with the source such as patient's hospital notes. A small team of experienced monitors were employed to undertake the planned independent SDV activities in parallel to the trial itself between 2006 and 2007. All source verified data were re-entered onto a new database, independent to the original trial database, with manual and computer generated verification checks. This data set will be referred to as the SDV data.
Since the SDV was undertaken towards the end of the trial, some of the events observed in the SDV data were due to the longer patient follow-up compared with the original data. Therefore, to increase comparability and ensure as far as possible that any differences observed are due to SDV, a common 'censoring date', chosen as the last date of death recorded on the original database (8/3/06), was used across both data sets. Follow-up data from the visit prior to this censoring date were used where relevant in calculations, and data recorded after this censoring date were ignored for the purpose of this empirical comparison. A sensitivity analysis ignoring this censoring date was also explored for the primary outcome.
In order to explore the value of SDV in this setting we assessed whether SDV uncovered data errors related to critical items, but more importantly whether these data errors affected the main trial results and related conclusions. Therefore, both data sets were compared in terms of baseline data, primary outcome (Overall Survival (OS)), and secondary outcomes (Progression Free Survival (PFS); Objective response; Serious Adverse Events (SAEs)) of the trial. The trial did also collect data for two patient reported outcomes but SDV has limited value for these outcomes as the source data are the original patient completed questionnaires which were routinely returned to the Clinical Trials Unit (CTU).
The following were calculated for each patient using both data sets: (i) time from randomisation to death from any cause, or last follow-up for those patients still alive at the common censoring date; (ii) time from randomisation to progression or death from any cause, or last follow-up for those patients still alive and progression free at the common censoring date; (iii) response assessed in accordance with the World Health Organization (WHO) criteria for disease response (Response Evaluation Criteria in Solid Tumors (RECIST)) Guidelines [10] and reported as best achieved response according to those criteria. The number of discrepancies identified is summarised for clinically relevant baseline characteristics that are typically reported in randomised controlled trials (RCTs) in this particular clinical setting. For the time-to-event outcomes (OS and PFS), Kaplan-Meier survival curves, log-rank analyses, and unadjusted and adjusted (adjusted for stratification factors at randomisation for OS only) Hazard Ratio (HR) estimates with 95% confidence intervals (CI) obtained from Cox regression models were compared across data sets descriptively. For each dataset, overall response was compared across treatment groups using a chi-square test and by estimating the Odds Ratio (OR) with its 95% CI. A simple comparison of the number of SAEs recorded per patient in each dataset was undertaken. Differences in recording methods between datasets made more in-depth comparisons of SAEs difficult.
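As an illustration of the response-rate comparison described above, the sketch below computes an odds ratio with a Wald 95% confidence interval from a 2x2 table; the function and the example counts are placeholders, not the trial's data.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a/b = responders/non-responders on experimental, c/d = the same on control."""
    oratio = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = np.exp(np.log(oratio) - z * se_log)
    hi = np.exp(np.log(oratio) + z * se_log)
    return oratio, (lo, hi)

# Placeholder counts only, to illustrate the calculation.
print(odds_ratio_ci(60, 200, 30, 245))
```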
Central Monitoring for Overall Survival
The Office for National Statistics (ONS) collects registration of birth and death data which can be made available for research studies through flagging, provided that the appropriate ethics approvals are in place. The use of independently collected birth and death data is a form of central monitoring and is useful to confirm the existence, date and cause of death for clinical trial participants.
ONS flagging was not prospectively planned for this study and so retrospective collection was necessary. A section 60 application was submitted to the patient information advisory group (PIAG) to gain approval to collect patient identifiers from participating sites. The multicentre research ethics committee (MREC) were notified, and a substantial amendment submitted to the Medicines and Healthcare products Regulatory Agency (MHRA). Following these approvals, the NHS number, name, and date of birth were obtained from participating sites and used for matching by the ONS. The paper copies of ONS death data were then entered onto a database by the trial team and verified through double data entry.
The ONS data provide a further source for the empirical comparison of the primary outcome OS. Time from randomisation to ONS date of death, or last follow-up for those patients still alive at the common censoring date, were calculated and compared to the original and SDV data using the methods described above for OS.
Results
Data for all 533 randomised participants were verified against source data. Discrepancies in baseline characteristics were detected between the original and SDV data ( Table 1). The percentage of patients with a discrepancy for each characteristic was generally low and equally distributed across treatment groups ( Table 1) and participating sites (data not shown). In the original data, 4 patients were identified as ineligible following randomisation. Three patients had a pre-randomisation CT scan outside the permitted 30 day interval and one patient had a different cancer type to that listed as eligible in the protocol. However, the process of SDV failed to identify these four patients as ineligible which led to the discrepancy in Table 1. Other than this, there were no systematic patterns to the direction of discrepancies.
Overall Survival
(i) Comparison of SDV against original data. A total of 13 (2.4%) participants had a discrepancy in date of death between the SDV and original data. The proportion, magnitude, direction and type of discrepancy in dates were similar, with no systematic pattern across treatment groups (Table 2) or sites (data not shown), and suggest that transcription errors were the most likely explanation in the majority of cases. For a further 29 (5.4%) participants, the SDV process identified a date of death that had not been recorded in the original data, which raised a discrepancy. The proportion of these discrepancies was also equally distributed across treatment groups (15 (5.6%) in the control group and 14 (5.2%) in the experimental group). All additional deaths identified through SDV occurred after the last date of follow-up recorded in the original dataset and were mostly deaths that occurred towards the end of the trial. The Kaplan-Meier survival curve for overall survival (Figure 1) shows almost identical curves for the original and SDV data, with a negligible effect on the treatment effectiveness analysis regardless of whether or not the analysis was adjusted for stage and performance status (Table 3). Results were almost identical in a sensitivity analysis using all available SDV data regardless of the censoring date used in the empirical comparison.
(ii) Comparison of central monitoring (ONS data) versus SDV. There were 53 (9.9%) discrepancies in date of death between ONS and SDV data. At the time of final analysis of the trial data ONS were unable to confirm a date of death for 5 of these patients. The SDV and original data agreed in four of these cases and date of death was also subsequently confirmed by site staff. However, for one of these patients that could not be confirmed as dead by ONS, original data recorded the patient as still alive whilst SDV data recorded this patient's status as 'still alive' but also recorded a date of death (01/04/2005). Although likely that this patient had died by the time of analysis and thus should have been included in the ONS data records, we included the patient as a censored observation in these analyses. For one further patient, ONS identified a date of death which had not been recorded in either the SDV or the original data.
The Kaplan Meier survival curve ( Figure 1) and unadjusted treatment effectiveness analysis (Table 3) using centrally monitored data are almost identical to the SDV and original data analyses.
Progression Free Survival
For the comparison between SDV and original data, there were a total of 132 patients (24.8%) with a discrepancy in the derived PFS time (median discrepancy 0.1 months, lower quartile −1.8 months, upper quartile 1.5 months, minimum −13.3 months, maximum 12.7 months). The percentage of discrepant observations is similar across treatment groups, with no systematic pattern to the direction or magnitude of discrepancy. The Kaplan-Meier survival curves for PFS (Figure 2) for the SDV and original data are again almost identical, with a negligible effect on the treatment effectiveness analysis (Table 3).
Response
RECIST response classifications are based on CT scan results undertaken at specified time points during the trial to assess change in tumour size from baseline. Across both the SDV and original data there were a total of 620 CT scans, but only 460 (74.2%) had been assigned a RECIST classification in both data sets. For these 460 scans, there was agreement in RECIST classification for 398 (86.5%) but disagreement for 62 (13.5%). The majority of these disagreements (58 (93.5%)) were due to a change in classification of one level up or down, e.g. PD to SD, with the remaining 4 scans classified as PR in the original data set but classified as PD in the SDV data. A total of 160 scans were not common to both datasets; 125 scans were identified through SDV and 35 scans were missed during this process but had been recorded in the original data (Table 4). Information is not available to explore the reason for these discrepancies. The SDV process is likely to have identified additional scans that were undertaken outside of the trial protocol's 12-week schedule.
To explore how these scan level discrepancies translate to the patient level overall response analysis, the best achieved response across all scans was identified for each patient within each dataset. These response classifications (patient level rather than scan level) and treatment effectiveness analyses for this outcome were compared across treatment groups and dataset (Table 5). Although both datasets suggest a significantly better response rate for experimental treatment compared to control (lower part of Table 5), the original data provide a more extreme result (odds ratio 2.45) in favour of experimental treatment than the SDV data (odds ratio 1.67). This could suggest a potential bias in the original dataset, as clinicians interpreting scan data were not blind to treatment allocation and may therefore have been more likely to favour a better response classification for the experimental treatment.
Serious Adverse Events
Overall there were 53 patients (9.9%) with a discrepancy in the number of SAEs between datasets; 20 patients had 29 additional SAEs recorded in the original dataset and 33 patients had 36 additional SAEs recorded in the SDV data (Table 6). There were more discrepancies between datasets for patients on control (33 patients with 40 events) compared to experimental (20 patients with 25 events). This imbalance is not substantial but could suggest bias in the reporting of SAEs.
Estimated Costs
It is difficult to obtain the full costs of alternative monitoring approaches for a retrospective analysis such as this. In this particular example, the main additional financial costs of SDV would have been for monitors' salaries and expenses incurred during the monitoring visits. There were 533 patients recruited from across 75 sites, with an average of 7.1 patients per site. Assuming it would take an average of 2 hours per patient to undertake a complete SDV for the overall survival primary outcome, the process would have taken an estimated 1066 hours, equivalent to an estimated 30.5 working weeks (7 hours per day, 5 days per week). Assuming an average salary for a clinical trial monitor of £26,000 per annum (£31,306 annual gross cost), and an average of £100 per week in expenses, a conservative estimate of the cost of SDV for the primary outcome is £21,412. The cost of the alternative central monitoring process was estimated to be approximately £2,023, comprising ONS costs (approximately £500) and data manager costs (£1,523) based on 3 working weeks at a salary of £22,000 per annum (£26,406 annual gross cost) to apply for section 60 permission, obtain the patient identifiers from sites, submit to ONS (name, date of birth, and NHS number where available; the minimum data were name and date of birth), and computerise and validate dates of death. Neither of these estimates has accounted for the time and financial resources required at each site, which might reasonably be expected to be greater for the SDV process.
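The cost arithmetic above can be reproduced as follows, using the figures exactly as quoted in the text (the number of working weeks is rounded to 30.5, matching the estimate above).

```python
patients, hours_per_patient = 533, 2
hours = patients * hours_per_patient                 # 1066 hours of SDV
weeks = round(hours / (7 * 5), 1)                    # 30.5 working weeks (7 h/day, 5 days/week)
monitor_weekly = 31_306 / 52                         # annual gross cost spread over 52 weeks
sdv_cost = weeks * (monitor_weekly + 100)            # salary plus 100 GBP/week expenses

ons_fee = 500
dm_cost = 3 * (26_406 / 52)                          # 3 weeks of data manager time
central_cost = ons_fee + dm_cost

print(round(sdv_cost), round(central_cost), round(sdv_cost - central_cost))
# approximately 21412, 2023 and 19389 GBP, as quoted in the text
```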
Discussion
Results have been presented for an empirical comparison of SDV data against original trial data, and also against centrally monitored data for the primary outcome overall survival. The data used for this empirical comparison are quite unique as the comparison relates to a non-commercial clinical trial of an investigational medicinal product (CTIMP) for which 100% SDV was performed independently of the main trial. This trial was initiated prior to EU clinical trials directives and the current UK Clinical Trial requirements which now require GCP training of trial staff, Clinical Trial Authorisation and MHRA inspections of the trial documents as well as site files. Cancer clinical trials in the UK have also benefited from huge changes in research culture with the instigation of the National Cancer Research Network. The current culture of research governance and regulations which aim to safeguard the quality of clinical trials and safety of patients would make this particular empirical comparison difficult to repeat in future.
The comparison identified discrepancies between monitoring procedures for the majority of variables examined, with some variation in discrepancy rates. The potential for bias is greatest when errors are non-random with respect to treatment allocation [6]. In this example, the identified discrepancies for the baseline variables and the data required to construct the outcomes OS and PFS did not differ systematically across treatment groups or across sites, suggesting that random transcription errors were the most likely explanation for the majority of variables. For the two time-to-event outcomes, the effect of these discrepancies on overall clinical results and conclusions was negligible. An important, if not surprising, finding of this work is that SDV does not necessarily provide error-free data. In this example, SDV failed to identify four patients that were classified as ineligible in the original data. Due to the age and retrospective nature of these data, we can only speculate that this was due to the monitors' lack of clinical knowledge. In reality, SDV is an iterative process and so it is possible that this discrepancy in eligible patients would have eventually been identified and resolved by the trial team. However, discrepancies in dates of death were also identified when SDV data were compared with centrally monitored data obtained from ONS. It is unlikely that the ONS data would contain errors, but even if they did, these would be expected to occur completely randomly and be unrelated to the trial, outcome or treatment, and would therefore provide unbiased data for estimation of the treatment effect. Together with the additional time and expense of SDV, estimated to be around £19,389, these findings suggest that the central monitoring procedure of using ONS data is the optimum approach for assuring the quality of primary outcome data for this trial.
The analysis of RECIST classification data highlighted important issues both in terms of the identification of additional scans and in terms of the interpretation of scan data. Discrepancies were most evident, and also had the most impact on results, for this subjective outcome. The number of additional scans identified through SDV was similarly distributed across treatment groups; these scans most likely reflect continued clinical monitoring of patients at sites that was not requested as part of the trial protocol, or scans that were not always fed back to the trials unit. Furthermore, given that there were discrepancies identified between SDV and the ONS date of death, and that SDV failed to identify 4 ineligible patients and 35 scans that were present in the original dataset, we cannot be certain that the SDV data are necessarily accurate for this outcome, particularly due to its subjective nature. It is possible that the monitor assessing the scan data may not have had the full medical information or knowledge required to make an accurate clinical assessment, a concern that has been raised in a previous study [11], as monitors may be less experienced and knowledgeable in the clinical area compared to investigators at sites. Alternatively, the clinicians/trial researchers who made the original assessment may have been biased in some way because their assessment was unblinded to the patient's treatment allocation. However, a second review was undertaken by a blinded clinician, and the data we explored showed a similar distribution across treatment groups of the percentage of patients with an improved response classification between datasets. For this subjective outcome, a robust tracking system for monitoring receipt of expected scans during the trial, together with an independent endpoint review committee blinded to treatment allocation, may have been the optimal method of quality assurance.
Monitoring approaches were difficult to compare in relation to SAE data due to the variation in recording and the necessary use of open text fields. However, discrepancies in the number of SAEs per patient were identified between datasets. This is an important finding, as trial investigators are required to report SAEs to protect patient safety, and a total of 65 additional SAEs were identified either in the original data that were not in the SDV data (29 additional events for 20 patients), or in the SDV data that were not in the original data (36 additional SAEs for 33 patients). Due to the retrospective nature of this comparison, it was not possible to explore the reason for these discrepancies. It is worth noting that fewer discrepancies were identified in the experimental group, possibly reflecting clinicians' greater vigilance in identifying and reporting SAEs for an experimental treatment. Further work is required to assess the value of SDV for identifying unreported SAEs. However, it is unlikely that 100% SDV across all patients would be required. Alternative, risk-proportionate strategies, perhaps focussed on less experienced sites or those with differing SAE reporting characteristics compared to other sites, with provision of regular and clear training, may be more efficient. SDV is just one of the procedures undertaken during on-site monitoring and the results presented here should be viewed with this in mind. There are potentially useful on-site procedures that have not been explored within this empirical comparison. For example, the PRIME process [12] used observation by peer reviewers to improve protocol adherence and train site staff, which increased trial performance and consistency. As further empirical research is undertaken, decisions regarding the optimal use of resources during on-site visits will more likely be evidence-based and risk-proportionate. Baigent et al [6] suggest that resources used for on-site monitoring could be redirected more usefully to increase sample size, a strategy that may have been particularly useful in this trial, which was originally designed to have 80% power to detect a difference as statistically significant at the 5% significance level. Of course, this strategy may not be appropriate in all trial settings, and the ethical implications and potential added costs of recruiting additional patients would need to be considered thoroughly and balanced against the potential gains.
Regulatory agencies have recognised the need for clinical trial oversight approaches that appropriately account for the differing levels of risk associated with each specific trial. The Medicines and Healthcare products Regulatory Agency (MHRA) recommend a risk-proportionate approach based on work undertaken by the MRC/DH/MHRA Joint Project on Risk-adapted Approaches to the Management of Clinical Trials of Investigational Medicinal Products [13]. The US Food and Drug Administration (FDA) are currently developing guidance to assist sponsors of clinical investigations in developing risk-based monitoring strategies and plans for investigational studies of medical products. The CTTI project on Effective and Efficient Monitoring [2] have recently issued recommendations which include (i) focusing on areas of highest risk for generating errors that matter, (ii) prospectively measuring error rates of important parameters, and (iii) tailoring the monitoring approach (e.g., site visits, central, statistical) to the trial design and key quality objectives. These recent developments are important advances in clinical trial monitoring research and reaffirm the need to move away from one-size-fits-all, resource-intensive and inefficient approaches to clinical trial monitoring.
To our knowledge the comparison presented in this paper is the first empirical comparison that has explored the impact of monitoring method on clinical outcomes and trial conclusions. A randomised comparison of on-site visits versus no on-site visits has been attempted and reported in the literature [11]. However, this trial was terminated early and results could only be used to evaluate the impact of on-site initiation visits on patient recruitment, patients' follow-up time, quantity and quality of data submitted to the trial coordinating office, none of which were found to differ between monitoring approaches. The study could not evaluate the impact of repeated on-site visits on clinical outcomes. Two further relevant studies are underway. The ADAMON project [14] is a German led cluster randomised study involving twelve clinical trials that randomise sites within each trial to a risk-adapted approach, or to an intensive monitoring strategy with frequent visits and 100% source data verification. The OPTIMON project [15] is a French led initiative comparing intensive monitoring that includes 100% SDV against an optimized risk-based monitoring approach. Results from both studies are expected in the next few years and will contribute important information, along with results from our study, such that future recommendations regarding monitoring practices may be more appropriately evidence-based.
Conclusions
The value of resource-intensive source data verification needs to be established. In this example from a cancer trial, the process was time-consuming, expensive and not necessarily error-free, and the discrepancies it identified made no impact on the main conclusions of the study. Source data verification did identify additional CT scan data, which did impact upon the analysis of a secondary outcome of overall response. However, further empirical evidence is required to establish its value in this setting, as it is likely that other more efficient methods, such as effective tracking systems for missing scan data and independent blinded review of CT scans, would be sufficient.
In conjunction with a thorough risk-proportionate monitoring system for the trial, one approach, if ethically reasonable, to safeguard against the effect of random errors might be to inflate the target sample size as is often done to account for potential missing outcome data.
Strengths and Weaknesses
The example presented relates to an academic-led, Cancer Research UK-funded trial with a short duration and minimal loss to follow-up, since patients with advanced cancer tend to stay in follow-up. The trial did not include a per-patient payment for entering patients and there was no obvious incentive for fraud. The conclusions drawn may not necessarily apply to other clinical settings or to commercially funded trials, which often attract significant payments to investigators for entering patients. However, as long as the potential for errors that are non-random in relation to treatment allocation is not expected to differ, the results from this empirical comparison should be generalisable to other settings.
As the trial was conducted prior to the 2004 clinical trial regulations, monitoring practice in CTIMPs may have improved and the empirical comparison presented here may therefore represent a worst case scenario which in some respects is a more informative comparison. As the retrospective SDV was undertaken towards the end of the trial during which some patients were still in active follow-up, it is possible that the introduction of this additional trial process may have changed trial conduct. As this empirical comparison was not prospectively planned, insufficient information is available to thoroughly explore and provide explanations for identified discrepancies. Further confirmatory empirical studies of this nature are required.
Analysis of the Protection Performance of Face Guard for Large Mining Height Hydraulic Support
With the increase of mining height, the problem of coal wall spalling in the working face gradually worsens. The hydraulic support and its face guard structure are the key pieces of equipment to restrain coal wall spalling. However, at present, the hydraulic jack is mostly treated as rigid in analyses of the protection mechanism. This simplification cannot effectively reflect the true bearing state of the face guard. In order to improve the accuracy of the analysis, this study treats the face guard jack as a flexible spring and establishes a rigid-flexible coupling analysis model of the face guard mechanism. First, based upon the multibody dynamics software ADAMS®, the multibody numerical model of the face guard of the hydraulic support was established, and the influence of the two kinds of structures on the coal wall disturbance was analyzed and compared. Then, the rigid model was meshed, the hydraulic jacks were made equivalent to a spring system, and the rigid-flexible coupling model was established. By applying loads at different positions of the rigid-flexible model, the load-bearing characteristics and hinge-point force transfer characteristics of the two face guards were analyzed. The results show that the support efficiency of the integral type was higher than that of the split type. In the vertical support attitude, the dynamic disturbance of the coal wall produced by the two kinds of face guards was small. The four-bar linkage effectively improved the ultimate bearing capacity of the integral face guard. The results provide theoretical support for the design and optimization of the face guard.
1. Introduction
Although, in recent years, coal mining and coal consumption have gradually decreased with the growing support for decarbonization, total global coal production still increased by 1.5% in 2019 compared with the previous year. China, as a country with abundant coal but limited oil and gas, accounts for 51.7% of global coal consumption. For a long time to come, coal will remain an irreplaceable key primary energy resource in China. There are a large number of thick coal seams in Western China. At present, there are two main mining methods for thick coal seams, namely, large mining height one-time full-thickness mining and sublevel caving mining [1,2]. In recent years, with the continuous development of mining technology, the coupling support theory of large mining height hydraulic supports and surrounding rock has gradually improved [3,4]. However, due to the continuous increase in mining height, the probability and extent of damage of the overlying strata in the working face have increased rapidly, which has led to severe roof impact loads and rapid deterioration of the coal wall stress state. Therefore, during mining with large mining height, large-scale and deep spalling can still occur [5][6][7]. In addition, the coal wall spalling phenomenon also aggravates the instability of the hydraulic support in the working face (two-leg hydraulic support) [8]. Therefore, the problem of spalling is still a key challenge restricting the mining of thick coal seams.
As the problem of spalling has a great impact on coal production, many scholars around the world have studied coal wall spalling. First, the stability factors of the surrounding rock were analyzed using laboratory experiments and theoretical analysis in some previous studies [9,10]; moreover, based on the Hoek-Brown failure criterion, a prediction model of the mechanical parameters of the surrounding rock was established. In studies of the mechanism of coal wall spalling, some previous works [11][12][13][14] examined the influence of mining height, roof pressure, and other factors on the scope and depth of coal wall spalling, concluding that, with the increase of mining height and roof pressure, the probability of coal wall spalling in the working face also increases. Based on the "wedge" stability theory, other works [15,16] studied the mechanism of coal wall spalling in fully mechanized mining faces with large mining height and pointed out that the physical properties of the coal body, the face guard protection force of the hydraulic support, and the protection efficiency affect the stability of the coal wall. In a previous work [17], a numerical model of the 8102 working face in the Wolonghu mine (China) was established with the help of UDEC; the influence of the coal seam dip angle on the stability of the coal wall was analyzed, and it was pointed out that the fracture mode of the coal wall (tensile or shear fractures) changes with the coal seam dip angle. In another work [18], based on a systematic analysis of the factors affecting the stability of coal and rock in large mining height working faces, triangular slope spalling caused by shear slip was found to be the main failure mode of coal wall spalling with large mining height. By establishing interaction models of the hydraulic support and surrounding rock under different roof structures, a previous study [19] analyzed the factors influencing hydraulic support stability in fully mechanized mining faces; the results showed that coal wall spalling worsens the bearing condition of the support, while the degree of spalling is negatively correlated with the support strength required by the hydraulic support. From the perspective of the hydraulic support, its structural characteristics were studied [20] by establishing a corresponding three-dimensional (3D) model and carrying out numerical simulation analysis; the results showed that the mechanical structure of the hydraulic support has a certain impact on the support. Some previous works [21][22][23][24] established stope analysis models of fully mechanized mining faces using theoretical calculations and the finite element software FLAC 3D. The working resistance and the support force of the face guard were made equivalent to static rigid forces, and the restraining effect of the active support force of the face guard on coal wall spalling was analyzed by varying the working resistance and the horizontal support force of the face guard. The results showed that the support resistance and the support force of the face guard are the two important factors restraining coal wall spalling, and they were taken as the criteria for selecting a reasonable working resistance of the hydraulic support.

In order to simulate and analyze the influence of the hydraulic support and surrounding rock in various coupling states on the bearing characteristics of the hydraulic support face guard and other components, previous works [25][26][27] simulated different coupling states by changing the size and position of the applied external load, and found that changes in the coupling state can damage the hydraulic support components. Among these components, the face guard mechanism is of great significance for maintaining the stability of the coal wall. However, there is little research on this mechanism of the support. The available references assume that the face guard is a rigid body and that the face guard and the coal wall are in an ideal coupling state. In practice, due to the unevenness of the coal wall cut by the shearer, there are many possible contact states between the face guard and the coal wall in actual work. Meanwhile, as the face guard closes against the coal wall, it also exerts a dynamic disturbance force on the coal wall. For these reasons, this study analyzes the performance of the large mining height hydraulic support from two aspects, namely, the dynamic disturbance of the coal wall caused by the action of the face guard and the bearing characteristics of the face guard. The multibody numerical model of the face guard of the support is established using the multibody dynamics software ADAMS, and the influence of the action of the two kinds of structures on the dynamic disturbance of the coal wall is analyzed. The two types of structures are the integral face guard and the split face guard. Based upon the flexible replacement of each structural part of the face guard, the bearing characteristics and the force transmission characteristics of the two kinds of face guards under different coupling states between the face guard and the coal wall are analyzed and compared. The current study is arranged as follows. In the second section, the dynamic disturbance of the coal wall caused by the action of the two kinds of structures is analyzed. In the third section, the rigid-flexible coupling numerical analysis model is established, and the ultimate bearing capacity of the two kinds of face guards is analyzed. The fourth section compares and analyzes the load-bearing characteristics of the face guard mechanism under different coupling states between the face guard and the coal wall. Section 5 summarizes the conclusions.
2. Analysis of the Dynamic Disturbance of the Coal Wall by the Face Guard

2.1. Analysis of the Working Principle of the Face Guard. Through direct contact with the coal wall, as shown in Figure 1, the face guard of the hydraulic support exerts a support force on the free surface of the coal wall. This can effectively delay damage to the coal body and prevent the coal wall from falling. It can also prevent injury caused by coal ejected from the wall, which can also cause structural damage.
At present, there are two kinds of hydraulic support face guard structures suitable for large mining height working faces, as shown in Figure 2. In the first structure, the face guard is directly hinged to the front part of the extensible canopy, so when the extensible canopy extends, the face guard extends with it; this is called the integral face guard structure. In the second kind of hydraulic support, the extensible canopy and the face guard are separate structures: the face guard is installed at the front of the canopy, which means the extensible canopy can move independently; this is called the split face guard structure. The function and composition of the two kinds of face guards are basically the same. However, for the hydraulic support with the integral face guard structure, the primary face guard and the extensible canopy are usually connected through a four-bar linkage, whereas in the split-type structure the primary face guard is connected to the canopy by a simple hinge joint.
2.2. Disturbance Analysis of the Coal Wall Caused by the Action of the Face Guard. After the shearer cuts the coal, the coal wall has to bear the vertical pressure of the overlying roof and the horizontal force of the front coal body, so its stress situation becomes complex. The face guard of the hydraulic support applies a horizontal force to the coal wall, which can effectively prevent the coal body from spalling and changes the stress state of the coal wall. Therefore, after coal cutting, the face guard needs to support the coal wall in time [28]. Whether or not the hydraulic support face guard can fit the coal wall promptly and effectively has an important influence on coal wall protection. Therefore, the efficiency with which the face guard moves from the retracted position to the intended support position is an important index for evaluating its performance.
At the same time, it has been reported that this kind of plate-cracking phenomenon exists in coal walls and roadways [29,30]. However, when the coal wall of the working face is cracked, the support action will cause a dynamic disturbance in the plate-cracking area of the coal wall, which can lead to a plate-cracking spalling disaster. Therefore, the dynamic disturbance characteristic of the face guard is another index with which to evaluate its performance. In this study, the large mining height hydraulic support ZZ 18000/33/72D used in the Jinjitan coal mine (China) was taken as the example, because longitudinal crack failure occurred there during the periodic weighting period. The face-protecting action of the support also aggravated the plate cracking and spalling of the coal wall, as shown in Figure 3.
During the process from the opening of the primary and secondary face guards to their large-area fitting with the coal wall, it is usually the secondary face guard that contacts the coal wall first. At the moment of contact the face guard has a certain speed, and it gradually changes to surface contact as the face guard jack extends. Thus, the opening process of the face guard causes a dynamic disturbance to the coal wall, which is not conducive to its stability. As shown in Figure 4, when the coal wall has been damaged by the tension caused by the mine pressure and longitudinal or transverse through-cracks have formed, the impact of the face guard on the coal wall is likely to cut off the coal wall and cause local spalling. Therefore, by measuring the angular velocity of the face guard during opening, the dynamic disturbance exerted by the face guard on the coal wall can be analyzed.
In order to compare the integral face guard and the split face guard, the time required for the face guard to reach the expected supporting state is determined, and the dynamic disturbance to the coal wall is analyzed. Based on the ADAMS software, the multibody dynamic analysis model of the face guard mechanism of the hydraulic support is established, as shown in Figure 5. Each structural member is treated as a rigid body. The connections between components such as the extensible canopy, the face guard, and the face guard jack are defined as rotating (revolute) pairs. The first level of the face guard jack is defined as a moving (prismatic) pair, while the other levels of the face guard are defined as fixed connections, so they move along with the primary face guard and remain relatively static. The moving pair of the first-level face guard jack is driven, and its extension speed is controlled to be 90 mm/s for both types of structures. Furthermore, the frictional coefficient is set to 0.3. Meanwhile, the canopy, goaf shield, and other parts are fixed to the ground to keep them still. The same jack is used for the two structural forms of the face guard. Specific parameters of the face guard jack at each level are presented in Table 1.
The performance results of the two kinds of face guards are shown in Figure 6. The measurement for both face guards starts when the horizontal angle with the top beam is 10°. With the same constant extension speed of the face guard jack, it takes 6.4 s for the integral face guard to open to 140° and 8.9 s for the split face guard. It can be seen that when either face guard approaches the vertical coal wall, its angular velocity decreases to its lowest value, and the minimum values of the angular velocity are basically the same; however, the integral face guard acts faster.
Therefore, while causing only a small disturbance to the coal wall, the integral face guard can support the coal wall more quickly. Meanwhile, the measured angular velocity of both structures during opening first decreases and then increases, and the minimum values differ only slightly. The angular velocities of the integral and split face guards reach their minima at opening angles of 87° and 94°, respectively.
This means that the dynamic disturbance of the face guard to the coal wall is minimized at these angles. Therefore, during the opening process of either structure, the extensible canopy jack and the face guard jack should be controlled cooperatively so that the face guard meets the coal wall at an opening angle of about 90°.
This is done to minimize the dynamic disturbance of the coal wall by the face guard, reduce the impact force on the coal wall, and avoid the local spalling of the coal wall caused by the face guard cutting it off. However, when the coal wall is convex or concave at a certain angle, the split face guard disturbs the coal wall less, because its angular velocity changes more smoothly, which means that it shows better adaptability.
Analysis of the Bearing Capacity of the Face Guard
During the advancement of the working face, the hydraulic support face guard exerts an active support force on the coal wall by staying close to it, in order to prevent the coal wall from spalling. Meanwhile, the coal wall forms a reaction force of the same magnitude and opposite direction on the face guard. The ultimate bearing capacity of the face guard is analyzed by applying an external load on the face guard, and the support capacities of the different structural forms of the face guards are compared. Due to the different degrees of damage and flatness of the coal wall during the advance of the coal mining machine, the face guard and coal wall are not in the ideal state of complete fit at the working face. Instead, they are mostly in a single-point support state. Therefore, the force of the coal wall on the face guard is simplified as a point load, and the ultimate bearing capacity of the face guard at different positions is obtained by applying the point load to different positions of the face guard. Since the force on the third-level face guard is small and its stress situation is similar to that of the primary and secondary face guards, only the primary and secondary face guards of the two structural forms are analyzed.
Theoretical Analysis of the Bearing Capacity of the Integral Face Guard.
The stress analysis of the integral face guard is shown in Figure 7. The hinge joint of the extensible canopy and the primary face guard is taken as the coordinate origin O, and the coordinate system is established. When the loading position of the external load P is at the primary face guard, the forces are given by the following set of equations.
It is necessary to calculate the value P_1 according to equation (1) when the loading position of the external load is at the primary face guard. When the external load P is at the secondary face guard, the ultimate bearing capacity of the second level face guard jack must also be considered, because the first level and second level face guard jacks are under pressure simultaneously. In this case, it is necessary to add the following equation to the model.
According to the model size parameters, M = 136.43 mm, N = 342.4 mm, and K = 111.95 mm. The loading position of the external load P satisfies 100 mm < y_P < 2100 mm, whereas y_D = −1049.75 mm, and the angle α = 24°. The maximum tension and compression working resistances of the first level face guard jack are 490/264 kN, and the maximum compression working resistance of the second level face guard jack is 300 kN. Therefore, the maximum load-bearing capacity of the primary face guard under compression is 980 kN, and the maximum bearing capacity under tension is −528 kN. Furthermore, the maximum bearing capacity of the secondary face guard under compression is 600 kN. Moreover, the maximum working resistance of the extensible canopy jack is always greater than its bearing capacity. According to the above parameters, the bearing capacity of the integral face guard is analyzed and calculated, and the corresponding results are shown in Figure 8.
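The 980/−528/600 kN capacities follow from doubling the single-jack working resistances, i.e. two jacks acting in parallel at each level (stated explicitly for the split face guard below and assumed here to hold for the integral guard as well, consistent with the doubling). The small sketch below reproduces that arithmetic.

```python
# Jack-limited loads of the integral face guard, reproducing the figures quoted
# above. Two jacks per level acting in parallel is an assumption consistent
# with the doubling from 490/264/300 kN to 980/-528/600 kN.
JACKS_PER_LEVEL = 2

single_jack_resistance_kN = {
    "first level, compression side": 490.0,
    "first level, tension side": 264.0,
    "second level, compression side": 300.0,
}

for name, r in single_jack_resistance_kN.items():
    print(f"{name}: {JACKS_PER_LEVEL * r:.0f} kN total")
```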
Theoretical Analysis of the Bearing Capacity of the Split Face Guard.
The stress analysis of the split face guard structure is shown in Figure 9. Because the primary face guard is connected with the canopy through a simple hinge joint, the stress analysis is relatively simple, and the external load can be obtained using equations (3) and (4). The hinge joint of the primary face guard and the canopy is taken as the coordinate origin O, and the coordinate system is established. The force of the first level face guard jack on point D is F_1, whereas F_2 is the force of the second level face guard jack on point B. In this structure, the primary face guard does not contact the coal wall, and the external load is applied only at the secondary face guard. However, it is still necessary to consider the maximum working resistance of the first level and second level face guard jacks. When the loading position of the external load is at the secondary face guard, only the first level face guard jack is considered (equation (3)).
Considering the second level face guard jack, the following equation is added to the model.
According to the model parameters of the split-type face guard, L_1 = 229.65 mm, L_2 = 239.95 mm, y_A = −615.03 mm, and 800 mm < y_P < 2100 mm. In order to ensure that only structural differences exist between the two types of face guards and to preserve the uniqueness of the control variables, all other variables are kept the same except for the structures of the face guards. Two jacks with a maximum working resistance of 490 kN are used for the primary face guard, while two jacks with a maximum working resistance of 300 kN are used for the secondary face guard. Therefore, F_1 = 980 kN and F_2 = 600 kN. The corresponding results are shown in Figure 10.
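Equations (3) and (4) themselves are not reproduced above, but the reason the admissible load falls off as the loading point moves away from the hinge can be illustrated with a simplified single-hinge moment balance, P·y_P = constant. The sketch below shows only the relative 1/y_P falloff over the stated 800-2100 mm range; it is an illustration under that simplifying assumption, not the paper's actual equations, and absolute capacities require the full force analysis.

```python
# Relative falloff of the admissible point load for a simplified single-hinge
# moment balance about O:  P_max(y_P) * y_P = constant.  Only the trend is
# shown; the paper's equations (3)-(4) are needed for absolute capacities.
Y_P_MM = (800.0, 1200.0, 1600.0, 2100.0)   # loading positions within the stated range
y_ref = Y_P_MM[0]

for y_p in Y_P_MM:
    relative = y_ref / y_p   # P_max(y_p) normalised to the value at y_ref
    print(f"y_P = {y_p:6.0f} mm -> relative admissible load ~ {relative:.2f}")
```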
Numerical Model of the Face Guard Mechanism of the Hydraulic Support.
Based upon the HyperMesh software, the three-dimensional (3D) models of the two kinds of hydraulic supports are imported into ADAMS in the form of an mnf file [31][32][33]. The canopy and the goaf shield of the hydraulic support are defined as rigid bodies. Friction rotary pairs are used to connect the extensible canopy and the face guard, the connecting rod and the face guard, the connecting rod and the extensible canopy, and the two connecting rods. The frictional coefficient is set to 0.3, while the face guard jack is replaced by an equivalent spring. At the same time, the canopy, goaf shield, and other rigid body parts are fixed and connected with the ground. The numerical simulation models of the two kinds of face guards are shown in Figure 11.
The equivalent spring stiffness of the jack under different working conditions can be obtained from equation (5).
where K is the equivalent stiffness coefficient (N/m), A is the effective area of the hydraulic cylinder when transmitting liquid pressure (m²), c is the bulk elastic modulus of the hydraulic fluid (an oil-in-water emulsion), having a value of 1.95 × 10³ MPa, and L is the length of the effective liquid column in the hydraulic cylinder (m). In this model, the opening angle of the face guard is the same as that of the theoretical analysis model, which is 90°. After measuring the position parameters of the first and second level face guard jacks, it is determined that when the jack of the first level face guard is under pressure, the equivalent spring stiffness coefficient is 5.5 × 10⁷ N/m, whereas when the jack of the first level face guard is pulled, the equivalent spring stiffness coefficient becomes 3 × 10⁷ N/m. The equivalent spring stiffness coefficient of the second level face guard jack is 6.1 × 10⁸ N/m. The same spring stiffness coefficients are set for the two kinds of face guards.
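Equation (5) is not reproduced above, but the quantities defined with it (bulk modulus c, effective area A, liquid-column length L, and units of N/m) imply the usual liquid-column stiffness form K = cA/L, which is assumed in the sketch below. The bore and column length used are illustrative placeholders rather than values from the paper; with plausible dimensions they give the same order of magnitude as the stiffness coefficients quoted above.

```python
import math

# Equivalent spring stiffness of a hydraulic jack, assuming the liquid-column
# form K = c * A / L implied by the quantities defined for equation (5).
# The bore and liquid-column length below are illustrative placeholders.
C_BULK_PA = 1.95e3 * 1e6   # bulk modulus of the oil-in-water emulsion: 1.95e3 MPa in Pa

def equivalent_stiffness(bore_m: float, liquid_column_m: float) -> float:
    """Return the assumed equivalent stiffness K = c * A / L in N/m."""
    area = math.pi * bore_m ** 2 / 4.0   # effective pressurised area, m^2
    return C_BULK_PA * area / liquid_column_m

# Hypothetical geometry: 0.14 m bore, 0.6 m effective liquid column.
print(f"K ~ {equivalent_stiffness(0.14, 0.6):.2e} N/m")
```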
Numerical Analysis and Comparison.
Based on the above numerical model, different loads are applied at points spaced 260 mm apart along the vertical centerline of the integral primary and secondary face guards and of the split-type secondary face guard. Then, the bearing capacities of the two kinds of face guards are simulated and analyzed. The load amplitude is estimated using the above theoretical analysis results and corrected according to the response of the face guard jack (the ultimate bearing capacity of the face guard is taken as the loading force at which the face guard jack just begins to overflow). The comparison of the numerical and theoretical results is shown in Figure 12. It can be seen that the error between the numerical and theoretical results is less than 2.3%, which indicates good consistency. From the comparison and analysis of the theoretical calculation and numerical simulation results, the following conclusions can be drawn. When the loading position of the integral face guard is less than 1500 mm from the hinge joint, the maximum bearing capacity can reach 900 kN, which gives it an obvious advantage over the split face guard. When the loading position is more than 1500 mm away from the hinge joint, the bearing capacity of the integral face guard is slightly lower than that of the split face guard, though the difference is small, with both between 70 kN and 150 kN. Besides, due to the structural differences, the split-type primary face guard cannot directly contact the coal wall, which results in a smaller effective support area compared with the integral face guard. Therefore, the bearing capacity and bearing range of the integral face guard are better than those of the split face guard.
Analysis of the Load-Bearing Capacity of the Face Guard under Different Coupling States.
There are many possible coupling states between the face guard of the hydraulic support and the coal wall. For example, due to an uneven coal wall, only one side of the face guard may contact the coal wall while the other side has no contact. In such cases, simply regarding the loading position of the external load as lying on the centerline of the face guard does not conform to the actual situation. Therefore, based on the above numerical model, the load is applied at different positions of the face guard to analyze the bearing capacity under different coupling states. As shown in Figure 13, taking the integral face guard as an example, the centerline of the circular hole at the hinge joint of the primary face guard and the extensible canopy is taken as the X-axis (for the split face guard, the centerline of the round hole at the hinge joint of the face guard and the canopy is taken as the X-axis). Moreover, the centerline perpendicular to the X-axis is taken as the Y-axis for the two kinds of face guards. A cushion block is set at each interval of ΔL = 165 mm in the X-axis direction, and a cushion block is arranged every ΔM = 154 mm in the Y-axis direction. In order to facilitate the comparative analysis between the integral and split-type face guards, only the compression state of the integral first level face guard jack is considered; therefore, the cushion block positions satisfy y_P > 300 mm. The cushion blocks are arranged in this way for the integral primary and secondary face guards and the split secondary face guard to simulate different coupling modes of the coal wall and the face guard. The bearing capacities of the integral and split face guards are shown in Figure 14. According to the numerical results, the maximum bearing capacity of the integral face guard is 900 kN, while its minimum bearing capacity is 47 kN. The maximum bearing capacity of the split face guard is 276 kN, while its minimum value is 82 kN. At the same time, it is not difficult to see that the bearing capacities of the integral and split face guards have the same variation trend. When the loading position is close to the O-point, the bearing capacity of the face guard shows an obvious increase compared with the other positions. When the Y-axis position is the same and X = 0 (the centerline position of the face guard), the bearing capacity is the largest, and the bearing capacity is symmetrical about X = 0 on the left and right sides. When the X-axis position is the same, the Y-axis coordinate value is inversely proportional to the bearing capacity of the face guard.
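The set of coupling states is essentially a grid of loading positions built from the spacings given above; the sketch below enumerates such a grid (ΔL = 165 mm along X, ΔM = 154 mm along Y, y_P > 300 mm). The number of positions per direction is not stated in the text, so the counts used here are placeholders.

```python
# Enumerate cushion-block (loading) positions representing different coupling
# states between the face guard and the coal wall. Spacings are from the text;
# the number of positions per direction is a placeholder assumption.
DELTA_L_MM = 165.0   # spacing along the X-axis
DELTA_M_MM = 154.0   # spacing along the Y-axis
Y_MIN_MM = 300.0     # first-level jack of the integral guard kept in compression

N_X, N_Y = 4, 8      # placeholder counts per direction

positions = [
    (i * DELTA_L_MM, Y_MIN_MM + j * DELTA_M_MM)
    for i in range(-N_X, N_X + 1)     # symmetric about the centreline X = 0
    for j in range(1, N_Y + 1)        # keeps every y_P above Y_MIN_MM
]
print(f"{len(positions)} loading positions, first three: {positions[:3]}")
```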
In order to compare the bearing capacities of the two kinds of face guards in different coupling states more intuitively, four rows of cushion blocks corresponding to the integral and split face guards are taken for analysis. The coordinates of the four lines are given by X_1 = 280 mm, X_2 = 560 mm, Y_1 = 840 mm, and Y_2 = 1300 mm. According to Figures 15(a) and 15(b), for the same X-axis position under different coupling states, when Y < 1500 mm the bearing capacity of the integral face guard is greater than that of the split face guard, whereas when Y > 1500 mm the bearing capacity of the integral face guard is slightly lower than that of the split face guard, and both bearing capacities are poor. Meanwhile, the load-bearing range of the integral face guard is obviously better than that of the split face guard. According to Figures 15(c) and 15(d), the bearing capacity of the integral face guard is better than that of the split face guard under the two coupling states of Y = 840 mm and Y = 1300 mm, and it is symmetrical on both sides of X = 0 mm. Based upon this analysis, it can be concluded that, in different coupling states, the integral face guard has a better bearing range and bearing capacity than the split face guard.
Analysis of the Force Transfer Characteristics of the Hinge Points of the Face Guard Mechanism under Different Coupling States.
In order to study the load-bearing characteristics of the two kinds of face guards under different coupling states with the coal wall, the numerical simulation model is used for further analysis. The load is applied to the face guard, and the stress at the hinge point between the primary face guard and the canopy or extensible canopy is taken as the research object. Because the face guard mechanism is symmetrical, only the hinge point in the positive direction of the X-axis is analyzed in this study; the other side is the same as the case considered here. After a comprehensive analysis of the ultimate bearing capacity of the integral and split-type face guards, the minimum bearing capacity of the two kinds of face guards is 47 kN under the various coupling states. Therefore, in the analysis of the bearing characteristics, to ensure that the face guard jack does not overflow, the applied load should be less than this minimum bearing capacity of 47 kN within the effective working range. Meanwhile, the uniqueness of the control variables needs to be satisfied. Therefore, in the subsequent analysis, the load applied by the coal wall to the two kinds of face guards is 45 kN. The resulting load-bearing characteristics of the hinge joints of the face guards are shown in Figure 16. With the increase of the X-axis coordinate, the stress on the hinge joints of the integral and split face guards increases, and the Y-axis coordinate value is directly proportional to the force borne at the hinge joint of the face guard. Among them, the maximum stress at the hinge joint of the integral face guard is 171 kN, while the maximum stress at the hinge joint of the split face guard is 216 kN. Therefore, it can be inferred that the load-bearing characteristics at the hinge point of the integral face guard are better than those of the split face guard.

Based on the above analysis, the load-bearing characteristics of the two kinds of structures in different coupling states are compared. Four rows of cushion blocks corresponding to the integral and split face guards are considered for analysis. The coordinates of the four lines are given by X_1 = 280 mm, X_2 = 560 mm, Y_1 = 1800 mm, and Y_2 = 2100 mm. According to Figures 17(a) and 17(b), under the coupling states with different X-axis coordinate values, the stress at the hinge joint of the integral face guard and the extensible canopy is less than that at the hinge point of the split face guard and the canopy; the stress at the hinge joint of the integral face guard is about 80% of that of the split face guard. According to Figures 17(c) and 17(d), when X < −330 mm, the stress at the hinge joint of the integral face guard is greater than that of the split face guard, with a maximum difference of 13 kN. However, when X > −330 mm, the stress at the hinge joint of the integral face guard becomes less than that of the split face guard, and the maximum difference between the two reaches 44 kN. Therefore, compared with the integral face guard, the pin bearing condition at the hinge joint of the split face guard is worse, indicating more wear and even possible failure.
According to the structural form of the integral face guard, the primary face guard is connected with the extensible canopy through the four-bar linkage mechanism, as shown in Figure 2(a). The connecting rod that is hinged with the primary face guard is called connecting rod A. Similarly, the connecting rod that is hinged with the extensible canopy is called connecting rod B. The four-bar linkage is of great significance to the load-bearing characteristics of the integral face guard mechanism.
In order to study the load-bearing characteristics of the four-bar linkage, the load-bearing characteristics of the four-bar hinge points in the integral face guard mechanism are analyzed using the above numerical simulation model. Similarly, under different coupling states between the face guard and the coal wall, a load of 45 kN is applied to the face guard. Since the face guard mechanism is symmetrical on the left and right sides, only the hinge points on the positive side of the X-axis are analyzed. The corresponding results are shown in Figure 18. The load-bearing characteristics of the hinge joint between connecting rod A and the face guard, between connecting rod A and connecting rod B, and between connecting rod B and the extensible canopy (the bearing characteristics of the latter two hinge joints are the same) are all affected by the coupling state of the coal wall and face guard, and the variation trend of the stress is the same. When the Y-axis value is the same, the force at the hinge point increases with the increase of the X-axis value; when the X-axis value is the same, the stress at the hinge increases with the increase of the Y-axis value.
Combining the results of Figures 16(a) and 18, it can be seen that, under the various coupling states of the coal wall and face guard, the maximum stress of the hinge joint between the face guard and the extensible canopy in the integral face guard mechanism is 171 kN, the maximum stress at the hinge point between connecting rod A and the face guard is 214 kN, and the maximum stress at the hinge points of connecting rod A and connecting rod B, as well as between connecting rod B and the extensible canopy, is 119 kN. Therefore, in the design and material selection of the hydraulic support with the integral face guard, because the pin shaft at the joint of connecting rod A and the face guard bears a larger limit force during use than the pin shafts at the other hinge points, the bearing characteristics of the pin shaft at this position should be considered as a priority. This can avoid damage and failure in various coupling states.
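As a small illustration of this design rule, the snippet below takes the maximum hinge-pin forces just quoted and picks out the pin that governs the design.

```python
# Identify the design-governing pin of the integral face guard mechanism from
# the maximum hinge forces reported above (kN, over all coupling states).
max_pin_force_kN = {
    "face guard / extensible canopy": 171.0,
    "connecting rod A / face guard": 214.0,
    "rod A / rod B and rod B / extensible canopy": 119.0,
}
critical_pin = max(max_pin_force_kN, key=max_pin_force_kN.get)
print(f"Governing pin: {critical_pin} ({max_pin_force_kN[critical_pin]:.0f} kN)")
```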
Conclusions
In this study, based upon the theoretical models of the integral and split-type face guards and the rigid-flexible coupling numerical analysis model, the dynamic disturbance effect of the two kinds of face guards on the coal wall is analyzed. Meanwhile, the load-bearing characteristics of the two kinds of face guards under various coupling states with the coal wall are analyzed and compared. Based upon the results, the following conclusions are drawn.
(1) When the face guard moves closer to the coal wall, it will form a dynamic disturbance to the coal wall due to the dynamic support effect of the face guard jack. The results show that the dynamic disturbance caused by the two kinds of face guards reaches its minimum when the face guard is nearly vertical. Meanwhile, compared with the split face guard, the integral face guard has higher supporting efficiency and can reduce the exposure time of the coal wall after shearer cutting. When the dip angle of the coal wall changes (>10°), the average disturbance of the split face guard to the coal wall becomes smaller.
(2) Combined with the theoretical analysis and the numerical simulation analysis, it is shown that the ultimate bearing capacity and bearing range of the integral face guard are better than those of the split face guard in various coupling states. Therefore, when other conditions, such as the face guard jack, are the same, the integral face guard can provide a greater support force and a larger support range to the coal wall and can better prevent the occurrence of coal wall spalling.
(3) Based upon the numerical simulation, the two kinds of face guards are compared in different coupling states, and the effect of the external load on the load-bearing characteristics of the hinge joints of the face guard is analyzed. The results show that the stress at the hinge joints of the integral face guard is significantly lower than that of the split face guard. Therefore, the structural form of the integral face guard has higher reliability, and for the same material its pin shafts are less likely to be damaged than those of the split face guard.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Effects of Post-Weld Heat Treatment on Microstructure and Mechanical Properties of the Brazed Joint of a Novel Fourth-Generation Nickel-Based Single Crystal Superalloy
A novel fourth-generation nickel-based single crystal superalloy was brazed with Co-based filler alloy. The effects of post-weld heat treatment (PWHT) on the microstructure and mechanical properties of brazed joints were investigated. The experimental and CALPHAD simulation results show that the non-isothermal solidification zone was composed of M3B2, MB-type boride and MC carbide, and the isothermal solidification zone was composed of γ and γ’ phases. After the PWHT, the distribution of borides and the morphology of the γ’ phase were changed. The change of the γ’ phase was mainly attributed to the effect of borides on the diffusion behavior of Al and Ta atoms. In the process of PWHT, stress concentration leads to the nucleation and growth of grains during recrystallization, thus forming high angle grain boundaries in the joint. The microhardness was slightly increased compared to the joint before PWHT. The relationship between microstructure and microhardness during the PWHT of the joint was discussed. In addition, the tensile strength and stress fracture life of the joints were significantly increased after the PWHT. The reasons for the improved mechanical properties of the joints were analyzed and the fracture mechanism of the joints was elucidated. These research results can provide important guidance for the brazing work of fourth-generation nickel-based single crystal superalloy.
Introduction
With the continuous innovation and development of the aviation industry, the superalloy materials that have been widely used in aircraft engines are also improving [1][2][3]. From equiaxed crystal to columnar crystal, and finally to single crystal, the transverse and longitudinal grain boundaries (GBs) are eliminated, and the service temperature and strength of the alloy are greatly improved [4,5]. Nickel-based single crystal superalloy has a more stable structure because of its disordered γ matrix and the γ' phase with the L12-ordered structure. Therefore, it is the best candidate material for aviation engine turbine blades [6][7][8].
With the addition of Re and Ru elements, the fourth-generation nickel-based single crystal superalloy is designed to further improve the high temperature mechanical properties of the alloy [9][10][11].
Turbine blades, guide vanes and other engine hot end components in aircraft engines have very complex structures, which are difficult to complete by the traditional casting process [12,13]. In addition, various defects will inevitably be formed in the process of long-term service [14]. Therefore, it is essential to use welding technology to connect and The casting rod (Φ16 × 210 mm) of (001) oriented single crystal superalloy was prepared by the high-rate solidification (HRS) method. The microstructure of the single crystal rod was shown in Figure 1. The single crystal rod as the base metal after solution treatment, and the following two aspects were considered: (1) Elimination of dendrite segregation in as-cast single crystal rod. (2) The melting point of the filling alloy is far lower than the solution treatment temperature. The brazing experiments were arranged after the solid solution treatment, which prevented remelting of the filler alloy in the weld. Firstly, the base metal was cut with a wire electrical discharge machining (EDM), and the samples of microstructure and mechanical properties were cut into Φ16 × 3 mm and Φ16 × 35 mm, respectively. Second, the cut samples were polished smooth with 200 and 400 mesh sandpaper and cleaned with acetone for 15 min. Finally, the water-soluble adhesive was mixed with solder powder to form a paste, which was filled around the weld. It was worth mentioning that 100 µm nickel wire was used to control the weld gap. The Materials 2023, 16, 3008 3 of 18 assembled samples were heated in a vacuum brazing furnace. After the filler alloy melts, it was wetted and spread into the weld center under the capillary action. The brazed joint will be formed when the samples were cooled to room temperature in the furnace, and the vacuum degree was always kept below 4 × 10 −3 Pa during this process. In addition, the brazing temperature was set between the liquidus temperature of the filler alloy (1208 • C) and the solution treatment temperature of the BM (1325 • C). In this paper, the same heat treatment process as the BM was used to carry out PWHT on the brazed joint. The brazing and heat treatment processes were shown in Figure 2. In order to show the joint status more succinctly, the brazed joint was defined as HT1, and the joint after PWHT was defined as HT2.
The microstructure of brazed joints was analyzed by scanning electron microscopy (SEM).
The SEM samples were prepared following standard metallographic procedures and were then etched with a copper sulfate corrosion solution (20 g CuSO4 + 100 mL HCl + 5 mL H2SO4 + 80 mL H2O) for 3-5 s. The element concentration and distribution in the joint were analyzed by energy dispersive spectroscopy (EDS) and electron probe micro-analyzer (EPMA) techniques. Thermo-Calc 2021a software using the TCNI10 database was used to calculate the residual liquid phase components in the joint. The phase distribution and grain boundaries in the joints were obtained by electron back-scattered diffraction (EBSD), and the data were analyzed with the Channel 5 software package. The cross-section of the brazed joint was subjected to the Vickers hardness test with a 100 g load for 15 s; three points were tested in each area and the average value was taken. In addition, high-temperature tensile (980 °C) and stress rupture tests (980 °C/60 MPa) were carried out on the joints under the different conditions.

Microstructure of Brazed Joint

Figure 3a shows the overview microstructure of the joint brazed at 1260 °C for 90 min. It is observed that the brazed joint was composed of two different zones: the non-isothermal solidification zone (NSZ) and the isothermal solidification zone (ISZ). Due to insufficient brazing time, a large number of eutectic compound precipitates were formed in the NSZ. On the other hand, there is no diffusion affected zone (DAZ) near the BM in the brazed joint, which is obviously very beneficial to the properties of the BM. By magnifying the ISZ, it is found that the γ' phase near the NSZ side is very small and at the nanometer level, while the γ' phase on the side near the BM is relatively large. In order to better understand the formation of the precipitated phases in the different regions, these two regions are redefined as ISZ-1 and ISZ-2, respectively, while the two precipitated phases are called the secondary γ' (S-γ') and primary γ' (P-γ') phase, respectively, as shown in Figure 3c. By magnifying the NSZ, it is obvious that there are three different types of precipitates. The white skeleton precipitates occupy most of the area in the NSZ, the volume fraction of the gray strip precipitates and bright white block precipitates is relatively small, and the remaining area is composed of the γ matrix, as shown in Figure 3e. In order to determine the element content of the different precipitates, EDS analysis was carried out at different positions, and the results are listed in Table 3. It is found that the bright white block precipitates are rich in Ta, the gray strip precipitates are Cr-rich, and the white skeleton precipitates are rich in W and Ta. Since EDS cannot accurately detect light elements such as B and C, the EPMA technique was used for further qualitative analysis of the different precipitates, as shown in Figure 4. The results show that the bright white block precipitates are Ta-rich carbides, and since the common forming elements of MC carbide are Ta, Ti, and Hf, it can be inferred that this phase is MC carbide [27]. The gray strip precipitate is evidently a Cr-rich boride, and the white skeleton precipitate can be identified as a W- and Ta-rich boride. In addition, it is found that Si is uniformly distributed in the γ solid solution and does not form silicides with other elements. This is mainly due to the low content of Si in the filler alloy and the high solubility of Si in the γ solid solution [28].
In this experiment, due to insufficient brazing time, the joint did not complete isothermal solidification. During the cooling process, the residual liquid phase gradually solidified and formed different precipitates. It can be seen from the above analysis results that the joint was mainly composed of boride, carbide, γ and γ' phases. In order to better understand the formation of precipitates in the joint, the Thermo-Calc 2021a software was used to calculate the phase composition of the joint. Firstly, the solidification process of the filler alloy without element diffusion was calculated. It can be seen from Figure 5a that a variety of precipitates are formed during the solidification of the filler alloy. However, the element diffusion between the filler alloy and the BM cannot be ignored in the actual brazing process. Therefore, the concentration of each element in the center of the joint after brazing can be calculated based on the simple law of conservation of mass, which can be represented by the following equation [29]:
where C_i^jab is the concentration of element i in the joint after brazing, C_i^BM is the concentration of element i in the BM, and C_i^fa is the concentration of element i in the filler alloy; D represents the dissolution ratio, which can be expressed by the following formula, where M_a is the weight of the filler alloy in the joint after brazing and M_b is the weight of the filler alloy before brazing. Since both the filler alloy and the BM are superalloys, it can be assumed that they have the same density. Therefore, the dissolution ratio D can also be represented by the following equation [30], where W_max is the maximum width of the joint after brazing and W_0 is the initial width of the brazed joint. Substituting the dissolution ratio D into Equation (1) gives the final expression. According to this equation, the concentration of each element in the joint can be calculated, and then the phase composition during solidification can be calculated using the Thermo-Calc software. The calculated phase composition results are illustrated in Figure 5b,c. The precipitation of the γ' phase from the γ matrix indicates the diffusion of Al from the BM into the joint to combine with Ni to form the γ' phase. Combined with the experimental results, the two borides can be tentatively identified as M3B2- and MB-type borides, respectively. The simulation results show the formation of M23C6, and the MC observed experimentally may be its intermediate transition phase, which has not transformed into M23C6. Since the experimental results arise from the coupling of many factors, some errors between the actual and theoretical solidification processes are acceptable.
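The equation bodies themselves are not reproduced in the extracted text; the sketch below therefore assumes the usual linear dilution form implied by the mass-conservation argument, C_joint = D·C_BM + (1 − D)·C_fa with D = (W_max − W_0)/W_max. The 100 µm initial gap is taken from the experimental description, while W_max and the element concentrations are illustrative placeholders.

```python
# Post-brazing concentration of an element at the joint centre, assuming the
# linear dilution form implied by the mass-conservation argument:
#   C_joint = D * C_BM + (1 - D) * C_fa,  D = (W_max - W_0) / W_max.
# The equation bodies are not reproduced in the paper text, so this form and
# the numbers below (except the 100 um initial gap) are illustrative assumptions.

def dissolution_ratio(w_max_um: float, w_0_um: float) -> float:
    """Fraction of the final joint width contributed by dissolved base metal."""
    return (w_max_um - w_0_um) / w_max_um

def joint_concentration(c_bm_wt: float, c_fa_wt: float, d: float) -> float:
    """Linear mixing of base-metal and filler-alloy concentrations (wt%)."""
    return d * c_bm_wt + (1.0 - d) * c_fa_wt

D = dissolution_ratio(w_max_um=140.0, w_0_um=100.0)        # 140 um is a placeholder
c_al = joint_concentration(c_bm_wt=5.8, c_fa_wt=0.0, d=D)  # hypothetical Al contents
print(f"D = {D:.2f}, Al at the joint centre ~ {c_al:.2f} wt%")
```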
Effect of PWHT on the Microstructure of Brazed Joint
Figure 6 shows the microstructure of the brazed joint after PWHT. Compared with the joint before heat treatment (Figure 3), it can be clearly seen that the borides in the NSZ change from a large dense skeleton to a discrete block distribution. According to the EBSD phase distribution map, the precipitated phases in the joint were identified as M3B2, MB and MC (Figure 7). This indicates that the simulation results have a certain accuracy, and it also shows that the PWHT only changes the distribution of the precipitated phases, not their type. In addition, it can be seen from Figure 6b-d that obvious traces of interface connection were observed between ISZ-1 and ISZ-2 and between ISZ-2 and the BM. The S-γ' phase in ISZ-1 remained unchanged and stayed at the nanometer level (Figure 6e). However, many needle-like compound phases precipitated in ISZ-1. The EDS results indicate that the main chemical composition of this precipitated phase is Cr-5.6, Co-17.8, Ni-11.3, Ta-15.6, W-42.4, Re-7.6 (wt%), which indicates that it is the same boride as the skeleton-like precipitated phase. It is observed that the P-γ' phase in ISZ-2 changed from an irregular shape to a regular cubic shape and its size increased (Figure 6c). Meanwhile, the S-γ' phase is observed in the γ matrix channels (Figure 6f). Furthermore, Figure 6g shows the rafting of the γ' phase and the coarsening of the γ matrix in the BM.

In order to better understand the formation of the joint and the evolution of its microstructure during PWHT, a series of simple schematic diagrams were drawn (Figure 8). In the first stage, the filler alloy completely melts when heated to the brazing temperature, and then wets and spreads on the surface of the BM under capillary action (Figure 8a). Under the action of the concentration gradient, the filler alloy and the BM undergo mutual element diffusion and begin to enter the isothermal solidification stage. At the same time, the B atoms combine with Ta, W, Cr and other atoms to form borides, thereby reducing the system energy. Immediately after that, the γ' phase gradually precipitates in the γ solid solution, and the assembly cools to room temperature to form a stable joint, as shown in Figure 8b. Finally, the brazed joint is subjected to PWHT, which is the focus of the work in this paper.
After long-term aging treatment, the boride volume fraction in the NSZ decreased significantly. The M3B2-type boride dissolution temperature is found by Thermo-Calc simulations to be above 1300 °C (Figure 5c), which is significantly higher than the high-temperature aging treatment in this experiment, indicating that the boride elimination is controlled by interatomic solid-state diffusion. Pouranvari [21] reported in detail the evolution mechanism of boride elimination in the joint after PWHT, which mainly involves two stages: precipitate break-up and dissolution. The dissolution temperature of the MB-type boride (1048 °C) is lower than the high-temperature aging temperature (1150 °C), indicating that the MB-type boride is dissolved during high-temperature aging. In addition, the volume fraction of MB-type boride in the joint is low, and it can be completely dissolved during long holding at high temperature.
After PWHT, the S-γ' phase in ISZ-1 changed little while the P-γ' phase changed obviously in ISZ-2; this phenomenon can be explained as follows. During the high-temperature aging treatment, B atoms in the NSZ diffuse to the surrounding regions. Due to the extremely low solubility of B in the Ni-based alloy, the excess B atoms diffuse into ISZ-1 and combine with the surrounding W and Ta atoms to form needle-like borides (Figure 8c).
It is well known that Ta is an important element in the formation of the γ' phase [31]. The lack of Ta will directly affect the precipitation of the γ' phase. Moreover, the formation of needle-like borides repels the diffusion of Al into ISZ-1. Therefore, the lack of Ta and Al is the direct reason for the absence of significant changes in the S-γ' phase in ISZ-1. In addition, the formation of needle-like boride hinders the diffusion of Al atoms, so that most of the Al atoms from the BM stay in ISZ-2 during the diffusion process, which gives the P-γ' phase in this region more favorable conditions for precipitation and growth. At the same time, the combined action of the Ostwald ripening mechanism leads to the increase of the P-γ' phase size [32]. Figure 9 shows the SEM-EDS line scan of the joint after post-weld heat treatment, where the yellow line represents the scanned area. The results show that the Al content in ISZ-2 is significantly higher than in ISZ-1, which is consistent with the previous analysis. On the other hand, the increase of the cubicity of the P-γ' phase in ISZ-2 during the low-temperature aging treatment is mainly dominated by the elastic strain energy [33]. The relatively low solidus of the S-γ' phase indicates that the S-γ' phase precipitates during the low-temperature aging treatment. Since no stress was applied to the BM, the rafting of the γ' phase is mainly controlled by temperature and time. The high-temperature brazing process leads to the redistribution of the γ'-forming elements, and the solute atoms diffuse according to the chemical potential gradient. The growth of the γ' phase follows the Ostwald ripening mechanism, with large γ' particles engulfing small ones. After aging treatment, the original cubic morphology was changed under the combined action of elastic strain energy and interface energy, which eventually leads to the rafting of the γ' phase in the BM (Figure 8d). Figure 10 shows the EBSD results of the brazed joints at low magnification. It can be found that the brazed joints before and after PWHT form low angle grain boundaries (LAGBs, 2° < θ < 15°) and high angle grain boundaries (HAGBs, θ > 15°). Sheng et al. [34] found that there was constitutional supercooling at the front of the solid/liquid interface during isothermal solidification, and element diffusion was necessarily affected by the concentration gradient, which leads to cellular growth at the initial stage of isothermal solidification.
With the solid/liquid interface moving forward, the cellular crystal nuclei deviate from the original orientation, leading to the formation of grain boundaries. Figure 10c,g show that the LAGBs of the joint after PWHT are significantly reduced, and large grains are formed. At the same time, there is obvious stress concentration in the joint before heat treatment, and the stress concentration is significantly reduced after PWHT (Figure 10d,h). Figure 11 shows the corresponding geometrically necessary dislocation (GND) density distribution charts of the brazed joints before and after PWHT. It can be found that the GND density after PWHT (1.60 × 10¹⁵/m²) is lower than the GND density before PWHT (1.66 × 10¹⁵/m²), which means that the internal stress of the joint after PWHT is reduced. All this indicates that during the post-weld heat treatment the joint undergoes recrystallization and grain growth, LAGBs merge with each other, and HAGBs swallow LAGBs, during which the internal stress in the joint is released.
Microhardness
Figure 12 shows the microhardness of the different areas of the joint before and after the PWHT. The microhardness of the HT1 joint was in this order: NSZ > BM > ISZ-2 > ISZ-1, and the microhardness of the HT2 joint was in this order: NSZ > ISZ-1 > BM > ISZ-2. In general, the hardness value of the NSZ is the highest, and the microhardness of the joint from the inside to the outside shows a tendency to decrease first and then increase. The microhardness of the HT2 joint increased overall (except for the microhardness decrease in the NSZ). In the NSZ of the HT1 joint, a large number of dense large skeleton borides are distributed; these borides have the characteristics of a hard and brittle structure, which leads to the high microhardness of the NSZ. Researchers [35] have reported that the presence of boride in the joints leads to a significant increase in microhardness in this area. In HT2 joints, the microhardness of the NSZ can be found to be decreased. This is mainly due to the decrease of the volume fraction of boride in the NSZ of the HT2 joint, which changes from a large skeleton to a small, dispersed distribution. Therefore, it is not difficult to understand the decrease in microhardness in the NSZ of the HT2 joint.
In the ISZ-1 of the HT2 joint, the morphology and size of the γ and S-γ' phases show no obvious change compared with those of the HT1 joint, but needle-like boride precipitates in the ISZ-1 of the HT2 joint, which is the direct cause of the increase in ISZ-1 microhardness. The ISZ-2 of the HT2 joint was still composed of γ and γ' phases, and no boride was precipitated. This indicates that ISZ-2 is strengthened through solid solution strengthening of the γ matrix and precipitation strengthening of the γ' phase. The difference in volume fraction of the P-γ' phase in ISZ-2 before and after the PWHT is not significant, but the S-γ' phase precipitated in ISZ-2 after heat treatment, which is one of the contributors to the higher microhardness. Another contributor is the effect of γ matrix solid solution strengthening on the microhardness. In other words, the strengthening effect can be considered from the perspective of the solid solution strengthening elements (Re, Ru, W, Mo and Cr are all effective solid solution strengthening elements). According to the model proposed by Gypen and Deruyttere [36], the superposition of solute element strengthening can produce different effects, and can be expressed according to the following formula, where C_i is the concentration of solute i and K_i is the strengthening coefficient of solute i. The solute strengthening coefficient K_i can be obtained from the strengthening factor table of alloying elements in Ni-based alloys [37]. So far, due to the lack of relevant databases, the effect of Re on solid solution strengthening of the γ matrix has not been considered. The element concentrations in the different regions of the ISZ before and after the PWHT of the joint are listed in Table 4. The results show that ∆σ_sol of ISZ-2 is significantly larger than that of ISZ-1 in the HT1 joint, and ∆σ_sol of ISZ-2 in the HT2 joint is slightly larger than that of ISZ-2 in the HT1 joint. In addition, since acicular boride precipitated in ISZ-1 after PWHT, the effect of the γ matrix in ISZ-1 on the microhardness is not considered. In conclusion, the changes of microhardness in the different regions of the ISZ before and after the PWHT of the joint can thus be explained. The BM was mainly composed of γ and γ' phases. After PWHT, the microhardness of the BM increased, which could be attributed to γ' phase coarsening. Some other researchers [38,39] have also found that the coarsening of the γ' phase has an adverse effect on its microhardness.
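The formula itself is not reproduced in the extracted text. The Gypen-Deruyttere superposition is commonly implemented for Ni-base γ matrices as Δσ_sol = (Σ_i K_i^(1/n)·X_i)^n with n = 1/2, and that common form is assumed in the sketch below; the coefficients and matrix composition are round placeholders rather than the values of Table 4 or of the strengthening-factor table in [37].

```python
# Gypen-Deruyttere superposition of solid-solution strengthening, in the form
# commonly used for Ni-base gamma matrices:
#   delta_sigma_sol = ( sum_i K_i**(1/n) * X_i )**n,  n = 1/2 assumed here.
# Coefficients and composition are placeholders, not the paper's Table 4 data.
# Re is omitted, as in the text (no reliable coefficient available).
N_EXP = 0.5

def delta_sigma_sol(atomic_fractions, coefficients, n=N_EXP):
    """Combined solid-solution strengthening increment of the gamma matrix (MPa)."""
    total = sum(coefficients[el] ** (1.0 / n) * x
                for el, x in atomic_fractions.items()
                if el in coefficients)
    return total ** n

matrix_fractions = {"Cr": 0.06, "Co": 0.10, "W": 0.03, "Mo": 0.01, "Ru": 0.02}
k_placeholder = {"Cr": 300.0, "Co": 40.0, "W": 1000.0, "Mo": 1000.0, "Ru": 200.0}

print(f"delta_sigma_sol ~ {delta_sigma_sol(matrix_fractions, k_placeholder):.0f} MPa")
```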
Tensile and Stress Rupture Properties
Figure 13 shows the ultimate tensile strength at 980 °C and the stress rupture life under the condition of 980 °C/60 MPa before and after the PWHT of the joint. The tensile strength of the HT1 joint is 409 MPa, and the tensile strength of the HT2 joint is increased to 747 MPa (Figure 13a). This is mainly related to the change of the NSZ region: after the PWHT of the joint, the volume fraction of brittle boride in the NSZ is significantly reduced, and its distribution is changed from a large dense skeleton to small, dispersed blocks. Figure 14a,b show the local misorientation maps of the brazed joint, indicating that the HT1 joint has significant stress concentration at the interface around the bulk boride compared with the HT2 joint. Under the action of external stress, cracks preferentially originate around the boride and then propagate along it (Figure 14c). Continuous bulk brittle borides exhibit low resistance to crack growth. When the borides are discontinuous, the crack propagates into the γ solid solution, which presents a large resistance to further propagation. In other words, more external force must be applied to the joint in order to propagate the crack further until the final fracture of the joint, as shown in Figure 14d. Therefore, the tensile strength of the joint after the PWHT is significantly improved.
The stress rupture life (967 h) of the HT2 joint is significantly longer than that of the HT1 joint (268 h), as shown in Figure 13b. This may be related to the increase of the volume fraction of the S-γ' phase and the migration of more solid solution strengthening elements to the center of the joint after a long time of aging treatment, which enhanced the solid solution strengthening mechanism of the joint.
In particular, the characteristic elements Re and Ru in the fourth-generation nickel-based single crystal superalloy enter the joint, which can reduce the stacking fault energy and increase the lattice misfit between the γ and γ' phase, respectively [40][41][42]. It is generally believed that grain boundaries are considered a crystal defect at sustained high temperatures, which in turn affects the high-temperature creep properties of the joints [43]. It can be seen from Figure 14e,f that the joint fracture mostly breaks along the grain boundary. Furthermore, the number of HAGBs is increased in the joint of HT2 compared to the joint of HT1 (Figure 10), indicating that the crack initiation is mainly at the HAGBs. The crack propagation along the interface between HAGBs and borides was observed from the fracture path of the joint (Figure 14f). It is worth noting that the MC carbide was found to change from large blocks to small pieces of particles after the PWHT of the joint and was uniformly distributed on the HAGBs, as shown in Figure 7. This suggests that MC carbide can strengthen the grain boundary and prevent the rapid failure of the joint due to the grain boundary defect. Figure 15 shows the fracture morphologies of brazed joints before and after the PWHT after tensile and stress rupture tests. All the samples were fractured at the center of the joint, and the fracture surface is relatively flat without plastic deformation. The tensile fracture surface shows obvious quasi-cleavage fracture characteristics, and the river pattern and tear edge can be observed obviously. Many pores were observed on the fracture surface of the brazed joint without the PWHT (Figure 15a), which was unfavorable to the mechanical properties of the brazed joint. However, the surface morphology of stress rupture fracture was obviously different from that of tensile fracture. A large number of small creep dimples were observed, showing the characteristics of intergranular fracture of microporous aggregation. Furthermore, the number of fine dimples in the fracture surface of the joint after the PWHT was significantly increased, and no holes were observed on the fracture surface of the joint (Figure 15d), and the stress rupture life of the joint was improved.
Conclusions
The microstructure and mechanical properties of brazed joints of the fourth-generation nickel-based single crystal superalloy before and after the PWHT were investigated. The following conclusions could be drawn: 1. The non-isothermal solidification zone (NSZ) and the isothermal solidification zone (ISZ) were observed in the brazed joint. The NSZ was composed of M3B2, MB-type boride, MC carbide and the γ matrix, and the ISZ consisted of γ and γ' phases.
Figure 15. Fracture morphology of brazed joints before and after the PWHT: (a,b) high temperature tensile; (c,d) stress rupture.
|
2022-11-22T16:11:22.978Z
|
2023-04-01T00:00:00.000
|
{
"year": 2023,
"sha1": "38eb3cb506c5b8e4eab56d4b57e2031778de0385",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/16/8/3008/pdf?version=1681176738",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4f5bd06b8e79fbfccd31c93df4c7b94e6b9b4072",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
59374536
|
pes2o/s2orc
|
v3-fos-license
|
Near-Field / Far-Field Transformation with Helicoidal Scanning from Irregularly Spaced Data
A fast and accurate technique for the compensation of the probe positioning errors in the near-field/far-field transformation with helicoidal scanning is proposed in this paper. It relies on a nonredundant sampling representation using a spherical modelling of the antenna under test and employs an iterative scheme to evaluate the near-field data at the points fixed by the helicoidal nonredundant representation from the acquired irregularly distributed ones. Once these helicoidal data have been recovered, those required by a classical cylindrical near-field/far-field transformation are efficiently determined by using an optimal sampling interpolation algorithm. Some numerical tests assessing the effectiveness of the proposed approach and its stability with respect to random errors affecting the near-field data are shown.
Introduction
As well-known, far-field (FF) range size limitations, transportation, and mounting problems can make impossible or impractical the measurement of antenna radiation patterns on a conventional FF range.In these cases, it is convenient to exploit near-field (NF) measurements and recover the FF patterns by using NF-FF transformation techniques [1][2][3].In addition, the NF measurements may be performed in a controlled environment, as an anechoic chamber, thus overcoming those drawbacks that cannot be eliminated in FF outdoor measurements.In this framework, the reduction of the time needed for acquiring the NF data is assuming an ever growing relevance for the antenna measurement community, since this time is currently very much greater than that required to perform the corresponding NF-FF transformation.Such a reduction can be achieved by decreasing the number of the NF data to be collected and/or by making faster the acquisition of each NF value.A significant reduction of the number of required NF data has been obtained for all the conventional scannings [4][5][6][7][8] by applying the nonredundant sampling representations of electromagnetic (EM) fields and the optimal sampling interpolation (OSI) expansions [9], whereas, the use of the modulated scattering technique employing arrays of scattering probes has been proposed in [10] to realize a very fast electronic scanning.However, antenna testing NF facilities based on such a technique are not very flexible as those employing mechanical scans.These last can be made faster by exploiting innovative spiral scannings which use, as suggested by Yaccarino et al. in [11], continuous and synchronized movements of the positioning systems of the probe and antenna under test (AUT).In particular, accurate, stable, and efficient NF-FF transformations using the helicoidal scanning, the planar and spherical spiral scannings have been developed [12][13][14][15][16][17][18][19].They are based on the aforementioned nonredundant representations and reconstruct the NF data needed by the NF-FF transformation with the corresponding classical scanning by interpolating, via appropriate OSI formulas, the nonredundant samples acquired on the spiral.The required two-dimensional algorithm has been obtained (a) by assuming the AUT enclosed in a proper convex domain International Journal of Antennas and Propagation bounded by a surface Σ with rotational symmetry; (b) by developing a nonredundant sampling representation of the voltage acquired by the probe on the spiral; (c) by choosing the spiral step equal to the spacing needed to interpolate the data along a meridian curve.In particular, the AUT has been assumed to be enclosed in the smallest sphere able to contain it in [12][13][14][15], whereas more effective AUT modellings, that allow a further reduction of required NF data when dealing with antennas having one or two predominant dimensions, have been adopted in [16][17][18] by properly applying the unified theory of spiral scannings for nonspherical antennas [19].This last has been obtained by heuristically extending the rigorous approach for spherical antennas [15].It is worth noting that these effective modellings allow one to consider measurement cylinders (planes) with a radius (distance) smaller than one half the antenna maximum size, thus reducing the error related to the truncation of the scanning zone.
Unfortunately, it may be impossible to get regularly distributed NF data due to an inaccurate control of the positioning systems, but the measurements points position can be accurately read by optical devices.In addition, the finite resolution of the positioning systems and their imprecise synchronization do not allow one to exactly locate the probe at the points fixed by the sampling representation.In the light of the above considerations, the development of an accurate and stable reconstruction procedure from irregularly spaced data becomes relevant.In this context, an approach based on the conjugate gradient iteration method and using the unequally spaced fast Fourier transform [20,21] has been proposed in the planar [22] and spherical [23] classical scannings.However, such an approach is not tailored for scannings exploiting the nonredundant sampling representations of EM fields, wherein the "a priori" information on the AUT and proper sampling interpolations are employed to recover the NF data required by the corresponding standard NF-FF transformation technique.The interpolation from nonuniform samples has been well investigated in the onedimensional case of bandlimited functions defined over the real axis.Several sufficient conditions assuring the possibility of reconstructing a function from its nonuniform samples have been stated in [24].The stability in a nonuniform sampling algorithm; that is, the requirement that small errors affecting the samples give rise to small errors in the reconstructed functions, has been exhaustively investigated in [25,26], wherein it has been shown that a stable sampling cannot be accomplished at an "average" rate lower than the Nyquist one.Some closed-form expressions for the interpolation of bandlimited functions from nonuniform samples have been developed in [27].However, they are valid only for particular sampling points arrangements, are cumbersome and not user friendly.Moreover, they become more and more unstable as the sampling points distribution deviates from the uniform one.The twodimensional nonuniform sampling has not attracted an equal consideration.In any case, it has been shown [28] that, again, a stable sampling cannot be performed at a rate lower than the Nyquist one.As it has been clearly stressed in [29], wherein a more exhaustive discussion on this topic can be found, a nonuniform sampling algorithm useful for practical applications must be computationally manageable, accurate, and stable.Accordingly, it is more convenient to recover the uniform samples from the irregularly distributed ones than to resort to a direct interpolation formula.In fact, once the uniform samples have been determined, the value at any point of the scanning surface can be recovered by an accurate and stable OSI expansion.
In this context, two different approaches have been proposed [29][30][31].The former [29,30] is based on an iterative technique which has been found convergent only if there exists a one-to-one correspondence associating at each uniform sampling point the nearest nonuniform one.The latter [31] exploits the singular value decomposition (SVD) method [32] and has been applied when the twodimensional problem can be reduced to the research of the solution of two independent one-dimensional ones.This, f.i., occurs in a cylindrical near-field facility, wherein the nonuniformly distributed data can be realistically assumed to lie on not regularly spaced rings when the measurements are made by rings [31].Such a hypothesis is no longer valid in the helicoidal scanning.Accordingly, the iterative technique will be here applied for reconstructing the uniformly spaced helicoidal samples from the acquired irregularly distributed data.Obviously, the SVD-based approach could be generalized to such an intrinsically two-dimensional problem, but the dimension of the involved matrix would become too large, thus requiring a huge computational effort.
In the following, the helicoidal scanning based on the spherical AUT modelling will be considered, since it is simpler and more general than those using modellings tailored for elongated antennas.
Nonredundant Sampling Representation on a Cylinder
Let us consider an AUT and a nondirective probe scanning a helix with constant angular step lying on a cylinder of radius d (Figure 1), and adopt the spherical coordinate system (r, ϑ, ϕ) to denote an observation point P in the NF region. Since, as shown in [33], the voltage measured by such a kind of probe has the same effective spatial bandwidth as the AUT field, the nonredundant sampling representations of EM fields [9] can be applied to it. Accordingly, by assuming the AUT as enclosed in the smallest sphere of radius a able to contain it and describing the helix by a proper analytical parameterization r = r(ξ), it is possible to consider the "reduced voltage" Ṽ(ξ) = V(ξ) e^{jγ(ξ)}, where γ(ξ) is an optimal phase function to be determined. The error occurring when the reduced voltage is approximated by a spatially bandlimited function becomes negligible as the bandwidth exceeds a critical value W_ξ [9], so that it can be effectively controlled by choosing a bandwidth equal to χ′W_ξ, where χ′ is an enlargement factor, slightly greater than unity for electrically large antennas. The parametric equations of the helix, when imposing its passage through a fixed point P_0 of the generatrix at ϕ = 0, are expressed in terms of the angular parameter φ describing the helix, φ_s being the value of φ at P_0, and θ = kφ. Such a helix can be constructed as the intersection of the cylinder with the line from the origin to a point moving on a spiral which wraps the sphere enclosing the AUT. In order to allow the two-dimensional interpolation, the helix step Δθ (Figure 1) must be equal to the spacing required for the voltage interpolation along a generatrix [6]. Therefore, the parameter k is such that the angular step, determined by the consecutive intersections of the helix with a generatrix, is Δθ = 2π/(2N″ + 1), with N″ = Int(χN′) + 1 and N′ = Int(χ′βa) + 1. Accordingly, being Δθ = 2πk, it results that k = 1/(2N″ + 1). The function Int(x) denotes the integer part of x, β is the wavenumber, and χ > 1 is an oversampling factor needed for controlling the truncation error [9].
A nonredundant sampling representation of the voltage on the helix can be obtained by using suitable expressions for the phase function and the parameterization [15]. As can be seen from them, the optimal parameter ξ is proportional to the curvilinear abscissa along the spiral wrapping the sphere modelling the AUT. Since such a spiral is a closed curve, it is convenient to choose the bandwidth W_ξ such that ξ covers a 2π range when the whole curve on the AUT sphere is described. In the light of these results, the reduced voltage at any point of the helix can be reconstructed via the OSI expansion (5), wherein m_0 = Int[(ξ − ξ_s)/Δξ] is the index of the sample nearest (on the left) to the output point, 2p is the number of retained samples Ṽ(ξ_m), and the weighting functions are the Dirichlet and Tschebyscheff sampling functions, respectively, T_M(·) being the Tschebyscheff polynomial of degree M and ξ̄ = pΔξ. The OSI expansion (5) can be used to evaluate the "intermediate samples", that is, the reduced voltage values at the intersection points between the helix and the generatrix passing through P. Once these samples have been evaluated, the reduced voltage at P can be recovered via a second OSI formula, (9), applied along the generatrix, the symbols having the same meaning as in (5). It is so possible to recover the NF data required to perform the standard NF-FF transformation with cylindrical scanning [34], whose key steps are reported in the next section for the reader's convenience.
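Since the explicit OSI weighting functions are only referred to above, a small numerical sketch may help fix ideas. It assumes the Dirichlet and Tschebyscheff sampling functions in the form commonly adopted in the OSI literature; the parameters, the sample vector v_samples, and the wrap-around index handling are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def cheb_T(n, x):
    """Tschebyscheff polynomial of degree n evaluated at x (valid for any real x)."""
    c = np.zeros(n + 1)
    c[-1] = 1.0
    return cheb.chebval(x, c)

def dirichlet(xi, n_pp):
    """Dirichlet sampling function D_{N''}(xi); xi = 0 must be handled separately."""
    return np.sin((2 * n_pp + 1) * xi / 2) / ((2 * n_pp + 1) * np.sin(xi / 2))

def tschebyscheff(xi, xi_bar, n):
    """Tschebyscheff sampling function Omega_N(xi) (form assumed from the OSI literature)."""
    num = cheb_T(n, 2 * (np.cos(xi / 2) / np.cos(xi_bar / 2)) ** 2 - 1)
    den = cheb_T(n, 2 / np.cos(xi_bar / 2) ** 2 - 1)
    return num / den

def osi_interpolate(xi_out, xi_s, d_xi, v_samples, p, n_pp, n_cheb):
    """Reconstruct the reduced voltage at xi_out from the 2p nearest uniform samples."""
    m0 = int(np.floor((xi_out - xi_s) / d_xi))
    xi_bar = p * d_xi
    value = 0.0 + 0.0j
    for m in range(m0 - p + 1, m0 + p + 1):
        xi_m = xi_s + m * d_xi
        value += (v_samples[m % len(v_samples)]        # closed curve: wrap the index
                  * tschebyscheff(xi_out - xi_m, xi_bar, n_cheb)
                  * dirichlet(xi_out - xi_m, n_pp))
    return value
```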
Probe Compensated NF-FF Transformation
As rigorously demonstrated in [34], the modal coefficients a_ν and b_ν of the cylindrical wave expansion of the field radiated by the AUT are related to (a) the two-dimensional Fourier transforms I_ν and I′_ν of the output voltage of the probe for two independent sets of measurements (the probe is rotated 90° about its longitudinal axis in the second set); (b) the coefficients c_m, d_m and c′_m, d′_m of the cylindrical wave expansion of the field radiated by the probe and the rotated probe, respectively, when used as transmitting antennas. In particular, these relations involve the Hankel functions H^(2)_{ν+m}(Λd), and the Fourier transforms are given by I_ν(η) = ∫∫ V(ϕ, z) e^{−jνϕ} e^{jηz} dϕ dz and I′_ν(η) = ∫∫ V′(ϕ, z) e^{−jνϕ} e^{jηz} dϕ dz, where Λ = √(β² − η²), H^(2)_ν(·) is the Hankel function of second kind and order ν, and V, V′ represent the output voltages of the probe and the rotated probe at the point of cylindrical coordinates (d, ϕ, z).
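As a purely illustrative aid, the two-dimensional Fourier transforms above can be approximated numerically on the regular cylindrical grid recovered in the previous section; the grid sizes and the voltage array in the sketch below are placeholders, and in practice FFTs would be used instead of the direct sum.

```python
import numpy as np

# Sketch: direct numerical evaluation of
#   I_nu(eta) = integral of V(phi, z) * exp(-j*nu*phi) * exp(+j*eta*z) dphi dz
# on a regular cylindrical grid. Grid sizes and data are placeholders.
n_phi, n_z = 180, 281
d_phi = 2 * np.pi / n_phi
dz = 0.5                                   # z step (e.g. in wavelengths)
phi = np.arange(n_phi) * d_phi
z = (np.arange(n_z) - n_z // 2) * dz
V = np.zeros((n_phi, n_z), dtype=complex)  # stand-in for the probe voltage samples

def I_transform(V, nu, eta):
    """Riemann-sum approximation of I_nu(eta)."""
    kernel = np.exp(-1j * nu * phi)[:, None] * np.exp(1j * eta * z)[None, :]
    return np.sum(V * kernel) * d_phi * dz

print(abs(I_transform(V, nu=3, eta=0.7)))
```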
Once the modal coefficients have been determined, the FF components of the electric field in the spherical coordinate system (R, Θ, Φ) can then be evaluated from them.
Uniform Samples Reconstruction
Let us now suppose that the samples are irregularly distributed (Figure 1) and denote with (ϑ_i, ϕ_i) the nonuniform sampling point corresponding to the nearest uniform one ξ_i lying on the helix. By expressing the reduced voltage at each nonuniform sampling point as a function of the unknown values at the nearest uniform ones via a two-dimensional OSI formula (13), obtained by properly merging expansions (5) and (9), a linear system is obtained, Q being the number of samples. The system (13) can be recast in the matrix form A x = b (14), where A is a Q × Q sparse banded matrix whose elements are the OSI weights relating each nonuniform sampling point to the neighbouring uniform ones, b is the vector of the collected nonuniform data, and x is the vector of the unknown uniform samples. By splitting A in its diagonal and nondiagonal parts, A_D and Δ, respectively, multiplying both members of (14) by A_D^{-1} and rearranging the terms, it results in x = A_D^{-1} b − A_D^{-1} Δ x (16). The following iterative scheme is so obtained: x^(ν) = A_D^{-1} b − A_D^{-1} Δ x^(ν−1) (17), where x^(ν) is the vector of the uniform samples estimated at the νth step. Necessary conditions for the convergence of such a scheme [29,30] are that A_ii ≠ 0, for all i, and |A_ii| ≥ |A_im|, for all m ≠ i. These conditions are surely verified in the assumed hypothesis of biunique correspondence between each uniform sampling point and the "nearest" nonuniform one.
By straightforward evaluations from (17), the explicit expression of the iterative scheme is obtained, Int[(·)/Δϑ] being the index of the intermediate sampling point nearest to the uniform one ξ_i.
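A compact way to visualise how the scheme (17) operates is the sparse-matrix sketch below. In the actual algorithm the matrix A is filled with the OSI weights of (13); here a synthetic, diagonally dominant sparse matrix and random data stand in for them.

```python
import numpy as np
from scipy.sparse import random as sparse_random, identity, diags

# Sketch of the iteration x^(nu) = A_D^{-1} b - A_D^{-1} Delta x^(nu-1).
# A is a synthetic diagonally dominant sparse matrix, only for illustration.
Q = 500
rng = np.random.default_rng(0)
off = sparse_random(Q, Q, density=0.01, random_state=rng, format="csr")
A = off + identity(Q, format="csr") * (abs(off).sum(axis=1).max() + 1.0)

b = rng.normal(size=Q) + 1j * rng.normal(size=Q)   # collected nonuniform data (stand-in)
x = np.zeros(Q, dtype=complex)                     # initial estimate of the uniform samples

A_D_inv = diags(1.0 / A.diagonal())                # inverse of the diagonal part A_D
Delta = A - diags(A.diagonal())                    # off-diagonal part

for it in range(8):                                # a few iterations (the example below uses 8)
    x = A_D_inv @ (b - Delta @ x)

residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(f"relative residual after 8 iterations: {residual:.2e}")
```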
Numerical Results
The effectiveness and robustness of the proposed algorithm for compensating the probe positioning errors in the NF-FF transformation with helicoidal scanning have been assessed by many numerical tests. The reported simulations refer to a uniform circular array lying on the plane y = 0 (see Figure 1), symmetric with respect to the plane z = 0 and having radius a = 16λ, λ being the wavelength. Its elements are elementary Huygens sources polarized along the z axis and are radially and azimuthally spaced of λ/2. An open-ended WR-90 rectangular waveguide, operating at the frequency of 10 GHz, is chosen as probe. The considered helix wraps a cylinder with radius d = 20λ and height 2h = 140λ. The irregularly distributed samples have been generated by imposing that the distances in ξ and ϑ between the position of each nonuniform sample and the associated uniform one are random variables uniformly distributed in (−0.3Δξ, 0.3Δξ) and (−0.3Δϑ, 0.3Δϑ). Note that this represents a pessimistic occurrence in a real scanning procedure. The evaluation of the maximum and mean-square errors (normalized to the maximum value of the output voltage V on the cylinder) in the reconstruction of the uniform samples assesses more quantitatively the effectiveness of the proposed algorithm. They have been obtained by comparing the reconstructed uniform samples and the exact ones. As can be seen (Figures 4 and 5), on increasing the number of iterations, the errors decrease quickly until a constant saturation value is reached. Such a value decreases on increasing the number of retained samples. Even better results are to be expected when the distances between the nonuniform samples and the uniform ones are smaller. Moreover, Figure 6 shows the normalized maximum and mean-square errors in the reconstruction of the NF data needed to carry out the NF-FF transformation [34], both when using the directly collected nonuniform samples and the recovered uniform helicoidal ones. As can be seen, in this last case an increased accuracy of about 60 dB can be obtained.
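For reference, the normalized errors quoted above can be computed as in the following sketch, where v_exact and v_rec stand for the exact and reconstructed uniform samples (placeholders here) and the results are expressed in dB.

```python
import numpy as np

def normalized_errors_db(v_exact, v_rec):
    """Maximum and mean-square reconstruction errors, normalized to max|v_exact|, in dB."""
    err = np.abs(v_rec - v_exact) / np.max(np.abs(v_exact))
    e_max = 20 * np.log10(np.max(err))
    e_rms = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    return e_max, e_rms
```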
The robustness of the algorithm has been assessed (see Figure 7) by corrupting the exact samples with random errors.Both a background noise (bounded to Δa in amplitude and with arbitrary phase) and uncertainties on the data of ±Δa r in amplitude and ±Δα in phase have been simulated.
Finally, Figures 8 and 9 show the antenna FF pattern in the principal planes E and H reconstructed from the irregularly distributed helicoidal samples. As can be seen, the exact and recovered patterns are practically indistinguishable, thus providing an overall assessment of the proposed iterative technique.
It is worth noting that the number of employed samples (guard samples included) for reconstructing the NF data on the considered cylinder is 26 817, significantly less than that (71 680) required by the standard cylindrical scanning and by the helicoidal scanning technique [35].
Figure 1: Geometry of the problem.
Figure 2: Amplitude of the probe voltage V on the generatrix at ϕ = 90°. Solid line: exact. Crosses: recovered from irregularly spaced NF data at iteration 0.
Figures 2 and 3 show a representative reconstruction example of the output voltage V (the most significant one) on the generatrix at ϕ = 90°, obtained by 0 and 8 iterations, respectively. As can be seen, only 8 iterations are enough to get a very good reconstruction.
Figure 3: Amplitude of the probe voltage V on the generatrix at ϕ = 90°. Solid line: exact. Crosses: recovered from irregularly spaced NF data at iteration 8.
Figure 4: Normalized maximum error in the reconstruction of the uniform helicoidal samples.
Figure 5: Normalized mean-square error in the reconstruction of the uniform helicoidal samples.
|
2018-12-27T17:52:57.178Z
|
2010-09-26T00:00:00.000
|
{
"year": 2010,
"sha1": "cd9e8af3aa8418f6fe1a7f3bd42c5a44e14bda94",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ijap/2010/859396.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cd9e8af3aa8418f6fe1a7f3bd42c5a44e14bda94",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
270320473
|
pes2o/s2orc
|
v3-fos-license
|
Dietary Intake among Lebanese Adults: Findings from the Updated LEBANese natiONal Food Consumption Survey (LEBANON-FCS)
Background: The rates of obesity, undernutrition, and other non-communicable diseases are on the rise among Lebanese adults. Therefore, it is crucial to evaluate the food consumption habits of this population to understand diet quality, analyze consumption trends, and compare them to healthy diets known to reduce risks of non-communicable diseases. Aim: To evaluate the food consumption patterns, energy intake, as well as macro- and micro-nutrient intake among a nationally representative sample of Lebanese adults aged 18−64 years old. Methods: A cross-sectional study was carried out from May to September 2022 involving 444 participants from all eight Lebanese governorates. Sociodemographic and medical information was gathered through a questionnaire, food consumption was evaluated using a validated FFQ and 24 h recall, and anthropometric measurements were recorded. Results: There was a notable lack of adherence to three healthy diets (Mediterranean, EAT-Lancet, USDA) among Lebanese adults. Their dietary pattern is characterized by high energy, added sugars, sodium, and saturated fat intake while being low in healthy fats, vitamin A, D, and E. Adult women are falling short of meeting their daily calcium, vitamin D, iron, and vitamin B12 requirements, putting them at increased risk of anemia, osteoporosis, and other health issues. Grains and cereals were the most consumed food groups, and most participants were found to be overweight or obese. Conclusions: In conclusion, the results highlight the need for public health policies and interventions aimed at encouraging Lebanese adults to make healthier food choices and transition towards diets like the Mediterranean, EAT-Lancet, or USDA diet. These diets have been shown to promote overall health and wellbeing.
Introduction
In an era marked by rapid globalization, urbanization, demographic and epidemiological transitions, and economic volatility, the dynamics of nutrition transition and food insecurity have become critical focal points for policymakers, researchers, and public health practitioners worldwide. The intricate interplay between nutrition transition-the shift in dietary patterns towards higher consumption of high-caloric food and a decrease in consumption of healthy foods [1]-and food insecurity-the inadequate access to sufficient, safe, and nutritious food-is emblematic of multifaceted challenges facing contemporary societies. This transition is often accompanied by a rise in sedentary lifestyles, reduced

For the sample to be nationally representative, a minimum number of 400 participants was required. The sample size was calculated based on the population estimates from 2018 to 2019 using the following formula: n = Z²_{α/2} × p(1 − p) / e², where 'n' refers to the sample size; 'Z_{α/2}' refers to the standard error's reliability coefficient at a 5% level of significance and is equal to 1.96; 'p' denotes the probability of adults (18−64 y) who were not capable of taking precautions regarding the diseases (50%); and 'e' represents the standard error's tolerated level (5%), as stated by Hosmer and Lemeshow [15]. Overall, 449 participants (184 males and 265 females) from the 8 Lebanese governorates were included. The sampling technique was a combination of stratified cluster sampling, with the stratified groups being the two genders and the clusters being the Lebanese governorates. The process of recruiting participants involved various channels, including volunteers, charitable organizations, and first-aid and medical centers, so a broad range of participants could be reached. Participants recruited in this stage were then encouraged to invite other individuals within their outreach to participate. This allowed us to reach participants that were difficult to reach physically due to budget constraints and timing. The individuals willing to participate were informed about the study nature and then assessed for eligibility. Eligible participants provided electronic written consent indicating their willingness to participate in the study. One individual per Lebanese household was allowed to participate in the study so that a wide representation of households could be guaranteed. Overall, the study involved 444 participants, as 5 participants were excluded after checking for missing data, errors, and outliers. Participant distribution across the 8 governorates is shown in Figure 1.
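For illustration, the sample-size computation described above can be reproduced directly from the stated values of Z, p, and e, assuming the standard formula given here.

```python
import math

# Minimum sample size n = Z^2 * p * (1 - p) / e^2 with the values stated above.
z = 1.96   # reliability coefficient at the 5% significance level
p = 0.50   # assumed proportion
e = 0.05   # tolerated standard error

n = z ** 2 * p * (1 - p) / e ** 2
print(math.ceil(n))   # 385; the study set the minimum at 400
```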
Phase 1: Administration of Sociodemographic Questionnaire
During the initial data collection phase, a pre-tested questionnaire was utilized in interviews with participants.This questionnaire covered demographic and socioeconomic details along with the medical background of the participants.For example, participants were queried about their age, gender, weight, height, place of residence, marital status, living space, household size, number of rooms in their residence, educational level, occupation, and any existing chronic illnesses.Questions pertaining to household size and number of rooms were included to compute the crowding index (CI), which serves as an indicator of a household's socioeconomic standing [16].
Phase 2: Food Frequency Questionnaire
Following the completion of the sociodemographic questionnaire, trained dietitians conducted 30-min interviews with participants to administer a 157-item semi-quantitative food frequency questionnaire (FFQ), which had been validated previously among the adult Lebanese population [17]. To assist the participants in remembering how much food they had consumed and in estimating portion sizes as precisely as possible, instructions and visual aids were given to them. The FFQ captured the frequency of consuming various foods over the past year, with the interviewers recording the number of servings, the amounts in grams, and how often the stated portions (daily, weekly, or monthly) of each food item were consumed. Furthermore, two 24 h recalls were conducted, one on a typical weekday and another on a weekend day, through which the consumption of each specific food and beverage was comprehensively recorded.
Phase 3: Anthropometric Measurements
During this stage, anthropometric measurements (weight and height) of the participants were taken at a designated center or facility within their respective governorates.Standardized protocols and calibrated equipment, including a digital scale for weight and a stadiometer for height, were employed to ensure precise measurements.To enhance accuracy, each participant's height and weight were measured thrice, and the Body Mass Index (BMI) was calculated by averaging the three recorded readings.
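A minimal sketch of the BMI computation follows, assuming the three weight and height readings are averaged before BMI is computed; the values are invented.

```python
# BMI from the average of three weight and height readings (illustrative values only).
weights_kg = [70.2, 70.4, 70.3]
heights_cm = [168.0, 167.8, 168.1]

mean_weight = sum(weights_kg) / len(weights_kg)
mean_height_m = sum(heights_cm) / len(heights_cm) / 100

bmi = mean_weight / mean_height_m ** 2
print(round(bmi, 1))
```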
Data Management and Data Analysis
Excel 2016 was used to code and organize the data.The CI was calculated by dividing the total number of people living in the home (apart from infants) by the total number of rooms (apart from bathrooms and kitchens) [16].The food items were divided into food groups according to the classifications of the following three distinct diets: the 'Mediterranean Diet' [18], the 'EAT-Lancet Diet' [19,20], and the 'USDA Diet' [21].The food group intake (g/day) was then calculated and compared to the recommendations of each of these three diets.To calculate the food group consumption (g/day), the following method was used: The daily consumption of each food item was determined using the FFQ data.To calculate the daily frequency (serving/day), a serving that was reported as being consumed on a weekly or monthly basis was divided by 7 or 30, respectively.Then, the quantity of each food item consumed (g/day) was calculated by multiplying the daily frequencies that resulted by the corresponding serving sizes of the individual food items (g/serving).After that, the individual foods were divided into categories according to the three diets.The total intake (g/day) for a food group was then determined by adding the quantities consumed of each food item belonging to the corresponding group.
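The frequency-to-amount conversion and the crowding index described above can be sketched as follows; the food items, serving sizes, group assignments, and household figures are invented placeholders rather than entries from the study's FFQ.

```python
# Sketch: converting FFQ responses to g/day and aggregating by food group.
# Items, serving sizes and group labels below are illustrative placeholders.
ffq = [
    # (item, servings reported, reporting period, serving size in g, food group)
    ("pita bread", 7, "weekly", 60, "bread/cereals/grains"),
    ("lentils",    2, "weekly", 150, "legumes"),
    ("apple",      1, "daily",  140, "fruits"),
    ("soft drink", 4, "monthly", 330, "non-alcoholic beverages"),
]
PER_DAY = {"daily": 1.0, "weekly": 1.0 / 7, "monthly": 1.0 / 30}

group_intake = {}
for item, servings, period, serving_g, group in ffq:
    grams_per_day = servings * PER_DAY[period] * serving_g
    group_intake[group] = group_intake.get(group, 0.0) + grams_per_day

for group, grams in group_intake.items():
    print(f"{group}: {grams:.1f} g/day")

# Crowding index: household members (excluding infants) divided by the number of
# rooms (excluding bathrooms and kitchens); e.g. 5 members and 3 rooms.
crowding_index = 5 / 3
print(f"crowding index: {crowding_index:.2f}")
```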
After calculating the amounts of food consumed in grams per day, extraction of energy, macro-and micro-nutrients and the fiber content of the food consumed was undertaken using 'Nutritionist Pro' (version 5.1.0,2014, First Data Bank, Nutritionist Pro, Axxya Systems, San Bruno, CA, USA), which is a software that permits the nutritional analysis of individual foods, menu items, and recipe ingredients [22].The extracted nutritional value of food consumed was then compared to the age-specific Dietary Reference Intakes (DRIs), which were created by the 'Food and Nutrition Board, Institute of Medicine, National Academies' (NIH) and include the 'Acceptable Macronutrient Distribution Range' (AMDR), the 'Adequate Intake' (AI) and the 'Recommended Dietary Allowance' (RDAs) [23], 'Dietary Cholesterol and Cardiovascular Risk: A Science Advisory From the American Heart Association' [24], 'Development of a Lebanese food exchange system based on frequently consumed Eastern Mediterranean traditional dishes and Arabic sweets' [25], and 'Nutritional value of the Middle Eastern diet: analysis of total sugar, salt, and iron in Lebanese traditional dishes' [26].Energy requirements were extracted from the 'Dietary Guidelines for Americans 2020-2025' [21].
Statistical Analysis
The sample was classified based on genders into 4 age categories in accordance with the 'Food and Nutrition Board, Institute of Medicine, National Academies' (NIH) [23] as follows: 18 years; 19-30 years; 31-50 years; and 51-64 years.
The Statistical Package for the Social Sciences (SPSS; Version 25.0, IBM Corp: Armonk, NY, USA) was used to analyze the data at a 95% confidence interval.Frequencies (N) and percentages (%) were calculated for categorical variables, while means and standard deviations (SD) were calculated for the continuous variables.
Ethical Considerations
The protocol for this study underwent review and approval by the Ethical Committee at Al Zahraa University Medical Center (#57/2022) and was carried out in compliance with the ethical principles set forth in the Declaration of Helsinki.Prior to participation, participants provided informed consent and were informed that their involvement was voluntary, with the option to withdraw at any point.
Population Characteristics
The demographic and socioeconomic characteristics of the study population are shown in Table 1, and the study population's health characteristics are shown in Table 2. Most participants were females (58.8%), and the mean age ± SD (years) was 34.1 ± 12.7.Most of the participants were residing in Mount Lebanon (40.32%), and most households were found to be crowded (62.84%), which reflects a lower socioeconomic status in these households compared to the non-crowded ones.Almost one-third of the participants (33.8%) had a normal BMI, while the majority (61.9%) were found to be overweight or obese.In addition, 25% of the participants reported having one or more chronic diseases, with anemia (32.4%) being the most prevalent disease, followed by hypertension (30.6%).Regarding the type of disease, more women were shown to be affected by the majority of diseases, except for kidney disease (similar between genders) and liver disease (more men are affected).Significant differences between genders existed when it comes to residency (p-value = 0.006), crowding index (p-value = 0.028), occupation (p-value = 0.000), household income (p-value = 0.000), salary change (p-value = 0.000), the presence of chronic diseases (p-value = 0.005) and the disease type (p-value = 0.004).
Food Groups Consumption
Mean intake of the different food groups by age and by gender are shown in Tables 3 and 4, respectively.The description of food items included in each food group is shown in Table S1.In the overall study population, intake of bread/cereals/grains was the highest (317.18g/d), followed by fruits (254.33 g/d) then vegetables (206.49g/d).Among age categories, the mean intake of bread/cereals/grains was the highest compared to other food groups in all age categories except for participants in the 51-64 age group, in which the mean intake of fruits was the highest compared to other food groups (330 g/d).Significant differences in consumption existed among age categories when it comes to the consumption of processed meat (p-value = 0.000), poultry (p-value = 0.003), fresh fruit juices (p-value = 0.001), sweets (p-value = 0.000), and added fats/oils (p-value = 0.002).Regarding consumption based on gender, our results showed that, on average, male participants had a higher consumption compared to women from all the food groups except for vegetables, with significant differences in consumption existing when it comes to consuming nuts/seeds (p-value = 0.04), dairy products (p-value = 0.011), red meat (p-value = 0.000), processed meat (p-value = 0.000), poultry (p-value = 0.000), fish (p-value = 0.000), eggs (p-value = 0.000), drinking water (p-value = 0.000), non-alcoholic beverages (p-value = 0.000), and alcoholic beverages (p-value = 0.01).Pyramids showing comparisons of the current consumption to the Mediterranean, EAT-Lancet, and USDA diet recommendations are shown in Figure 2. In general, the dietary pattern of Lebanese adults showed low adherence to the recommendations of the three healthy diets.
For instance, the current consumption showed low adherence to the Mediterranean recommendations, especially the consumption of 'sweets', 'red meat', 'white meat', and 'legumes', for which consumption exceeded the recommended daily intake by 1106.4%, 516.8%, 427.5%, and 267.4%, respectively.Our results also showed that the consumption of 'dairy products', 'fruits', 'vegetables', 'olive oil', 'olives/nuts/seeds', and 'grains/cereals' was lower than the amounts recommended by the Mediterranean diet.As for the EAT-Lancet recommendations, the consumption of 'added sugar', 'beef, lamb, pork', 'grains', 'chicken and other poultry', and 'fruits' exceeded the recommended daily intake by 499.9%, 295%, 132%, 118%, and 146.16%, respectively.In addition, most of the consumed grains (92% of the amount consumed) are refined grains, which is far from this diet's recommendation, as it recommends consuming 232 g/d of whole grains (compared to only 24 g/d in our study) and very low amounts (or nothing) of refined grains, in contrast to 274 g/d in our study.Our results also showed that the consumption of 'vegetables', 'dairy products', 'fish', 'nuts', and 'unsaturated oils' was lower than the amounts recommended by the EAT-Lancet diet.Regarding the USDA diet recommendations, the current consumption of 'vegetables', 'fruits' and 'meat and poultry' exceeded the recommendations by 174.37%, 146.17% and 104.57%, respectively.Additionally, a lower consumption of 'grains', 'dairy products', 'fish', 'oils', and 'nuts, seeds, soy products' was observed compared to the recommendations.
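The percentage comparisons quoted above can be reproduced as in the sketch below; the recommendation value is a placeholder, and the two usual ways of expressing such a comparison are both shown.

```python
# Sketch: comparing an observed food-group intake with a diet recommendation.
# The recommendation value is a placeholder, not taken from the cited diets.
observed_g_per_day = 254.33          # e.g. fruits, from the FFQ analysis
recommended_g_per_day = 200.0        # placeholder recommendation

share_of_recommendation = observed_g_per_day / recommended_g_per_day * 100
excess_over_recommendation = (observed_g_per_day - recommended_g_per_day) \
    / recommended_g_per_day * 100

print(f"{share_of_recommendation:.1f}% of the recommended amount")
print(f"{excess_over_recommendation:.1f}% above the recommended amount")
```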
Energy Content of Food Consumed
The energy content of the food consumed by the study participants is shown in Table 5. The mean estimated energy requirement (EER) for a participant in our study was 2237.47 kcal/day, and a participant consumed on average 2237.49 kcal/day, which represents 100% of the mean EER. Our findings showed that, on average, participants of both genders and in all age categories exceeded their EER, except females in the '31-50 years' age group, who on average consumed almost all (97%) of their EER. Males and females in the '18 years' age group had the highest consumption among male and female participants, respectively. Significant differences in the energy content of food consumed existed between genders (p-value = 0.000) and among age categories (p-value = 0.019).
Macronutrient Content of Food Consumed
The macronutrient content of food consumed by the study population is shown in Table 6.Our findings showed that the consumption of carbohydrates and fat exceeded the AMDR in participants from all age categories and genders, except the fat for the females in the 31-50 years group (97.4%).As for proteins, the AMDR was not reached in any of the age groups and genders, ranging from 65.3% for the female participants aged 31-50 years to 93.6% for the male participants aged 51-64 years.The consumption of monounsaturated fatty acids (MUFAs) did not exceed 60% of the RDA for all age categories and genders, except for males in the 51-64 years group (75.29%).In addition, participants from all age categories and genders exceeded the recommended daily limit of saturated fats, except for females in the 31-50 years group, who almost reached the recommended level (99.6%).Plus, the saturated fat content of food consumed exceeded 10% of the energy intake for males in the 19-30 years group and almost exceeded this limit for the other groups (>8% for all age groups and genders).
Micronutrient Content of Food Consumed
The micronutrient content of food consumed by the study population is shown in Table 7. Our results showed that participants in all age groups and genders had a low consumption of fat soluble vitamins (A, D, E), especially vitamin D, for which the content in the consumed food did not reach 20% of the RDA. As for vitamin K, the RDA was exceeded by participants in all age groups and genders. Concerning water soluble vitamins, participants in all age groups and genders had a high consumption of all the vitamins except for biotin and B12. In general, the consumption of biotin did not exceed 85% of the RDA in all age groups and genders, and B12 consumption did not exceed 75% of the RDA in females belonging to the 51-64 years age group. In addition, our results revealed that females of reproductive age (belonging to the age groups 18, 19-30, 31-50) did not reach their daily requirements of iron, while females in all age categories had a low consumption of calcium, not exceeding 80% of the RDA in all the age categories. Moreover, our findings showed that the food consumed by Lebanese adults was high in sodium, with levels exceeding 150% of the RDA for all age groups and genders.
Discussion
This is the most updated study undertaken in Lebanon to assess the dietary consumption patterns of adults and report the energy, macro-and micro-nutrients of this consumption.Our results showed a low adherence to the following three different healthy diets: the Mediterranean, the EAT-Lancet, and the USDA diet.In addition, participants, especially women, failed to meet the RDA for many essential vitamins and minerals, notably vitamin D, calcium, and iron.Also, it was shown that Lebanese adults follow dietary patterns that are high in sodium, added sugars, and saturated fats, and low in potassium and MUFAs.Plus, the consumption of refined grains, red meat, and poultry exceeded the recommended amounts when compared to each of the three diets.A low consumption of seafood and nuts was observed which did not reach the recommendations when compared to each of the three diets.Our findings aligned with the results of a previous national study, which showed that Lebanese adults have low adherence to the Mediterranean diet [27].Similarly in Italy [28], a study showed that Italian adult participants had a medium adherence to the Mediterranean diet, highlighting the need for public health policies to improve dietary habits.As for the EAT-Lancet diet, our findings align with a study conducted in Brazil [29] involving adults aged 20 years and above that showed a low adherence to the recommendations of this diet.The low adherence to the healthy and sustainable diets in our study population might be due to the shift in dietary patterns, as shown in a national study published in 2019 regarding the consumption patterns between the years 1997 and 2008/2009, which showed that the adult Lebanese population is shifting towards the Westernized dietary patterns and departing from the traditional Lebanese dietary pattern [30].This low adherence to the healthy diets, particularly the Mediterranean diet, highlights the importance of shifting towards a healthier dietary pattern in this population, which can be achieved in many ways.For instance, Lebanese adults' consumption of fruits, vegetables, nuts and seeds, and whole grains is lower than the recommended levels.In contrast, the consumption of red and processed meat, and sweets and added sugars exceeds the limits by far.Thus, reducing the consumption of sweets and red and processed meats and shifting to consuming more nuts, seeds, fruits, and vegetables can improve this population's adherence to the Mediterranean and other healthy diets.In addition, Lebanese adults can replace red, processed and white meats with seafood, which will increase the consumption of the latter, as it was shown that consumption of fish and seafood, which contain healthy fats, is below the recommendations.These few changes can eventually improve the nutritional status of Lebanese adults, as fruits, vegetables and nuts/seeds/fish are rich in essential vitamins, minerals and healthy fats that are cardio-protective.
Our results showed that the food consumed by Lebanese adults is high in energy, with participants in all age groups and genders (except females in the 31-50 years group) exceeding their EER.This is reflected by most participants (61.9%) being overweight or obese.The following findings reveal that Lebanese adults are at a high risk of developing NR-NCDs, especially as the high-energy diet of this population is also very high in sodium and saturated fats and low in healthy fats (due to the low consumption of fish, nuts, and seeds).For instance, a systematic analysis conducted in 2017 showed that diets high in sodium are the leading cause of global deaths attributable to diet, while diets low in nuts and seeds were the fourth cause of global death attributable to diet, with the most deaths occurring from cardiovascular disease, followed by diabetes [31].This is compounded by a low vitamin D intake for participants in all ages and genders and low iron and calcium intakes in female participants, which puts female Lebanese adults at a high risk of anemia and osteoporosis.For instance, women aged 50 years and above have a four times higher prevalence of osteoporosis and a two times higher prevalence of osteopenia in comparison to men [32], and low calcium intake is considered a modifiable risk factor for osteoporosis [33].In addition, the amount of females of reproductive age (18-49 years) requiring higher amounts of iron and not meeting their requirements from food intake, as revealed in our study, is alarming, especially as 65.6% of the female participants in our study failed to meet their requirements.Concerning vitamin D deficiency, similar findings were found in Libya [34], Egypt [35], Iran [36], Qatar [37], and Saudi Arabia [38], where a high deficiency of vitamin D was prevalent within the populations.As for vitamins E, A, and K, our findings revealed that participants had a low intake of vitamins A and E and a high intake of vitamin K, which is consistent with a study performed in Jordan that showed that vitamin A and E daily dietary intakes were below the RDAs [39].Furthermore, in Greece, a low nutrient intake of vitamin E was found in all age groups [40], while in Kuwait, around 80% of the population was shown to consume less than the RDAs of vitamins A and E [41].Some studies showed different results when it comes to the consumption of vitamins A and E. A study conducted in Egypt revealed that the intakes of both vitamins A and E were within or above recommendations [35].Similarly, in Pakistan, vitamin A intake seemed quite high at the national level [42].The low consumption of vitamin A and E in our study might be due to the reduction in intake of animal products, fruit/vegetables for vitamin A and vegetable oils/nuts for vitamin E. As for vitamin B12, despite the mean intake being 2.9 µg/day, which is higher than the RDA (2.4 µg/day), and despite the mean intake for most age categories exceeding this RDA, more than half of the participants (52.5%) failed to meet this RDA, indicating a low intake in the adult population.Notably, in our study, younger adults (<30 years) were found to significantly consume more sweets and non-alcoholic beverages than older adults.This can be explained by younger generations being more willing to try new and trendy meals compared to older adults, who stick to old traditions [43].
When it comes to consumption patterns between the two genders, disparities may indeed exist.This could be attributed to various factors, such as cultural norms, access to resources, household dynamics, and societal roles.For instance, based on the 'Food and Agriculture Organization of the United Nations' (FAO), denying women's rights is one of the leading causes of food and nutrition insecurity, making women more vulnerable to chronic nutrition and food insecurity [44].Our findings align with studies that tackled gender disparities in food consumption.A study conducted in Bangladesh [45] showed that men had higher food intakes compared to females, as well as higher portion sizes.In addition, in Lebanon, a study showed that males had significantly higher energy intakes than females [46].
Compared to the findings of the nationally representative survey [30] that addressed the adult dietary consumption in the years 2008/2009 (see Table 8), our results showed a shift in the consumption of most of the food groups. For instance, an increased consumption of bread/cereals/grains (228 g/d vs. 317 g/d) and of legumes (from 40.87 g/d) was observed. The consumption of red meat, processed meat, poultry, and fish underwent a slight change, while a significant decrease occurred in the consumption of nuts/seeds (9.44 g/d vs. 5 g/d), sugar sweetened beverages (165.76 mL/d vs. 75.86 mL/d), and alcoholic beverages (12.7 mL/d vs. 0.6 mL/d). This dietary shift in consumption occurred in both genders, with the consumption of a food group either increasing or decreasing simultaneously in both genders, except for vegetables consumption. Overall, the increased consumption of fruits and vegetables and the decreased consumption of alcoholic beverages between the years 2009 and 2022 are considered positive shifts. However, this positive shift is faced by a negative shift characterized by decreased consumption of healthy fats, especially nuts and seeds, and a significant increase in the consumption of sweets, added sugars, sugar sweetened beverages, chips and salty crackers, and bread/cereals/grains, which include many refined items such as pasta, rice, and breakfast cereals. In addition, although the consumption of meat, poultry, and fish remained stable, the consumption of red meat remained higher than the limit set by the healthy diets, while the consumption of fish remained particularly low. This high consumption of red meat is alarming, since red meat is classified as a Group 2A carcinogen, meaning that it is "probably carcinogenic to humans", with a 17% increase in the risk of developing cancer with every 100 g consumed per day [47]. Plus, excess red meat consumption is associated with a higher risk of developing cardiovascular disease, type 2 diabetes and other NR-NCDs [47]. Moreover, the high consumption of sweets, especially in the younger adult generation, accompanied by an increased consumption of refined carbohydrates, predisposes Lebanese adults to increased adiposity levels and eventually overweight and obesity [48]. For instance, sweets and refined carbohydrates are energy-dense and low in proteins, fiber and essential macro- and micro-nutrients, and their high consumption is associated with increased risk of developing cardiometabolic diseases and dental caries, among other issues [48,49]. In short, our findings showed that Lebanese adults, especially women, are at high risk of developing NR-NCDs and hidden hunger.
Strengths and Limitations
This study has the major strength of being the most updated regarding assessing the dietary patterns of Lebanese adults and the corresponding nutritional value of these patterns following the economic crisis, providing valuable insights into the significance of this crucial matter.In addition, the study was performed on a nationally representative sample covering all the Lebanese governorates, which allows for the generalization of the results to the Lebanese adult population.However, this study has some limitations.The data collected in the study relied on self-reported data.This type of data is subject to inaccuracies due to recall bias, which might lead to inaccuracies in estimating portion sizes and under-and/or over-reporting of food consumption.
Conclusions
Our findings highlight unhealthy food consumption patterns in the Lebanese adult population characterized by high sodium, added sugars, and saturated fat intake, as well as low intakes of healthy fats, essential vitamins, and minerals, with the consumption of sweets and added sugars doubling compared to the year 2009.In a country where 91% of all deaths are attributed to NCDs [50], and where the prevalence of NR-NCDs is rising, these results are alarming, as the current dietary pattern has put the Lebanese adult population at high risk of developing NR-NCDs, NR-NCD complications, and hidden hunger.The findings of our study thus call for public health policies and interventions that allow for the adoption of healthy food choices and the shift towards healthier diets, such as the Mediterranean, the EAT-Lancet, or the USDA diet, which are proven to be healthy diets.This might require changes across the food system to focus on promoting healthier diets and ensuring their affordability, availability, accessibility, and acceptability for all [51].
Figure 2 .
Figure 2. Comparison of the current food consumption to three different diets.
Table 1 .
Demographic and socioeconomic characteristics of the study population, overall and by gender.
Table 2 .
Health characteristics of the study population, overall and by gender.
Table 3 .
Food groups consumption by Lebanese adults, by age category.
Table 4 .
Food groups consumption by Lebanese adults, overall and by gender.
Values are Mean ± Standard Deviation. * p-value < 0.05 is significant. a Beverages presented in mL/day.
3.2.2. Comparison of Food Groups Consumed to the Mediterranean, the EAT-Lancet, and the USDA Diet Recommendations
Table 5 .
Energy content of food consumed by Lebanese adults and the corresponding percentage contribution to the estimated energy requirement per age and per gender.
Table 6 .
Macronutrient content of food consumed by Lebanese adults and the percent contribution to daily value, per age and per gender.
Table 7 .
Micronutrient content of food consumed by Lebanese adults and the percent contribution to daily value per age and gender.
Abbreviations: AI Adequate Intake, d day, DRI Dietary Reference Intake, DV Daily Value, g grams, n number of participants, RDA Recommended Dietary Allowance, SD Standard Deviation.* Values are AIs.
Table 8 .
Comparison of the current adult food-group consumption to the consumption of the year 2009.
* Values in mL/day.
|
2024-06-08T15:06:23.238Z
|
2024-06-01T00:00:00.000
|
{
"year": 2024,
"sha1": "4b1ed900ce085867762175109907591eafe1583b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/16/11/1784/pdf?version=1717662921",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9016daa14f3218cb2e4ac4c0aaf5e757f2043b2c",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
}
|
55597509
|
pes2o/s2orc
|
v3-fos-license
|
Analysis of the Generating and Influencing Factors of Vertical Cracking in Abutments during Construction
In order to analyze the causes of cracking in abutments subject to concrete shrinkage and temperature variation during the construction process and to determine factors affecting the mechanical properties of the abutment, nonlinear calculations capturing abutment behavior are conducted with Midas/FEA software. Using these calculations, the cracking mechanism is identified, and the influence of the evaluated factors is analyzed. It is concluded that the deformation between the pile cap and abutment backwall as constrained by a pile foundation when subjected to concrete shrinkage and temperature changes is the basic cause of abutment cracks during construction; these cracks form over the piles and develop upward. For a given reinforcement ratio, the distribution of horizontal crack-control steel using small, closely spaced bars is more beneficial. When pile-bearing capacity meets the standard, the width of the generated cracks tends to decrease with the decrease in the diameter of the piles. +e existence of a postcast strip in the abutment backwall also contributes to the decrease in the depth of the crack. Finally, the impact of age difference between the pile cap concrete and abutment backwall concrete on cracking is inconsequential.
Introduction
The typical bridge abutment is composed of a pile foundation, pile cap, abutment cap and backwall (including bracket), and wingwall. The construction of an abutment must be conducted in a sequence: the pile foundation is installed first, then the pile cap, and then the abutment cap and backwall. Therefore, there are significant age differences between members that constitute the abutment, and vertical cracks may appear in the backwall above the piles owing to this age difference, shrinkage forces, or temperature forces created, in part, by the generation of hydration heat during concrete curing. Therefore, strength monitoring is vital to ensure the safe construction of an abutment and to determine its readiness for service after it has been cast.
To enable the effective monitoring of cast concrete, Munuswamy and Sivakumar [1] analyzed the force conditions in concrete under abutment restraint conditions when subject to the time-dependent effects of creep and shrinkage, relaxation of prestressing steel, and temperature gradient.To accomplish this, taking the abutment-pile-soil system into account (using discrete spring stiffness to describe the translational and rotational degrees of freedom), concrete cracking was initiated by the application of lateral load, and the behavior of the piles supporting the abutment was evaluated and presented in terms of lateral deflection, bending moment, shear force, and stress along the pile depth.
Wang and Zhou [2] highlighted the cracking associated with tall abutments as a critical issue in highway construction. Four U-shaped abutment models were built using clay bricks to provide a contrast analysis of failure mechanism: three models with different arrangements of structural stiffeners and one model of a common abutment. These abutment models were loaded by filling them with soil and applying an external vertical load on the surface of the soil. It was found that the stiffeners had an obvious effect on the cracking of the abutment model, resulting in higher cracking loads compared to that of the common abutment.
To evaluate the behavior of different components of a bridge structure under the effect of temperature variation, Shoukry et al [3] developed a detailed instrumentation system for a 3-span continuous steel girder bridge with integral abutments. The instrumentation plan was developed to provide continuous monitoring of the triaxial state of strain, temperature gradient, crack opening, and relative movement between substructure elements. The results indicate that the motion of the substructure owing to the expansion or contraction of the bridge superstructure is partially restrained, generating thermal stresses in the girders and the deck that are not currently accounted for in bridge design.
Abutment behavior was modeled by Yu et al [4], who performed a finite element analysis of an abutment on the Xiangshan Harbor Highway Bridge in Ningbo. In order to understand the effects of the hydration heat of mass concrete on surface cracking in the early stage of casting, the effects of four parameters, concreting temperature, environmental temperature, material of the insulating layer, and constraint conditions, on surface stress were analyzed. The results reveal that tensile stress on the surface of the concrete is proportional to the casting temperature while inversely proportional to the environmental temperature. Heat preservation was found to reduce the surface tensile stress more efficiently in the center than at the edge of the abutment, and increasing the stiffness of the concrete formwork was found to effectively prevent cracks.
Flaga et al. [5] presented two approaches to investigating thermal shrinkage stresses and cracking risk in early-age concrete structures: a simplified engineering method and a 3D numerical model.e results of these stress analyses were compared and validated against the behavior of a real abutment upon which cracking had been observed in the early phases of construction.
Although the researchers in [1][2][3][4][5], as well as many others [6][7][8][9][10], have studied and analyzed the stress conditions of bridge abutments under the impact of temperature, live load, and creep and shrinkage, further efforts must be made to investigate the stress mechanism of abutment cracking during construction as a result of hydration heat, the cooling effect from changes in atmospheric temperature, the shrinkage difference between the concrete of the pile cap and the abutment body, and constraints applied by the pile foundations.Notably, few studies have analyzed other factors influencing the formation of abutment cracks, such as the arrangement of lateral anticrack reinforcement, the rigidity of the pile foundation, the difference in the age of the concrete of the pile cap and the abutment body, and the presence of a postcast backwall strip.
In order to determine the effects of these factors, this paper employs the 3D finite element analysis software package Midas/FEA to analyze a 3D finite element nonlinear model of the flyover located at Cuixiang Road, Dongguan, to determine the stress mechanism behind abutment cracking observed during construction. The model was also used to analyze the influence of different factors, such as the lateral anticracking reinforcement, the pile foundation rigidity, the difference in the age of the pile cap and the abutment backwall concrete, and the inclusion of a postcast strip on the stress in the abutment. This information was then used to provide suggestions for preventing and controlling the formation of vertical cracks on bridge abutments during construction.
Overview of Abutment.
The flyover at Cuixiang Road, Dongguan City of China, consists of a 30 m long continuous prestressed concrete box girder bridge with 8 m long approach slabs and D80 expansion joints on both ends. The 19 m wide bridge is designed for Class-I highway loads and is equipped with 19 m wide straight abutments, as shown in Figure 1.
Under the pile cap, there are eight D130 friction-type piles in two rows. The pile cap is 1.5 m deep, 6.3 m in the longitudinal direction of the bridge, and 19.0 m in the transverse direction of the bridge. The abutment backwall is 100 cm thick and 1.6 m high, cast with lateral C16 anticracking reinforcement spaced at 15 cm and C12 stirrups spaced at 10 cm. The abutment cap is 80 cm high and 175 cm thick. Finally, the abutment is flanked by 50 cm thick wingwalls erected along the abutment body. The concrete for the piles, pile cap, abutment backwall, and wingwalls was made of ordinary Portland cement, whose mixing proportion with a water temperature of 20 °C is shown in Table 1.
Abutment Cracking.
The concrete of the pile cap was poured on January 3, 2017, while that of the abutment backwall was poured on February 12, constituting an age difference of 40 days. Several vertical cracks were spotted on the abutment backwall on February 28, dominated by 4 major cracks symmetrically distributed around the longitudinal center line of the bridge, as shown in Figure 2. The cracks were about 2 m long, with the middle two cracks extending from the pile cap all the way to the bearing seats on the backwall.
According to field measurements, cracks L1 and R1, indicated in Figure 2, had a width of around 0.4 mm, while cracks L2 and R2 were as wide as 0.5 mm.A core sample was collected from crack L1, revealing that the crack continued through nearly the entire thickness of the pile cap (Figure 3).
Finite Element Analysis Model.
The 3D analysis model of the subject abutment had 31,599 nodes and 25,846 elements.
The constraining effect of the pile-soil interaction was taken into consideration by applying a soil spring to the sides of the piles in conjunction with constraints at the pile tips. The 3D finite element model of the abutment is illustrated in Figure 4(a). In view of the inhibiting effects of internal reinforcement on crack development, the reinforcement scheme of the abutment was included in the model (Figure 4(b)).
Hydration Heat Model and Atmospheric Temperature Model.
The cracks in a concrete structure can be classified into two types on account of the crack cause: the first type of crack is referred to as a structural crack, caused by external load or internal structural force, while the second type is referred to as a strain crack, appearing under such classic factors as temperature [11], shrinkage, creep, expansion, humidity, and differential settlement of the foundation.
The load of the bridge superstructure was not yet acting on the abutment when the cracks in the abutment of the subject bridge appeared during the curing of the concrete. As the construction process was determined to be consistent with relevant laws and regulations, the cracks did not appear as the result of construction loads, and they cannot be structural cracks. Hence, they must be strain cracks, likely caused by some type of deformation. Field investigations excluded differential settlement of the foundation as a cause of cracking, indicating that the cracks were likely caused by concrete shrinkage or differential displacement resulting from temperature changes due to the atmosphere and the heat of hydration.
When pouring the abutment, the pile cap had been aged 40 days, so this paper focused on the hydrothermal reaction of the abutment body that occurred from February 12 to February 28. The mixing ratios of the concrete of the abutment, pile cap, and pile foundation are shown in Table 1; thus, the specific heat C is 1.176 kJ/(kg·°C), and the thermal conductivity λ is 12.42 kJ/(m·hr·°C).
(2) Model of the Atmospheric Temperature. The model of the atmospheric temperature from February 12 to February 28 was a sine function, whose peak was above 16 °C and valley value was 12 °C. The model is shown in (1), and its trend is shown in Figure 5.
where t is the time (hr) and T(t) is the atmospheric temperature corresponding to t hours (°C).
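As a minimal sketch of how such an atmospheric temperature model could be evaluated, the Python snippet below assumes a daily (24 h) cycle between the stated valley of 12 °C and peak of 16 °C; the period and phase are assumptions, since equation (1) is not reproduced in the extracted text.

```python
import numpy as np

# Hypothetical sketch of the sine-type atmospheric temperature model described above.
# A 24 h period and zero phase are assumed; the paper's equation (1) may differ.
T_MEAN = 14.0      # (12 + 16) / 2, deg C
T_AMPL = 2.0       # (16 - 12) / 2, deg C
PERIOD_HR = 24.0   # assumed daily cycle

def atmospheric_temperature(t_hr):
    """Atmospheric temperature (deg C) at t_hr hours after the start of curing."""
    return T_MEAN + T_AMPL * np.sin(2.0 * np.pi * t_hr / PERIOD_HR)

# Example: hourly temperature history from February 12 to February 28 (16 days).
t = np.arange(0.0, 16 * 24.0, 1.0)
T = atmospheric_temperature(t)
```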
(3) The Convection Characteristics for the Concrete Material.
As it is buried in bedrock all year round, the pile foundation was treated at a fixed temperature complying with the bedrock. All nodes on the pile surface were given a fixed temperature of 20 °C. For the pile cap and abutment, both surfaces were exposed to the atmosphere, so a solid layer with a thickness of 1.5 cm was established on the surface of the pile cap and abutment, which allows heat exchange between the pile cap, abutment, and atmosphere. "Exposed surface" was selected as the convection model, and the convection coefficient was a constant (50.23 kJ/(m²·hr·°C)).
(4) Model of the Heat Source. The model of the heat source mainly considered the hydration reaction of the abutment body.
When treated as a heat source, the abutment body was considered without exchanging heat with the atmosphere, that is, under "adiabatic temperature rise." The adiabatic temperature rise of concrete hydration heat is closely related to the concrete mixing ratio and the water temperature for mixing. The concrete of the abutment was ordinary Portland cement, with the concrete mix proportion shown in Table 1. Taking into account the concrete mix proportion and the water temperature for mixing, we obtained a proper model of adiabatic temperature rise for the abutment body by checking the Japanese norm JSCE-SSCS 2002 [12], as shown in the following formula: where Q_int is the highest adiabatic temperature rise for the abutment concrete (°C), r is the thermal conductivity coefficient (m²/hr), t_0 is the starting time of hydration heat for the concrete (day), t is the cumulative time of hydration heat for the concrete (day), and Q(t) is the adiabatic temperature rise for the concrete accumulated over t days (°C).
According to the actual mix proportion of the abutment concrete, we determined the value of the parameters shown in Table 2.
Through the relevant parameters in Table 2, we obtained the model of adiabatic temperature rise for the abutment body, as shown in Figure 6.
Based on the hydration heat model discussed earlier and the atmospheric temperature model, hydrothermal analysis was executed using FEA, and the following results were obtained.
The hydration heat of the concrete began with the start of the abutment pouring. Thus, taking into account the model of the atmospheric temperature and the bedrock temperature, the highest temperature changes over the time of hydration heat for the pile foundation, pile cap, and abutment are shown in Figure 7. Figures 8(a)-8(f), corresponding to six representative time series, show the temperature of the abutment profile.
As seen from Figure 7, the temperature of the abutment body rose remarkably from 20 °C to 36 °C within three days after the body was poured (February 12 to February 15), and then the temperature began to drop. When cracks went through the body of the abutment (February 28), its temperature was 14.4 °C. As it was buried in bedrock all year round, the temperature of the pile foundation was basically consistent with the bedrock, at about 20 °C. Because its surface was mostly exposed to the atmosphere, the temperature of the pile cap fluctuated between 12 and 14 °C. However, due to the effects of the abutment body and pile foundation temperature, the temperature of the junction area between them varied differently. In summary, the temperature model of the pile foundation, pile cap, and abutment body can confirm the accuracy of the hydration heat model for the abutment body. The crack coefficient i was used to predict whether temperature cracks occur, i = f_t / σ_T, where f_t is the tensile strength of concrete (MPa) and σ_T is the temperature stress (MPa).
According to the hydrothermal analysis using FEA, the crack coefficient of the abutment can be obtained, as shown in Figure 9.
When the i value is greater than 1.5, cracks can be prevented.Figure 9 shows the minimum i value of 1.1, which was mainly concentrated in the area of the abutment body above pile foundations; that is, the construction cracks will be generated in these areas.
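A minimal sketch of this crack-coefficient check follows, assuming i = f_t/σ_T as implied by the definitions above; the nodal temperature-stress values used are purely illustrative.

```python
import numpy as np

# Hedged sketch of the crack-coefficient screening, assuming i = f_t / sigma_T.
F_T = 2.01            # standard tensile strength of the abutment concrete, MPa
I_THRESHOLD = 1.5     # construction cracks are expected where i falls below 1.5

def crack_coefficient(sigma_T):
    """Crack coefficient i at each node, given the nodal temperature stress (MPa)."""
    return F_T / sigma_T

sigma_T = np.array([0.9, 1.4, 1.83, 2.2])   # illustrative nodal temperature stresses
i = crack_coefficient(sigma_T)
at_risk = i < I_THRESHOLD                   # True where temperature cracks are likely
```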
Load Calculation.
In order to study the generation mechanism and development process of the vertical cracks in the abutment and use them as the basis for the parametric analysis of the abutment, it was not enough to rely on the hydrothermal analysis using FEA alone. In this paper, a nonlinear analysis of the abutment was carried out on the basis of the results of the hydration analysis considering the atmospheric temperature.
According to the construction record, the vertical cracks in the abutment body are through cracks, and through cracks are often caused by deformation as the mass concrete gradually cools after its hydration heat has developed to a certain extent. Therefore, in the nonlinear analysis process, the load was "cooling," mainly including the following cooling models.
(1) Equivalent Temperature Difference of Shrinkage. In order to compare the effects of shrinkage with the effects of hydration heat and atmospheric temperature, in this study, shrinkage is converted to an equivalent temperature differential. The shrinkage is first calculated in accordance with JTGD62-2012 Code for Design of Highway Reinforced Concrete and Prestressed Concrete Bridges and Culverts [13], in which the concrete shrinkage strain can be calculated using the following equations: where t is the concrete age at the moment of consideration (day); t_s is the concrete age when the shrinkage begins (day), set at 3 d; ε_cs(t, t_s) is the shrinkage strain value at t and t_s; ε_cso is the nominal shrinkage coefficient, which is given in Table 3; β_s(t − t_s) is the shrinkage development factor; h is the theoretical thickness of the component under consideration (mm), where h = 2A/u when A is the cross-sectional area of the component (mm²); and u is the peripheral length of the component in contact with the open air (mm). In Table 3, RH is the annual average relative humidity in the atmosphere, and in this case, RH is 70% to 99%; thus, ε_cso is 3.1 × 10⁻⁴.
As stated before, there were nearly 40 days between the casting of the pile cap and the casting of the backwall. The concrete in the pile cap and abutment backwall was believed to begin shrinking 3 days after the pouring of each component was completed. The pile cap concrete was 55 days old and the abutment backwall concrete was 16 days old when the backwall first showed cracks. The primary parameters for determining the shrinkage strain in the pile cap and abutment body with no constraints at the time of cracking were calculated according to (4) and (5) and are listed in Table 4.
Assuming the linear expansion coefficient of concrete, α, to be 1.0 × 10⁻⁵/°C, the equivalent temperature of differential shrinkage between the pile cap and abutment backwall concrete, T_s, can be determined using the following formula:
(2) The Abutment Cooling ΔT_i on Account of Hydration Heat and Atmospheric Temperature. Taking into account the influence of atmospheric temperature on the abutment body, the concrete hydration heat, having developed to its peak, will gradually cool down. The variation of the abutment body temperature with the time of hydration heat had been calculated, as shown in Figure 8. The node temperature of the abutment body varies with the coordinates of the nodes in Figure 8. For node i, the maximum temperature was recorded as T_imax, the corresponding minimum temperature was recorded as T_imin, and the temperature drop ΔT_i was T_imin − T_imax.
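The following sketch illustrates, under stated assumptions, how the equivalent shrinkage temperature T_s and the nodal cooling load ΔT_i + T_s described above could be assembled; the shrinkage strains themselves would come from the code formulas, which are not reproduced here, and both the sign convention and the example strain values are assumptions chosen only to reproduce the reported magnitude of T_s.

```python
import numpy as np

ALPHA = 1.0e-5  # linear expansion coefficient of concrete, 1/degC

def equivalent_shrinkage_temperature(eps_cs_backwall, eps_cs_cap):
    """Equivalent temperature difference T_s (deg C) from the differential
    shrinkage strain between abutment backwall and pile cap (sign convention assumed)."""
    return -(eps_cs_backwall - eps_cs_cap) / ALPHA

def nodal_cooling_load(T_history, T_s):
    """Comprehensive cooling DT_i + T_s for every node.
    T_history has shape (n_steps, n_nodes): hydration-heat temperature of each node."""
    dT = T_history.min(axis=0) - T_history.max(axis=0)  # DT_i = T_imin - T_imax (negative)
    return dT + T_s

# Illustrative example: two nodes, three time steps; strains picked so T_s is about -0.31 C.
T_hist = np.array([[20.0, 20.0],
                   [35.5, 30.0],
                   [14.0, 15.0]])
T_s = equivalent_shrinkage_temperature(eps_cs_backwall=1.151e-4, eps_cs_cap=1.120e-4)
load = nodal_cooling_load(T_hist, T_s)   # cooling applied to each abutment node
```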
Calculation Conditions.
Two conditions were evaluated to determine the cause of cracking in the subject abutment, as described in this section.
Condition 1.
The equivalent temperature difference of concrete between the abutment body and pile cap, T_s.
This condition described the effect of differential concrete shrinkage between the abutment backwall and the pile cap. As determined in the calculations of Section 2.2.2(1), the equivalent temperature differential describing this concrete shrinkage, T_s, was −0.31 °C, and it was applied to the abutment body. Considering the constraint of the pile foundations, the model of the pile cap, abutment body, and pile foundations was established using Midas/FEA.
Condition 2.
The abutment cooling on account of hydration heat and atmospheric temperature, ΔT_i, plus the equivalent temperature difference of concrete between the abutment body and pile cap, T_s.
This condition considered the comprehensive temperature cooling, including the temperature cooling ΔT_i of the abutment body as a result of hydration heat and atmospheric temperature and the equivalent temperature T_s on account of the concrete shrinkage difference between the abutment body and pile cap. Simultaneously, the constraint of the pile foundation was considered.
Considering the model of atmospheric temperature, we extracted the cooling data ΔT_i of all nodes (i = 1–50,946) of the abutment body and added the equivalent temperature T_s. In the nonlinear analysis, the comprehensive temperature cooling ΔT_i + T_s was applied to each node of the abutment body according to its node number.
Constitutive Tensile Relationship of the Concrete Material.
The Midas/FEA analysis accounts for the nonlinear properties of the concrete material in order to determine the causes of the observed cracking. The concrete material parameters adopt the total strain crack model, which evaluates crack behavior according to either a fixed crack model or a rotating crack model. In the fixed crack model, once determined, the cracking direction does not change in any case, while for the rotating crack model, the cracking direction changes with the change in the direction of principal stress. Although the former can accurately describe the physical properties of a crack in detail, when a structure with a fixed orthogonal crack is compared with a structure with a nonorthogonal crack, the structural rigidity and strength are overestimated. In comparison, the rotating crack model requires no knowledge of past cracking states, so its calculation process is simpler and more convergent. Because of these merits, the rotating crack model has long been applied to the nonlinear analysis of reinforced concrete structures. Accordingly, the rotating crack model was chosen for use in this study as well. The total strain crack model adopts a linear softening model that exhibits softening when the tensile strength of the base material is exceeded, as shown in Figure 10.
The major constitutive equations for the linearized softening tensile model are shown below: where G_f indicates the tensile fracture energy of concrete, E indicates the elastic modulus of concrete, h denotes the crack width, f_t denotes the tensile stress in the concrete, and ε_u,min^cr denotes the minimum ultimate crack strain.
Bonding Model of Reinforced Concrete.
Considering that the slippage between steel and concrete in a reinforced concrete structure is affected by cracks, a bond-slip model can be used to simulate this effect. The FEA software uses a constitutive model based on the theory of total mass to describe the bond-slip model. Assuming that the normal direction of the materials is linear elastic and the tangential direction is nonlinear, the bond-slip model can be described as follows: Differentiating the right sides of (9) and (10) with respect to the relative displacement, the tangent stiffness is as follows:
Analysis of Crack Cause.
The Midas/FEA software package was used to create a 3D nonlinear finite element analysis model of the subject abutment. Under Condition 1, the focus of the research was on the effect of the difference in concrete shrinkage between the abutment backwall and the pile cap on the stress in the abutment backwall. Under Condition 2, attention was paid to all potential crack-influencing factors: the atmospheric temperature changes, the hydration heat of the concrete, and the equivalent temperature of the concrete shrinkage difference. The major results and analysis are as follows.
Condition 1.
Under this condition, the effect of the difference in concrete shrinkage between the abutment backwall and the pile cap on the abutment is considered in isolation. Meanwhile, this model took the constraint of the pile foundations into account. When the equivalent differential shrinkage temperature T_s = −0.31 °C is applied to the abutment backwall, the lateral stress distribution shown in Figure 11 is obtained.
Figure 11 depicts the distribution of the lateral normal stress in the abutment when the constraints of the pile foundation on the abutment and pile cap were taken into consideration, and the lower middle portion of the abutment was the most stressed area, with a maximum tensile stress of about 0.45 MPa, which is lower than the standard tensile strength of 2.01 MPa for abutment concrete. As a result, no cracks appeared on the abutment for this equivalent temperature change, which corresponds to the differential shrinkage between the pile cap and abutment.
Condition 2.
The condition of temperature cooling ΔT_i + T_s acted on the abutment body. Considering the constraint of the pile foundations, the following calculation results were obtained from the Midas/FEA analysis.
Taking the change of the maximum temperature of the abutment body T_max with the hydration heat time of the abutment body t as a reference, the development of cracks in the abutment was described as in Figure 12. When T_max decreased from 35.5 °C to 18.2 °C (i.e., 10 days after abutment pouring), the abutment began to form vertical cracks at the top of both sides of the pile foundations, namely, cracks "L1" and "R1." As the maximum temperature of the abutment body T_max was further reduced to 15.9 °C (i.e., 11 days after abutment pouring), the abutment began to form vertical cracks at the top of the middle pile foundations, namely, cracks "L2" and "R2," whose widths were 0.238 mm, as shown in Figure 13(b).
When the maximum temperature of the abutment body T_max was reduced to approximately 14 °C (i.e., 14 days after the abutment pouring), the four main cracks had penetrated the abutment body. The cracks "L1" and "R1" were 0.365 mm wide and 1.74 m high, and the cracks "L2" and "R2" were 0.384 mm wide and 1.85 m high, as shown in Figure 13(c).
At this moment, the L1, L2, R1, and R2 cracks were nearly the same as the vertical cracks observed in the abutment of the flyover at Cuixiang Road. In view of the computational results of Conditions 1 and 2, it can be concluded that the difference in concrete shrinkage between the abutment backwall and pile cap is not the primary cause of the vertical cracks in the abutment backwall. Instead, the combined effects of hydration heat, atmospheric temperature differential, and concrete shrinkage differential cause a deformation in the abutment backwall to occur. This deformation is constrained by the pile foundation, causing the major cracks to proceed vertically upward from the top of the pile cap.
Analysis of Crack-Influencing Factors.
The analysis in Section 3.1 reveals the stress mechanism causing vertical cracks in the abutment backwall under secondary effects such as concrete shrinkage and temperature change. In order to prevent the appearance of these vertical cracks, the authors analyze different factors affecting the behavior of the abutment backwall, including horizontal crack-control reinforcement, pile foundation rigidity, the concrete age difference between the pile cap and abutment backwall, and the inclusion of a postcast strip.
Horizontal Crack-Control Reinforcement.
Keeping the reinforcement ratio and the 20 cm stirrup interval fixed, the change in abutment backwall cracking with the change in diameter and interval of the horizontal crack-control reinforcement was evaluated. To this end, four reinforcement schemes using reinforcing diameters from 10 to 16 mm and spacing intervals from 10 to 20 cm, shown in Table 5, are compared.
The four reinforcement schemes are calculated under Condition 2, with the results provided in Figure 14.
The maximum crack widths corresponding to reinforcement schemes I, II, III, and IV shown in Figure 14 are summarized in Table 6.
The data provided in Table 6 suggest that, for the same reinforcement ratio, the maximum crack width increases with the increasing diameter and interval of the horizontal crack-control reinforcement. In other words, the arrangement of horizontal crack-control reinforcement in a "thin but dense" manner most successfully controls the appearance of vertical cracks on the abutment during construction. For the subject abutment, when the reinforcement ratio is held constant, the use of a "thin but dense" arrangement (C10 reinforcement at 10 cm intervals) can reduce the crack width by 29.3% when compared with a "thick but sparse" arrangement (C16 reinforcement at 20 cm intervals). The results are provided in Figure 15, and the calculated crack widths in the abutment backwall for each pile size are summarized in Table 7.
Pile Foundation
Clearly, the rigidity of the piles has a significant effect on the stress in the abutment during construction. As can be seen in Table 7, as the rigidity of the piles increases with increasing pile diameter, so does the constraining effect on the abutment backwall. As a result, the width of the vertical cracks in the abutment backwall increases with the increase in pile diameter.
Inclusion of a Postcast Strip.
In order to examine the effect of the postcast strip on crack development, a 70 cm thick postcast strip was placed in the middle of the abutment backwall model. The concrete construction procedure was assumed as follows: (1) erect the reinforcement cage for the pile cap and abutment backwall and then cast the pile cap; (2) complete the reinforcement for the left and right sides of the abutment and cast the abutment backwall on the left and right sides, leaving the reinforcement in the postcast strip location exposed; (3) pour the concrete to form the left and right abutment caps; and (4) connect the exposed reinforcement in the postcast strip and cast the strip concrete 60 days later.
The Midas/FEA software package was used to carry out a nonlinear analysis of the abutment during construction under Condition 2, yielding the results shown in Figure 16.
A comparison of the results shown in Figure 16 for an abutment including a postcast strip with those shown in Figure 13(c) for an abutment with no postcast strip reveals a significant decrease in the crack width when a postcast strip is used. The maximum crack width in an abutment without a postcast strip is 0.384 mm, while in an abutment with a postcast strip, the crack width is 0.252 mm, representing a 34.4% reduction in addition to a significant reduction in the crack height.
Difference in Pile Cap and Abutment Backwall Concrete Age.
The concrete of the abutment backwall was not poured until 40 days after the pouring of the pile cap. According to the results computed in Section 2.2.2(1), the equivalent temperature change of the concrete shrinkage differential between the pile cap and the abutment backwall is T_s = −0.31 °C.
When the constraining effects of the piles are taken into consideration after the equivalent shrinkage temperature drop acts on the abutment backwall, the maximum normal stress on the abutment backwall is only 0.45 MPa (Figure 11), which is far lower than the standard tensile strength of 2.01 MPa of the abutment body concrete. As a result, the contribution of the difference in age of the pile cap and abutment backwall concrete to the vertical cracks in the abutment backwall may be safely ignored.
Conclusions
In order to determine the causes of vertical cracks observed in an abutment backwall during the construction of the flyover located at Cuixiang Road, a total strain crack model was used to create a 3D finite element nonlinear analytic model of the abutment backwall, pile cap, and foundation. In the process, the stress mechanisms in the abutment body under the effects of concrete shrinkage and various temperature changes are summarized, and influencing factors on the abutment are analyzed, including the parameters of horizontal crack-control reinforcement, the pile rigidity, the difference in age of the pile cap and abutment backwall concrete, and the inclusion of a postcast strip in the abutment backwall. The following conclusions were obtained:
(1) The total strain crack model created using Midas/FEA can effectively analyze the cracking behavior of concrete with outstanding convergence, direct posttreatment, and output of critical data such as crack width.
(2) The major reason for the appearance of vertical cracks in an abutment is that, under the joint effect of concrete shrinkage and temperature change, deformation appears between the pile cap and the abutment backwall but is constrained by the stiffness of the piles, so that cracks appear in the abutment backwall above the piles and develop vertically.
(3) The abutment backwall should be provided with reinforcement according to the design load requirements. Holding the reinforcement ratio for the horizontal crack-control reinforcement constant, a "thin but dense" arrangement is more favorable for controlling the growth of vertical cracks on the abutment backwall.
(4) While meeting the basic design load requirements, the diameter (and concomitant stiffness) of abutment-supporting piles should be reduced, if possible, to alleviate the constraining effect of the piles on the deformation of the abutment backwall under concrete shrinkage and temperature change.
(1) Thermal Property for the Concrete Material. The heat conduction in the concrete unit was determined by the
(a) Photo of the crack (b) Drilled core
Figure 4 :
Figure 4: Three-dimensional finite element model of the subject abutment for Condition 2.
Figure 7 :Figure 6 :
Figure 7: Change of abutment temperature with time of hydration heat.
Figure 9 :
Figure 9: Temperature crack coefficient of the abutment.
Figure 13 :
Figure 13: Crack development process and crack width in the abutment.
Figure 12 :
Figure 12: Main crack state corresponding to hydration time of the abutment body.
Figure 14 :
Figure 14: Crack widths in the abutment for different reinforcement schemes (mm).
Rigidity. According to the stress mechanisms in the abutment backwall during construction, the backwall tends to deform under the effect of secondary factors such as concrete shrinkage and temperature change, but this deformation is constrained by the pile foundation, resulting in the formation of vertical cracks on the abutment above the pile locations. Therefore, it is necessary to analyze the effect of pile rigidity on the stress in the abutment. Piles with diameters of 1.0 m, 1.1 m, 1.2 m, 1.3 m, and 1.4 m were evaluated to provide a comparative analysis under Condition 2.
Table 1 :
Mix proportion of abutment concrete.
Table 2 :
Related parameters of adiabatic temperature rise for abutment concrete.
Table 4 :
Concrete shrinkage parameters and strain.
Table 6 :
Maximum crack widths in the abutment under different reinforcement schemes.
|
2018-12-05T20:34:54.021Z
|
2018-03-19T00:00:00.000
|
{
"year": 2018,
"sha1": "987dcab1543616a21a791102c258a84069427c00",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/amse/2018/1907360.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "987dcab1543616a21a791102c258a84069427c00",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
268620285
|
pes2o/s2orc
|
v3-fos-license
|
Graph Convolutional Multi-Mesh Autoencoder for Steady Transonic Aircraft Aerodynamics
. Calculating aerodynamic loads around an aircraft using computational fluid dynamics is a user’s and computer-intensive task. An attractive alternative is to leverage neural networks bypassing the need of solving the governing fluid equations at all flight conditions of interest. Neural networks have the ability to infer highly nonlinear predictions if a reference dataset is available. This work presents a geometric deep learning based multi-mesh autoencoder framework for steady-state transonic aerodynamics. The framework builds on graph neural networks which are designed for irregular and unstructured spatial discretisations, embedded in a multi-resolution algorithm for dimensionality reduction. The test case is for the NASA Common Research Model wing/body aircraft configuration. Thorough studies are presented discussing the model predictions in terms of vector fields, pressure and shear-stress coefficients, and scalar fields, total force and moment coefficients, for a range of nonlinear conditions involving shock waves and flow separation. We note that the cost of the model prediction is minimal having used an existing database.
Introduction
Aerodynamic analyses in many engineering sectors, from aerospace to automotive, remain computationally expensive with high-fidelity CFD.The need to provide physical insights at very small scales around a complex geometry with limited resources contrasts with the ever pressing project deadlines.This motivates the investigation of ROMs for accurate approximations of the physical system via numerical techniques of reduced computational complexity.The recent breakthrough in ML has prompted the development of new methodologies particularly suited for ROMs.Modern, sophisticated ML algorithms are attractive for approximating highly complex and nonlinear systems from data.For example, Massegur et al [1] leveraged ML to predict the formation of ice on a wing section and the resulting aerodynamic degradation across a range of freezing conditions.Furthermore, Massegur et al [2] developed an ML based aerodynamic model coupled with a structural model for aeroelastic and flutter search analyses.Herein, we focus on three-dimensional (3D) geometries and we address the added complexities compared to two-dimensional (2D) cases, which are around ten times more frequent in the literature.Regarding the physics, 3D problems present more abundant nonlinear flow features and interactions [3,4], which include cross-flow interactions, wing-tip vortices and separation bubbles, among others.ROM approaches developed for 2D problems [2], featuring direct prediction of aerodynamic forces, are questionable to address the additional intricacies that arise in 3D problems.To this goal, we use deep learning NNs [5] to model distributed aerodynamic quantities on the aircraft surface.
The primary challenge in reduced-order modelling of 3D aerodynamic fields is the large dimensional space, reflecting the numerical discretisation of the computational domain [6].Common nonlinear ROM approaches, including Kriging [7] and DNNs [8,9] result ill-suited.To address the problem of poor scalability, CNNs are better suited for flow-field analyses because they are designed to extract spatial features from digital images [10][11][12].The problem is that CNNs are not directly suitable for unstructured meshes typical of CFD applications because of their limitation to Euclidean domains (Cartesian grids).Interpolating data to a regular grid is essential, but it can lead to additional errors and loss of resolution in refined regions.Alternatively, CNN kernels of size 1 can be used as these are applicable to any mesh arrangement, as implemented by Immordino et al [13] or Sabater et al [14].The issue with this approach is that the mesh connectivity is unused, resulting in simpler cloud-point modelling.The absence of propagation of information across the mesh limits the capability to capture coherent flow features and reconstruct continuous solutions.To leverage connectivity throughout the mesh, geometric deep learning [15] offers a range of convolutional methods, known as GNNs.GNNs are capable of processing graphs or manifold-unstructured meshes, which makes them a better fit for CFD applications that deal with non-Euclidean data [16].Examples of GNNs for steady-state aerodynamics are not abundant.Ogoke et al [17] addressed aerodynamic solution of a 2D aerofoil rather than a more relevant test case.Baqué et al [18] projected the 3D geometry to a simpler prismatic graph to contain the computational burden.On the other hand, comprehensive message-passing methods, as adopted by Hines and Bekemeyer [19] or Han et al [20], which feature dedicated learnable weights for each edge and node of the mesh, can lead to excessive memory requirements.
To maximise ROM computational efficiency when dealing with large spatial domains, dimensionality reduction to compress the domain size is crucial.The classical POD [21,22] is only limited to linear projections of the data onto the compressed space, causing general loss of information in highly complex physics.AEs [23][24][25][26], which are the NN alternative to the POD, are a better alternative allowing a nonlinear compression and recovery.This work contributes generally to the question of dimensionality reduction of large datasets defined on irregular spatial domains.To this aim, an AE approach embedded in the graph-based convolutional framework is sought.We solved this challenge by devising a multi-resolution scheme, inspired by the common multi-grid methods to solve partial differential equations [27].While the methodology applies naturally to other fields and disciplines, we demonstrate the applicability on a 3D aerodynamic problem.
We propose a graph-convolutional MM AE tailored for predicting distributed aerodynamic loads around a full-scale air vehicle.Unique contributions of this work are the applicability to non-Euclidean domains and a unique multi-resolution embedding for dimensionality reduction and modelling efficiency purposes.Furthermore, a building-block implementation, consisting of composition of separate NN units, facilitates evaluation of different model architectures to address the task.We have chosen the wing/body configuration of the NASA CRM for demonstration, predicting steady-state transonic aerodynamic loads across a range of Mach numbers and angles of attack.
The paper continues in section 2 with a description of the problem we solved and the identification of nonlinear features to be retained in the model outputs.The methodology for steady-state problems is explained in section 3.Then, section 4 summarises the main results.Finally, conclusions and an outlook on future work are given in section 5.
NASA CRM wing/body configuration
The test case is for the NASA CRM wing/body configuration, which is representative of a transonic transport aircraft designed to fly at a cruise Mach number of 0.8 [28,29]. The CRM is shown in figure 1. The reference geometric chord is c = 0.1412 m, the surface area is S = 0.0727 m², and the pitch moment is taken around x_a/c = 0.5049.
The CFD dataset was taken from Immordino et al [13], where the SU2 7.5 solver was used. The surface mesh shown in figure 1 features around 78 000 grid points in an unstructured triangular topology with adapted discretisation density around the edges. The volume mesh consists of over 15 million cells, with a prism-layer mesh to promote y+ < 1 near the wall, denoting the non-dimensional height of the first cell normal to the boundary. The y+ ensures an adequate resolution of the boundary layer. The Spalart-Allmaras turbulence model was chosen. For good convergence, the JST scheme with added dissipation was adopted to discretise the convective term, Green Gauss was chosen to compute the discretised gradients, and the biconjugate gradient method with an ILU preconditioner was applied to solve the linear system.
In this work, we are interested in the steady-state prediction of the pressure coefficient, C_P = (p − p_∞) / (½ ρ_∞ U_∞²), with p_∞, ρ_∞ and U_∞ the freestream pressure, density and velocity, respectively, and of the shear-stress coefficient components, C_τ = τ_w / (½ ρ_∞ U_∞²), with τ_w = [τ_x, τ_y, τ_z], on the surface mesh of the CFD model across an envelope of Mach numbers M_∞ and angles of attack α_∞.
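For illustration, a short Python sketch of the standard normalisation by the freestream dynamic pressure used in these definitions; the array shapes assumed here (one value or one vector per surface node) are a convention of the sketch, not of the paper.

```python
import numpy as np

def pressure_coefficient(p, p_inf, rho_inf, U_inf):
    """C_P = (p - p_inf) / (0.5 * rho_inf * U_inf**2), evaluated at every surface node."""
    return (p - p_inf) / (0.5 * rho_inf * U_inf**2)

def shear_stress_coefficient(tau_w, rho_inf, U_inf):
    """C_tau components from the wall shear-stress vector tau_w = [tau_x, tau_y, tau_z],
    with tau_w of shape (n_nodes, 3)."""
    return np.asarray(tau_w) / (0.5 * rho_inf * U_inf**2)
```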
Reference dataset
We used the available database of aerodynamic solutions to generate and validate our steady-state predictive model.The database contains 70 CFD calculations in the range of M ∞ ∈ [0.70, 0.84] and α ∞ ∈ [0.0, 5.0] deg, at Reynolds number Re = 5 • 10 6 and freestream temperature T ∞ = 311 K.The range of freestream conditions was chosen to have diverse nonlinear flow phenomena to be predicted by our model.The sampled conditions are shown in figure 2, of which 40 were used for training (represented with circles) and the remaining 30 for validation (triangles).Latin hypercube technique was used to define this limited number of samples for the preliminary envelope scan.The resulting sampling distribution did not include points on the boundaries and occasionally left large gaps between samples.Therefore, these experiments are useful to assess the performance of our framework in under-sampled spaces.First, we delve into the variation of lift and drag coefficients, C L and C D , respectively, with the freestream conditions shown in figure 3.These were obtained by integrating the pressure coefficient C P and shear-stress C τ fields of the reference CFD solutions.Inspecting the colormaps, the lift coefficient correlates predominantly with the angle of attack.The variation of the drag coefficient indicates an equal correlation to α ∞ and M ∞ .Both the angle of attack and the shock-induced boundary-layer separation contribute to the drag increase.
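As a hedged sketch of how the distributed C_P and C_τ fields can be integrated into the scalar lift and drag coefficients mentioned above, the snippet below assumes that per-node areas and outward unit normals are available and uses a body-axis convention (x downstream, z up) that is not stated in the text.

```python
import numpy as np

def integrate_loads(cp, ctau, normals, areas, alpha_deg, S_ref):
    """Integrate surface C_P and C_tau into lift and drag coefficients.

    cp      : (n,)   pressure coefficient per surface node
    ctau    : (n, 3) shear-stress coefficient per surface node
    normals : (n, 3) outward unit normals associated with each node (assumption)
    areas   : (n,)   surface area associated with each node (assumption)
    """
    # Force coefficient vector in body axes: pressure acts opposite to the outward normal.
    force = ((-cp[:, None] * normals + ctau) * areas[:, None]).sum(axis=0) / S_ref
    a = np.deg2rad(alpha_deg)
    drag_dir = np.array([np.cos(a), 0.0, np.sin(a)])   # assumed axis convention
    lift_dir = np.array([-np.sin(a), 0.0, np.cos(a)])
    return force @ lift_dir, force @ drag_dir          # (C_L, C_D)
```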
Then, we extracted few sample points from the database for further demonstration of the complexity of our problem.Figure 4 shows the pressure coefficient for selected conditions, and figure 5 is for the skin friction coefficient, defined as the shear-stress norm C f = ||C τ || 2 .These figures are arranged into four panels, placed to reflect the location of the samples labelled in figure 2, with low α ∞ on the bottom panels and high M ∞ on the right panels.The pressure distribution, figure 4, significantly differs depending on the freestream conditions.Inspecting the lower and the upper left panels, the location of the shock wave moves towards the leading edge and the shock intensity increases with angle of attack at lower Mach numbers.This transition appears to be nonlinear for α ∞ higher than 3 deg.With increasing Mach number, the pressure distribution becomes gradually smoother and the shock wave becomes stronger.As a result, at the highest M ∞ , right panels, the peak C P values are lower.On the contrary, at these M ∞ conditions, the location of the shock wave remains similar with increasing angle of attack, bottom to top of the right panels.
Similarly, the skin friction coefficient distributions, in figure 5, indicate the boundary-layer separation regions (darker blue) induced by the shock wave.At low Mach number, left panels, the separation line moves towards the leading edge with increasing angle of attack.Furthermore, in panel (a) a separation bubble is visible.On the contrary, above M ∞ = 0.8, the location of the separation line remains similar across the α ∞ range, as seen in the right panels.
This brief overview sets the background problem for our ROM.The diverse nonlinear flow characteristics cannot be captured by simple direct modelling of the scalar loads, such as C L and C D .On the other hand, the task of predicting distributed quantities at each condition, such as C P , is more challenging than scalar targets but is more useful from a design standpoint.This motivates our choice to devise a GDL based framework for prediction of the surface aerodynamic fields across the operating envelope.
Methodology
In a steady-state formulation, the aerodynamic response is considered dependent on the input conditions only, and any time dependence is neglected. An NN function f_NN is sought which maps specific user-defined inputs s to desired target fields Y_i on the surface mesh S, Y_i = f_NN(s, x_i), with i denoting a node in S. The grid point coordinates x_i are also embedded. We devise an architecture for f_NN by leveraging GDL and dimensionality reduction for unstructured-mesh manifolds. From GDL, we resort to GNNs, which involve the convolution operation on graphs [16,30].
Graph representation
In a GNN approach, the surface mesh is represented as a graph where the vertices (or nodes) contain the position coordinates x i and variable fields y i (features).The graph edges connecting the grid points are determined by the mesh connectivity.Figure 6 illustrates a representation of a graph where a target node i is connected to j ∈ S surrounding grid points.Features y i and weights e ij are assigned, respectively, to each node and edge.The edges defining the graph connectivity and chosen weights are arranged to form the adjacency matrix [16]: This is an n × n matrix containing the edge weights, and n is the total number of grid points in S. The subscript ij denotes the jth source node connected to a given node i.We assign the weights as the inverse of the distance between the two nodes forming the edge: We then normalise the edge weights to be ∈ (0, 1], where the upper end is inclusive because self loops, i.e. e ii = 1, are inserted by adding the identity matrix to the adjacency matrix:  = A + I.In addition, we choose the edge weights to be non-directional, i.e. e ij = e ji , which results in a symmetric adjacency matrix,  = ÂT .Note that  is largely sparse as each row contains only a few non-zero elements.Consequently, it is more memory-efficient to arrange the adjacency matrix in COO format.This format consists of two vectors: the edge-index and the edge-weight vectors.The edge-index vector contains the pair of node indices [i, j] for each edge, of size n e × 2 and n e the number of edges.The edge-weight vector contains the assigned weight edges e ij equation ( 5), with size n e × 1.In the CRM test case, the surface mesh consists of n = 78 829 points and n e = 472 404 edges.
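A possible construction of the COO edge-index and edge-weight arrays described above is sketched below; the exact normalisation of the inverse-distance weights onto the (0, 1] range is an assumption, and the routine names are hypothetical.

```python
import numpy as np
import torch

def build_graph(xyz, triangles):
    """Assemble COO edge-index and edge-weight tensors from a triangular surface mesh.

    xyz       : (n, 3) node coordinates
    triangles : (m, 3) node indices of each triangular face
    """
    # Unique undirected edges extracted from the face connectivity.
    e = np.vstack([triangles[:, [0, 1]], triangles[:, [1, 2]], triangles[:, [2, 0]]])
    e = np.unique(np.sort(e, axis=1), axis=0)

    # Inverse-distance weights, scaled so that the largest weight equals 1 (assumed scaling).
    w = 1.0 / np.linalg.norm(xyz[e[:, 0]] - xyz[e[:, 1]], axis=1)
    w = w / w.max()

    # Non-directional graph: store both (i, j) and (j, i), then add self loops with e_ii = 1.
    n = xyz.shape[0]
    self_loops = np.column_stack([np.arange(n), np.arange(n)])
    edge_index = np.concatenate([e, e[:, ::-1], self_loops])
    edge_weight = np.concatenate([w, w, np.ones(n)])
    return (torch.tensor(edge_index.T, dtype=torch.long),
            torch.tensor(edge_weight, dtype=torch.float32))
```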
GCN
From the family of GNN architectures, we leverage the GCN by Kipf and Welling [31]. The GCN operator at a given target node is defined as ŷ_i = θ Σ_j (ê_ij / √(D̂_ii D̂_jj)) y_j + b, with θ a layer-specific trainable weight vector, b a constant term and y the node-based input vector at each node of the mesh S, where the diagonal degree matrix D̂, with D̂_ii = Σ_j ê_ij ∀ i, contains the sum of the edge weights connected to each node i.
At each layer l, the GCN operation, equation (6), is executed on the output from the previous layer y^(l−1), followed by a nonlinear activation function h. We adopted for h the PReLU [32], h(y) = max(0, y) + β min(0, y), with β another learnable parameter. Note how the GCN operator of equation (6) takes the convolutional analogy of the CNN for Euclidean domains [33] but with a single-parameter filter swept across each row of the adjacency matrix. In fact, if we set e_ij = 0 for i ≠ j, the standard CNN layer with kernel size 1 is obtained, as Immordino et al [13] opted.
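A minimal sketch of such a block of GCN layers with PReLU activations follows, using the GCNConv layer from PyTorch Geometric as a stand-in for equation (6); GCNConv applies its own symmetric normalisation and self-loop handling, so this is an approximation of the operator described above rather than the authors' exact implementation, and the class name and layer count are assumptions.

```python
import torch
from torch_geometric.nn import GCNConv

class GCNBlock(torch.nn.Module):
    """A stack of k_l graph-convolution layers, each followed by a PReLU activation."""

    def __init__(self, in_channels, hidden_channels, k_l=3):
        super().__init__()
        dims = [in_channels] + [hidden_channels] * k_l
        self.convs = torch.nn.ModuleList(
            [GCNConv(dims[i], dims[i + 1]) for i in range(k_l)])
        self.acts = torch.nn.ModuleList([torch.nn.PReLU() for _ in range(k_l)])

    def forward(self, y, edge_index, edge_weight):
        # Successive convolutions propagate information k_l neighbourhoods around each node.
        for conv, act in zip(self.convs, self.acts):
            y = act(conv(y, edge_index, edge_weight))
        return y
```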
GCN-based AE
From equation (6) we realise that successive GCN executions are required to exert influence between far-away grid points.For instance, with k l GCN layers in succession, only k l neighbourhoods around target node i are influenced.Consequently, the propagation of information across the mesh is slow.This leads to two issues.The first is that a deep network with an excessive number of layers would be necessary to propagate the information in refined regions.The second issue relates to the memory size of the network becoming computationally unmanageable in large spatial domains and high number of features.To alleviate these issues, as introduced in section 1, we adopted an AE approach for the compression of the spatial domain.
The AE involves the projection of the states from the original domain onto a compressed space, operation known as encoder.Then, these latent states are recovered back onto the original domain in an inverse operation, known as decoder.Embedding NNs in this process makes the AEs more attractive than POD as nonlinear projections are possible [23].Figure 7 lays out the AE process embedded with GCNs.
MM scheme
AEs for discretised domains entail a multi-resolution scheme, which involves gradual coarsening operations and a subsequent refining of the grid.Reduction techniques in Cartesian arrangements are trivial, including, for example, pooling operations [11].In contrast, coarsening of unstructured meshes is a more difficult task.One of the primary challenges is that the adjacency matrix needs to be regenerated at every coarsening step.A second challenge relates to reliable transfer of information between the various grid resolutions.
To solve these challenges, we present a novel hierarchical MM scheme for the AE.In the encoder process, the mesh is coarsened between blocks of GCN layers, intended for extraction and compression of crucial flow-field features.The latent states on the coarsened mesh are decoded in a recovery operation interleaved with additional GCN blocks, to reconstruct the solution onto the original domain.Figure 8 illustrates the coarsening and recovery operations of the proposed 2-level multi-mesh cycle for the CRM model.This method is reminiscent of the common multi-grid V-cycle algorithms to solve partial differential equations [27,34].Operating on an MM cycle is advantageous for: (1) reducing the computing memory size, given the compressed spatial domain; (2) extracting features of different spatial scales by means of the different mesh resolutions; and (3) enabling direct information exchange among distant nodes, avoiding the need for a deep network to spread influence across the grid, which results in a significantly smaller model.
The coarsening operation in the encoder involves the removal of grid points from the original mesh.The strategy taken to coarsen the mesh is crucial to prevent loss of essential information.For instance, a uniform random selection should preserve the original mesh topology, in terms of relative cell sizes, on the reduced mesh.However, there is risk of insufficient resolution left in regions where the initial node density was already low.On the other hand, excessive removal of nodes on originally refined regions could lead to inappropriate reconstructions where the solution is likely to present larger gradients.
For adequate representation of the distinct spatial regions at the coarse level, a balanced node selection is essential. We achieved this by selecting the nodes according to a probability function based on the corresponding face area, with i the index of the grid points sorted by their face area a_i in ascending order and n the total node count. We chose p_1 = 0.2 and p_n = 1 for the smallest and largest elements, respectively. The resulting distribution is shown in figure 9. The probability of being selected is higher for nodes with larger face areas, typically located in unrefined regions, than for nodes found in dense discretisations. The coarsened mesh resulting from this selection probability is illustrated at the bottom of figure 8. The mesh size (node count) at the various multi-resolution levels is reported in table 1 for the CRM test case. Note how a compression ratio of 16 was adopted.
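A sketch of such a face-area-based node selection is given below. The linear ramp of the selection probability between p_1 and p_n is an assumption; the text only fixes the two endpoint values and states that the probability grows with face area, so the exact shape of the distribution in figure 9 may differ.

```python
import numpy as np

def coarsen_node_selection(face_areas, compression_ratio=16, p1=0.2, pn=1.0, seed=0):
    """Select the node indices to keep in the coarse mesh.

    Nodes with larger face areas (typically in unrefined regions) are given a
    higher selection probability. A linear ramp between p1 and pn over the
    area-sorted nodes is assumed here for illustration.
    """
    rng = np.random.default_rng(seed)
    n = len(face_areas)
    n_keep = n // compression_ratio

    order = np.argsort(face_areas)      # node indices sorted by ascending face area
    ramp = np.linspace(p1, pn, n)       # assumed probability shape
    prob = np.empty(n)
    prob[order] = ramp                  # assign probability by area rank
    prob /= prob.sum()                  # normalise for sampling

    kept = rng.choice(n, size=n_keep, replace=False, p=prob)
    return np.sort(kept)

# Example: an ~80k-node surface mesh compressed by a factor of 16.
areas = np.random.rand(80000)
coarse_nodes = coarsen_node_selection(areas)
```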
Upon selection of the grid points to be kept in the coarse mesh, the graph connectivity was regenerated by reconnecting remaining nodes that shared connections with discarded ones. Figure 10 demonstrates the process of restoring graph connectivity following a coarsening operation, with new edges shown in orange.
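One possible realisation of this reconnection step is sketched below: two kept nodes become connected in the coarse graph if they were already connected or if they both shared an edge with the same discarded node. The function name and the edge-list representation are illustrative assumptions.

```python
def regenerate_connectivity(edges, kept_nodes):
    """Rebuild graph connectivity after removing nodes.

    `edges` is an iterable of (i, j) node pairs of the fine mesh and
    `kept_nodes` the indices retained by the coarsening step.
    """
    kept = set(kept_nodes)
    neighbours = {}
    for i, j in edges:
        neighbours.setdefault(i, set()).add(j)
        neighbours.setdefault(j, set()).add(i)

    coarse_edges = set()
    for i, j in edges:
        if i in kept and j in kept:
            coarse_edges.add((min(i, j), max(i, j)))   # edge survives directly

    for node, nbrs in neighbours.items():
        if node in kept:
            continue
        kept_nbrs = sorted(m for m in nbrs if m in kept)
        # Reconnect, pairwise, all kept neighbours of a discarded node.
        for a_idx, a in enumerate(kept_nbrs):
            for b in kept_nbrs[a_idx + 1:]:
                coarse_edges.add((a, b))
    return sorted(coarse_edges)

# Example: a path 0-1-2-3 with node 1 removed; nodes 0 and 2 get reconnected.
print(regenerate_connectivity([(0, 1), (1, 2), (2, 3)], kept_nodes=[0, 2, 3]))
# [(0, 2), (2, 3)]
```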
Weighted moving least squares for grid interpolation
As shown in figure 8, information must be transferred across multiple grids. In the reduction step of the encoder, the original field data must be interpolated onto the compressed mesh. In the decoder, the recovery of the latent states onto the fine grid entails an inverse interpolation. These are critical operations in the MM cycle. The interpolation consists of a functional I_{S_n→S_r}: R^{n_n} → R^{n_r} that maps a spatial field y_i from a source grid S_n, which contains n_n nodes, to a target grid S_r, with n_r nodes, where both grids discretise the same spatial domain, i.e. y_j = I_{S_n→S_r} y_i, with I_{S_n→S_r} the interpolation matrix from the source to the target mesh and y_j the interpolated field data.
The following properties are desirable for an adequate interpolation [35]: (1) interpolated values at the source nodes should match the original data; (2) integrated resultants should be conserved; and (3) interpolated fields should be continuous. Consequently, directly recasting the data across coincident points and nearest-neighbour interpolation, as proposed by Han et al [20], are inappropriate because the conservation and continuity properties are not satisfied.
There is a multitude of multi-grid algorithms, often devised to accelerate the solution of finite-volume discretisations of partial differential equations, as proposed for example by Smith [34]. However, we chose a different approach which is particularly suited for fluid-structure interaction problems, where satisfying the above properties is crucial for an adequate transfer of loads and deflections across models. In particular, we implemented a weighted moving least squares (WMLS) scheme [35, 36]. The idea is to generate a shape function u(x) that approximates the input data y_i at the source nodes i ∈ S_n with coordinates x_i by least-square-error minimisation, where the weight w(||x − x_i||) is a function of the distance between the source and the target points. We specify u(x) as a polynomial combination with p(x) = [1, x, y, z, x^2, y^2, z^2, xy, xz, yz]^T the second-order polynomial basis function and a the vector of respective coefficients. The analytical solution of the least-squares minimisation yields the resulting approximation at every node j ∈ S_r of the target grid, from which the shape functions and, therefore, the interpolation matrix I_{S_n→S_r} follow. The least-squares approximation, equation (14), must be computed for each target grid point, resulting in computationally intractable matrix operations when dealing with large meshes. To reduce the computing burden, we adopt a moving interpolation that limits each target node to be influenced only by the k_n closest source points. In this work, we found k_n = 10 a good compromise between interpolation accuracy and computational efficiency. Note that I_{S_n→S_r} is a largely sparse, non-square matrix of size n_r × n_n, each row containing just k_n non-zero values. This matrix is not invertible, and two different interpolation matrices must be generated for the encoder and decoder operations, I_{S_n→S_r} and I_{S_r→S_n}, respectively. Figure 8 illustrates the dual interpolation process. We observe that the resulting pressure reconstruction (right) after execution of the MM cycle matches the original field (left) well.
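A minimal sketch of such a moving least-squares interpolation is given below. The inverse-distance weight function, the small regularisation term and the dense row-by-row assembly are assumptions made for brevity; the text only specifies the second-order polynomial basis and the restriction to the k_n = 10 nearest source points.

```python
import numpy as np
from scipy.spatial import cKDTree

def poly_basis(x):
    """Second-order basis p(x) = [1, x, y, z, x^2, y^2, z^2, xy, xz, yz]."""
    x1, y1, z1 = x
    return np.array([1.0, x1, y1, z1, x1**2, y1**2, z1**2, x1*y1, x1*z1, y1*z1])

def wmls_interpolation_matrix(src_xyz, tgt_xyz, k=10, eps=1e-8):
    """Build the interpolation matrix row by row (dense here for brevity).

    For each target node the k nearest source nodes are fitted with a weighted
    second-order polynomial; the resulting shape-function values form one row
    of the matrix, so each row has only k non-zero entries.
    """
    tree = cKDTree(src_xyz)
    I = np.zeros((len(tgt_xyz), len(src_xyz)))
    for j, xt in enumerate(tgt_xyz):
        dist, idx = tree.query(xt, k=k)
        w = 1.0 / (dist + eps)                               # assumed weight function
        P = np.array([poly_basis(src_xyz[i]) for i in idx])  # (k, 10)
        W = np.diag(w)
        A = P.T @ W @ P + eps * np.eye(P.shape[1])           # regularised normal matrix
        # Shape functions: phi = p(x_t)^T A^{-1} P^T W, one value per source node.
        phi = poly_basis(xt) @ np.linalg.solve(A, P.T @ W)
        I[j, idx] = phi
    return I

# Example: map a field from 200 scattered source points to 50 target points.
rng = np.random.default_rng(1)
src, tgt = rng.random((200, 3)), rng.random((50, 3))
I_s2t = wmls_interpolation_matrix(src, tgt)
field_tgt = I_s2t @ np.sin(src[:, 0])
```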
Steady-state prediction framework
We now complete the construction of the predictive model architecture, here referred to as the steady-state GCN-MM-AE. For the CRM use case, the target fields are the pressure coefficient and the shear-stress components, Y = [C_P, C_τx, C_τy, C_τz], across the input envelope of Mach numbers and angles of attack. The architecture of the final steady-state GCN-MM-AE model is shown in figure 11. The scalar inputs s are cast to each node of the graph and concatenated to the grid-point coordinates x_i ∀ i ∈ S_n. The input vectors are processed by the encoder, involving the coarsening step of the MM cycle and two GCN blocks. Subsequently, the decoder comprises the recovery operation of the MM embedded in two additional GCN blocks. The network ramifies at the end into separate blocks for each field quantity to predict. This architecture defines f_NN in equation (3).
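The following is a schematic forward pass of such a model, reusing the SimpleGCNLayer sketched earlier. The number of GCN blocks per level, the hidden width and the use of a single linear head per output field are placeholders; only the overall flow (broadcast scalars, concatenate with coordinates, encode, coarsen, decode, recover, branch) follows the description above.

```python
import torch
import torch.nn as nn

class SteadyStateGCNMMAE(nn.Module):
    """Schematic steady-state GCN-MM-AE forward pass (sizes are placeholders)."""

    def __init__(self, i_fine_to_coarse, i_coarse_to_fine, adj_fine, adj_coarse,
                 hidden=32, n_fields=4):
        super().__init__()
        self.i_f2c, self.i_c2f = i_fine_to_coarse, i_coarse_to_fine
        self.adj_fine, self.adj_coarse = adj_fine, adj_coarse
        self.enc_fine = SimpleGCNLayer(3 + 2, hidden)      # xyz + (Mach, alpha)
        self.enc_coarse = SimpleGCNLayer(hidden, hidden)
        self.dec_coarse = SimpleGCNLayer(hidden, hidden)
        self.dec_fine = SimpleGCNLayer(hidden, hidden)
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_fields))

    def forward(self, coords, scalars):
        # Broadcast the scalar inputs to every grid point and append them to x_i.
        s = scalars.view(1, -1).expand(coords.shape[0], -1)
        y = torch.cat([coords, s], dim=1)
        y = self.enc_fine(y, self.adj_fine)    # GCN block on the fine mesh
        y = self.i_f2c @ y                     # coarsening step of the encoder
        y = self.enc_coarse(y, self.adj_coarse)
        y = self.dec_coarse(y, self.adj_coarse)
        y = self.i_c2f @ y                     # recovery step of the decoder
        y = self.dec_fine(y, self.adj_fine)
        # One branch per target field: Cp and the three shear-stress components.
        return torch.cat([head(y) for head in self.heads], dim=1)
```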
To the best of our knowledge, this framework is novel on several fronts: (1) a GDL-based AE framework for spatial predictions on large and unstructured discretisations, applied to an aerospace problem; (2) a multi-resolution scheme embedded in the nonlinear AE for dimensionality reduction of unstructured manifolds, aimed at capturing different spatial scales, promoting influence across the grid and maximising ROM computational efficiency; and (3) building-block functionality to address distinct tasks within the same framework: multi-resolution reconstructions, steady-state predictions and extension to dynamic simulations. This framework was implemented using PyTorch 1.11, an optimised deep-learning library in Python, and PyTorch Geometric, an open-source GNN package built upon PyTorch.
Results
This section is organised in two parts. The first part focuses on the prediction of scalar quantities, in our case the integrated force and moment coefficients. The second part is related to the model prediction of the distributed fields, namely the pressure and shear-stress coefficient distributions. The appendix contains more background information, including the steady-state GCN-MM-AE architecture from figure 11 and the model optimisation procedure. A comparison between our WMLS scheme and the multi-grid method by Smith [34] for the two-way interpolation of the MM cycle was also investigated. To complete the framework set-up, sensitivity assessments to key hyperparameters, such as the training set size, the model weights and the MM cycle set-up, are also reported. It is worth noting that at each sample point the model outputs the C_P and C_τ distributions, from which the resulting force and moment coefficients were obtained by integration.
Integrated loads
Figure 12 presents the analysis for the integrated lift coefficient C_L, drag coefficient C_D and pitching moment coefficient C_My. The left panels show the prediction error on the dataset samples, with training samples as circles and validation samples as triangles. The error is defined as the relative difference between the predicted and the reference loads, with y = [L, D, M_y]. The errors are classified by a traffic-light colour scheme: green corresponds to prediction errors below 4%, amber between 4% and 10%, and red above 10%. For convenience, the percent error is reported above each sample point. The best and worst predictions are highlighted as A and B, respectively. The prediction error is generally small throughout for the C_L predictions, where the error is below 2.2%. The worst prediction (B) was found at M_∞ = 0.76, α_∞ = 4.42 deg, with a pitching moment error of 24.3%. This point stands out for not having training samples within a wide surrounding region, and the model found it more difficult to learn this part of the envelope. An adaptive sampling method that places training samples in under-sampled regions would be convenient for improved model accuracy; however, this is beyond the scope of this work. We also found that the C_My prediction accuracy degrades slightly towards high angles of attack, where the nonlinear aerodynamic response occurs. Nevertheless, the accuracy of the model is overall high. Table 2 provides a statistical summary of the prediction errors. The average error, the standard deviation and the worst prediction for each aerodynamic coefficient across the complete dataset and the validation dataset are reported. The statistics are similar between datasets, which suggests that there is no over-fitting and that the model generalises well to new conditions. In addition, the low standard deviations indicate good model precision. The average errors were also found to be low, with C_L the best predicted quantity. The errors for C_D and C_My tend to be skewed by the reference values being an order of magnitude smaller than the lift values. Last, the worst statistics are for C_My, a consequence of the larger errors at high angles of attack. Small errors in the predicted shock-wave location relative to the reference axis can also contribute to a magnified error.
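For reference, a small helper reproducing the relative-error definition and the traffic-light classification described above is sketched below; the thresholds follow the text, while the function names are illustrative.

```python
def prediction_error(y_pred, y_ref):
    """Relative prediction error in percent, as used for the integrated loads."""
    return 100.0 * abs(y_pred - y_ref) / abs(y_ref)

def traffic_light(err_percent):
    """Classify a percent error with the colour scheme of figure 12."""
    if err_percent < 4.0:
        return "green"
    if err_percent <= 10.0:
        return "amber"
    return "red"

# Example: a 24.3% pitching-moment error falls in the red band.
print(traffic_light(prediction_error(1.243, 1.0)))  # 'red'
```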
The adoption of the traffic-light system for the error plots in figure 12 is convenient to judge the adequacy of the model for design purposes. The low error obtained for the lift is essential because this is regarded as the most important design parameter. In contrast, larger errors for the drag and pitching moments are acceptable due to their smaller magnitudes. In general, errors lower than 4%-5% (green) are within typical simulation tolerance and, therefore, acceptable. For errors between 5% and 10%-15% (amber), engineering judgement should take the discrepancies into consideration. Larger errors (red) could drive the aerodynamic design in the wrong direction; action should be taken to improve the model, by iteratively including new training experiments and regenerating the model until an acceptable error is achieved. In practice, however, the error tolerance is determined by the application. For example, multiphysics simulations (e.g. aeroelastic analyses) require stricter modelling tolerances from each separate model than single-physics simulations (e.g. aerodynamic responses alone). Nevertheless, there is no risk of unpredicted structural failures as safety factors of over 200% are enforced.
Predicted distributions
Figures 13 and 14 analyse the predicted pressure coefficient field for the two labelled conditions in figure 12.
The contour plots illustrate the reference C P solution from CFD (left), the prediction by our model (middle) and the error (right).Panel (d) compares the C P distributions at the cross sections specified in the right contour.We observe that in sample e the prediction is in good agreement over the whole surface, figure 13.For sample f, the model is overall correct except for a small discrepancy on the shock-wave location, predicted slightly further downstream from 30% of the span, figure 14.Figures 15 and 16 provide a similar comparison for the skin friction coefficient C f .The shear-stress field is also found correct in the first condition.In the second case, we observe that the boundary-layer separation is slightly delayed.This is in line with the predicted location of the shock wave shown in the previous figure.Remarkably, this result indicates that our proposed framework appears to understand the relationship between the physical quantities.This demonstrates the reason for developing a single model for multiple target fields.
Full envelope prediction
The implemented ROM is convenient to efficiently interrogate the complete operating envelope. The right panels in figure 12 present the resulting aerodynamic maps. The 3D contour plots were built by integrating the surface field predictions of 30 × 30 uniformly distributed samples. The C_L correlates with α_∞ as expected, with the nonlinear C_L slope at high angles of attack also well captured by the model. In addition, a shallow valley along the diagonal is visible; this seems to be caused by the shock wave intensifying while the peak pressure gradually decreases. The isocontours on the C_D envelope reveal a nonlinear behaviour along the α_∞ axis, which resembles the expected quadratic dependence, especially at low M_∞. The drag increase at higher Mach numbers is related to the boundary-layer separation induced by the shock wave. Last, the C_My envelope reveals highly nonlinear phenomena. The pitching moment decreases (in magnitude) with α_∞, caused by the shock wave intensifying and moving upstream. By contrast, C_My is largest at low α_∞ and M_∞ ∼ 0.81, a consequence of the downstream location of the shock wave. The small spike observed at high α_∞ is likely a consequence of the lack of sampling in that region.
Note on computing costs
A summary of the computing costs involved in the deployment of the framework and the significant saving compared to high-fidelity simulations is reported in table 3. The steady-state CFD conditions by Immordino et al [13] were solved on a 120-core HPC, totalling up to 16 000 CPU-h for the 40 training CFD samples.
The ROM was generated with a 6 GB GPU and the training process required around 2.5 GPU-h. New aerodynamic predictions are completed in less than a second, rather than almost 3 h in CFD, i.e. a reduction in turnaround time of well over 99.9%. As a result, the 30 × 30 samples used to construct the 3D envelopes in the right panels of figure 12 were completed within minutes, whereas this would be impractical using CFD alone. Furthermore, to address a typical aerodynamic characterisation campaign [37], comprising ten points along the angle-of-attack axis and a Mach-number resolution of 0.02, the overall computing gain, including the generation of the training dataset in CFD, would still be up to 50%.
Conclusions
The flow analysis around a 3D aircraft remains an expensive task despite access to larger and more powerful computing services than ever before. This limitation takes on an even higher criticality when the designer is tasked with delivering the performance of the system across a range of relevant flow conditions. In engineering, the loads experienced by a re-entry vehicle passing through the Earth's atmosphere, the stability and control characteristics of a transport aircraft across the flight envelope or the aerodynamic map of a racing car are examples of common tasks. To overcome the computational burden associated with running a multitude of CFD analyses, whose number is at the designer's discretion to obtain data in time for pressing deadlines, the use of ROMs is a viable alternative. However, these models still present a number of challenging decisions, the most critical being the choice of the mathematical structure of the ROM.
We developed a geometric deep-learning AE framework to achieve a cost-effective predictive model of output quantities of interest defined on a large spatial domain with an unstructured, irregular discretisation. The framework dealt with over 300 thousand outputs and about 80 thousand grid points distributed on a 3D discretisation of a wing/body aircraft configuration. We faced specific challenges that required the development of a novel approach, which can be transferred to other disciplines with immediate applicability. The first challenge is represented by the large set of points used for the spatial discretisation. We created a dedicated multi-resolution AE for extracting multi-scale features from the data field, for transferring influence across the domain and for memory efficiency. The second challenge is related to the unstructured and irregular point discretisation. We made use of GCNs, which enable the convolutional operation on irregular domains using the mesh connectivity and are therefore very attractive for emulating high-fidelity computational engineering analyses. The resulting predictive framework offers a novel approach when data are defined on large 3D unstructured manifolds, which is the case for any realistic problem. Nonetheless, the framework retains the ability to handle the simpler case of structured domains when available.
It is worth noting that the steady-state framework builds on one single model that outputs multiple vector fields distributed on an unstructured domain. For our application to aircraft aerodynamic loads, the predicted pressure and shear-stress coefficient distributions were integrated in space to calculate the total force and moment coefficients. This operation not only mimics the way integrated loads are obtained in a CFD solver, but brings advantages too. First, the generation, training and validation of one single model is noticeably cheaper and easier than doing it for two separate and distinct models. Second, it avoids having two independent models that may each best fit their specific outputs but do not recover the actual relationship among the distinct outputs, i.e. the integration in space for our aerodynamic problem. Finally, a physically sound distribution of flow quantities obtained from one single model leads to a sound interpretation of the aerodynamic loads.
The NASA CRM wing/body aircraft configuration was used for demonstration. We used an existing database of 70 pre-computed cases to generate and validate the predictive model. In the reference study that provided the database, sample points were placed across the flight envelope using a latin hypercube method, which does not exploit any features learned during the design-space exploration. The model predictions achieved a good match to the reference data across the flight envelope. It is expected that further improvements in the model predictions can be obtained using an adaptive design of experiments where sample points are placed at strategic locations of the design space. Once the model is generated, load predictions across the whole flight envelope of angle of attack and Mach number are possible within minutes. A thorough study demonstrated that the model captured the various nonlinear effects throughout the envelope, including variations of shock-wave strength and position with the angle of attack and Mach number, and the appearance of shock-induced boundary-layer separation at certain flow conditions.
Today, there is an abundance of data from calculations and measurements, and our choice of using an existing database reflects this situation. Cost-wise, model predictions are obtained at a minimal cost, 0.1%, compared to running the computational fluid dynamics solver. In design applications, the issue of database generation is still relevant. To minimise the preliminary data requirements, a combined data-driven and physics-based implementation could be conceived by embedding physics-informed loss terms during training. However, the application of physics terms on the surface of a complex geometry is not trivial and requires proper consideration, as the fluid-dynamics equations are formulated for the fluid volume. We believe the generalisation to variable shapes is attractive to further expand the applicability of our model. Since our model is designed to include the coordinates of the mesh nodes as inputs, the current implementation may be adapted to wing deflections for static, 3D aeroelastic analysis and for aerodynamic shape design and optimisation.
A.3. Sensitivity to model architecture hyperparameters
The influence of the various key hyperparameters of the proposed model architecture is analysed here. In particular, we are interested in assessing the sensitivity of the model performance to the training set size, the number of model weights and the multi-mesh cycle reduction.
A.3.1. Sensitivity to MM compression rate
Table A4 reports the prediction error summary for the various mesh compression rates adopted in the MM cycle. Different models were generated for mesh coarsenings to 10 000, 5000 and 2500 grid points, respectively. A compression ratio larger than 30 was not pursued. The final mesh coarsening choice adopted in the Results section is highlighted in bold. We observe that the prediction error statistics are similar among the various MM compressions. Although the prediction error is expected to worsen with coarser meshes, the results suggest that the various GCN blocks are able to compensate for this. The model performance remains acceptable even for significantly large mesh reductions.
A.3.2. Sensitivity to MM cycle levels
The sensitivity to the number of MM reduction levels is assessed in table A5. Results are reported for MM cycles of 2 (final choice highlighted in bold), 3 and 4 levels. The three-level MM cycle was constructed by first reducing to 20 000 nodes and subsequently reducing to 5000, assigning the same final node count as for the two-level case. The four-level scheme was achieved by reducing to 40 000, 20 000 and 5000. GCN blocks were embedded between mesh levels, for a total of 174 460, 352 156 and 436 252 model weights, respectively. The statistics are similar among the various MM cycles. By contrast, the computational cost involved in the three-level and the four-level schemes was found to be, respectively, 32% and 81% larger than for the two-level implementation. This motivated our choice of the two-level reduction as a more efficient implementation in terms of model memory requirements. A quantification of the prediction confidence for each multi-mesh cycle is illustrated in figure A3, which shows inter-quartile range plots for the three different MM levels and the various performance metrics (C_L, C_D and C_My errors). The uncertainty intervals are based on predictions of the validation dataset, and the error for each validation sample is also shown as triangles. We observe that the three-level option showed marginally the lowest variability. Nevertheless, the uncertainty is similar across the MM levels, which provides confidence in the final choice of the two-level MM cycle.
A.3.3. Sensitivity to training dataset size
We now analyse how the prediction performance is affected by a smaller training dataset. In particular, the number of training samples was halved, i.e. 20 samples were used to generate the model as opposed to the original 40. Minimising the data requirements is attractive for reducing the computational cost involved with running the CFD simulations. The prediction error summary for the various load coefficients is reported in table A6, with the original dataset highlighted in bold. A degradation of the performance is observed with the reduced training set. However, the error values remain reasonably good considering the small number of preliminary samples used to generate the model.
Figure 1. Surface mesh representation of the common research model.
Figure 2. Dataset samples across the operating envelope of the CRM model. Circles correspond to training samples and triangles, validation cases. Lettered labels indicate selected samples to showcase the different physics across the envelope, figures 4 and 5.
Figure 3. Integrated aerodynamic force coefficients of the NASA CRM wing/body configuration for the reference dataset, classified by training (circles) and validation (triangles) samples.
Figure 4. Pressure coefficient distribution, C_P, at the four sample points of figure 2.
Figure 5. Skin friction coefficient distribution, C_f, at the four sample points of figure 2.
Figure 6. CRM mesh represented as a graph with node features and edge weights.
Figure 7. Autoencoder concept, involving the embedding of GCN blocks in the reduction (encoder) and reconstruction (decoder) of the field data.
Figure 8. The two-level multi-mesh scheme for the CRM test case, showcasing the resulting pressure reconstruction from one mesh coarsening-refining cycle.
Figure 9. Probability density function to select the grid points in the mesh coarsening operation. Grid points ordered by face area in ascending order.
Figure 10. Graph coarsening procedure and connectivity regeneration as part of the dimensionality reduction algorithm.
Figure 11. Steady-state GCN-MM-AE model architecture for aerodynamic predictions of the CRM test case.
Figure 12. Aerodynamic coefficient predictions with the steady-state GCN-MM-AE model for the CRM test case. Error plots on the left are classified by training (circles) and validation (triangles) samples; percent error below 4% in green, between 4% and 10% in amber, and larger than 10% in red.
Table 1. Surface mesh sizes (number of nodes) of the two multi-mesh levels for the CRM case.
Table 2. Statistical summary of the prediction error for the CRM test case and various datasets.
Table 3. Computing costs for steady-state modelling of the CRM test case.
Table A4. Prediction error comparison for various MM compression ratios on the CRM validation dataset.
Table A5. Prediction error comparison for different MM cycle levels on the CRM validation dataset.
Biomass burning emissions and potential air quality impacts of volatile organic compounds and other trace gases from fuels common in the US
A comprehensive suite of instruments was used to quantify the emissions of over 200 organic gases, including methane and volatile organic compounds (VOCs), and 9 inorganic gases from 56 laboratory burns of 18 different biomass fuel types common in the southeastern, southwestern, or northern US. A gas chromatography-mass spectrometry (GC-MS) instrument provided extensive chemical detail of discrete air samples collected during a laboratory burn and was complemented by real-time measurements of organic and inorganic species via an open-path Fourier transform infrared spectroscopy (OP-FTIR) instrument and three different chemical ionization-mass spectrometers. These measurements were conducted in February 2009 at the US Department of Agriculture's Fire Sciences Laboratory in Missoula, Montana and were used as the basis for a number of emission factors reported by Yokelson et al. (2013). The relative magnitude and composition of the gases emitted varied by individual fuel type and, more broadly, by the three geographic fuel regions being simulated. Discrete emission ratios relative to carbon monoxide (CO) were used to characterize the composition of gases emitted by mass; reactivity with the hydroxyl radical, OH; and potential secondary organic aerosol (SOA) precursors for the 3 different US fuel regions presented here. VOCs contributed less than 0.78 % ± 0.12 % of emissions by mole and less than 0.95 % ± 0.07 % of emissions by mass (on average) due to the predominance of CO2, CO, CH4, and NOx emissions; however, VOCs contributed 70-90 (±16) % of OH reactivity and were the only measured gas-phase source of SOA precursors from combustion of biomass. Over 82 % of the VOC emissions by mole were unsaturated compounds, including highly reactive alkenes and aromatics and photolabile oxygenated VOCs (OVOCs) such as formaldehyde. OVOCs contributed 57-68 % of the VOC mass emitted and 41-54 % of VOC-OH reactivity, and aromatic OVOCs such as benzenediols, phenols, and benzaldehyde were the dominant potential SOA precursors. In addition, ambient air measurements of emissions from the Fourmile Canyon Fire that affected Boulder, Colorado in September 2010 allowed us to investigate biomass burning (BB) emissions in the presence of other VOC sources (i.e., urban and biogenic emissions) and identify several promising BB markers including benzofuran, 2-furaldehyde, 2-methylfuran, furan, and benzonitrile.
Introduction
Biomass burning (BB) emissions are composed of a complex mixture of gases and particles that may directly and/or indirectly affect both climate and air quality (Jaffe and Wigder, 2012; Sommers et al., 2014). Emissions include greenhouse gases such as carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O); carcinogens such as formaldehyde and benzene; and other components potentially harmful to human health including particulate matter, carbon monoxide (CO) and isocyanic acid (HNCO) (Crutzen and Andreae, 1990; Hegg et al., 1990; Andreae and Merlet, 2001; Demirbas and Demirbas, 2009; Estrellan and Iino, 2010; Roberts et al., 2010, 2011; Sommers et al., 2014). The co-emission of nitrogen oxides (NOx = NO + NO2) and reactive volatile organic compounds (VOCs, also known as non-methane organic compounds) from combustion of biomass may degrade local and regional air quality through the photochemical formation of tropospheric ozone (O3), a hazardous air pollutant, and secondary organic aerosol (SOA) (Alvarado et al., 2015). This work characterizes primary biomass burning emissions of organic and inorganic gases from fuels common to the US and compares the relative impacts on regional air quality as they relate to potential O3 and SOA formation.
Tropospheric O3 may be formed in the atmosphere from the interactions of VOCs, NOx, and a radical source such as the hydroxyl radical (OH), which is formed from the photolysis of O3, aldehydes, hydroperoxides, or nitrous acid (HONO). Biomass burning is a large, primary source of VOCs, NOx, and HONO (i.e., O3 precursors); however, these species are emitted at varying relative ratios depending on the fuel type and burn conditions, making it difficult to predict O3 formation from the combustion of biomass (Akagi et al., 2011; Jaffe and Wigder, 2012). An additional O3 formation pathway occurs via oxidation of VOCs, often initiated by reaction with the hydroxyl radical (OH) in the presence of NO2, leading to the formation of peroxynitrates such as peroxyacetic nitric anhydride (PAN). The formation of peroxynitrates may initially diminish O3 formation in fresh BB plumes due to the initial sequestration of NO2, but enhance O3 formation downwind via production of NO2 from thermal dissociation of peroxynitrates (Jaffe and Wigder, 2012). Due to the complex relationship between O3 production, VOC / NOx ratios and peroxynitrates, we use OH reactivity as a simplified metric to compare the reactivity of all measured gaseous emissions by fuel region in order to identify the key reactive species that may contribute to photochemical O3 formation.
SOA is organic particulate mass that is formed in the atmosphere from the chemical evolution of primary emissions of organic species. Here, chemical evolution refers to a complex series of reactions of a large number of organic species that results in the formation of relatively low-volatility and/or high-solubility oxidation products that will readily partition to, or remain in, the particle phase (Kroll and Seinfeld, 2008). SOA formation from BB emissions is highly variable (Hennigan et al., 2011), and chemical modeling results suggest that there is a "missing large source of SOA" precursors that cannot be explained by the sum of measured aerosol yields of SOA precursors such as toluene (Alvarado et al., 2015). Aerosol yield is a measure of the mass of condensable compounds created from oxidation per mass of VOC precursor and is often used to predict potential SOA mass of complex mixtures; however, care must be taken to ensure that the aerosol yields for all precursors were determined under similar conditions (e.g., VOC : NOx ratios, oxidant concentrations, etc.). In order to conduct comparisons of the potential to form SOA on a consistent scale, we use a model-based unitless metric, termed SOA potential (SOAP), published by Derwent et al. (2010), which "reflects the propensity of VOCs to form SOA on an equal mass basis relative to toluene." Advances in instrumentation and complementary measurement approaches have enabled chemical analyses of a wide range of species emitted during laboratory-based biomass burning experiments (Yokelson et al., 1996, 2013; McDonald et al., 2000; Schauer et al., 2001; Christian et al., 2003; Veres et al., 2010; Hatch et al., 2015; Stockwell et al., 2015). This information supplements several decades of field measurements of BB emissions reported in the literature (Andreae and Merlet, 2001; Friedli et al., 2001; Akagi et al., 2011; Simpson et al., 2011). Chemically detailed, representative measurements of VOCs and other trace gases from biomass combustion are critical inputs to photochemical transport models aimed at reproducing observed downwind changes in the concentrations of reactive species including VOCs, O3, peroxynitrates, and organic aerosol (Trentmann et al., 2003, 2005; Mason et al., 2006; Alvarado and Prinn, 2009; Heilman et al., 2014; Urbanski, 2014; Alvarado et al., 2015) and are essential to understanding impacts on chemistry, clouds, climate, and air quality.
For this study, a comprehensive suite of gas-phase measurement techniques was used to quantify the emissions of 200 organic gases, including methane and VOCs, and 9 inorganic gases from laboratory biomass burns of 18 fuel types from 3 geographic regions in the US (hereafter referred to as "fuel regions") in order to compare the potential atmospheric impact of these gaseous emissions. A list of all gas-phase instruments and the manuscripts detailing the results of the coincident measurement techniques is included in Table 1. These companion manuscripts include fire-integrated emission ratios (ERs) for species such as inorganic gases including HONO (Burling et al., 2010) and HNCO (Roberts et al., 2010), organic acids (Veres et al., 2010), formaldehyde and methane (Burling et al., 2010), and a large number of identified and unidentified protonated molecules (Warneke et al., 2011). Yokelson et al. (2013) synthesized the results of all the measurement techniques, including the GC-MS data presented here, in an effort to compile an improved set of fuel-based emission factors for prescribed fires by coupling lab and field work. Comparisons between laboratory and field measurements of BB emission factors are presented elsewhere (Burling et al., 2010, 2011; Yokelson et al., 2013).
Here we detail the results of the 56 biomass burns sampled by a gas chromatography-mass spectrometry (GC-MS) instrument, which provided unparalleled chemical speciation but was limited to sampling a relatively short, discrete segment of a laboratory burn. We begin by comparing mixing ratios measured by the GC-MS instrument to those concurrently measured by infrared spectroscopy and proton-transfer-reaction mass spectrometry, both of which provide high time resolution sampling of laboratory fires. We then compare discrete ERs and fire-integrated ERs, representing the entirety of emissions from a laboratory burn, in order to quantify any potential bias that resulted from the discrete versus "continuous" sampling techniques utilized in this study. In order to merge data sets from multiple instruments, we report mean discrete ERs of over 200 identified gases relative to CO for the southwestern, southeastern, and northern fuel regions to compare the chemical composition of the mass emitted and the reactivities of the measured gases with the hydroxyl radical, in order to identify the key reactive species that will likely contribute to O3 formation, and we utilize a model-derived metric developed by Derwent et al. (2010) to compare relative SOA formation potentials from each fuel region. Detailed chemical models are required to more accurately account for the various O3 and SOA formation pathways, which is beyond the scope of this study.
In addition to the laboratory fire measurements, we present field measurements of rarely reported VOCs in ambient air during the Fourmile Canyon Fire that affected Boulder, Colorado in September 2010. The latter measurements revealed BB markers that were specific to the BB emissions, minimally influenced by urban or biogenic VOC emission sources, and emitted in detectable quantities with long enough lifetimes to be useful even in aged, transported BB plumes.
Fuel and biomass burn descriptions
The laboratory-based measurements of BB emissions were conducted in February 2009 at the US Department of Agriculture's Fire Sciences Laboratory in Missoula, Montana. A detailed list of the biomass fuel types, species names, fuel source origins, and the carbon and nitrogen content of the fuels studied here is included in Burling et al. (2010). Up to 5 replicate burns were conducted for each of the 18 different fuels studied. These fuels are categorized into three geographic fuel regions based on where the fuels were collected. The data presented here include nine southwestern fuels from southern California and Arizona including chaparral shrub, mesquite, and oak savanna/woodland; six southeastern fuels representing the pine savanna/shrub complexes indigenous to coastal North Carolina and pine litter from Georgia; and three northern fuels including an Engelmann spruce, a grand fir, and ponderosa pine needles from Montana. All fuels were harvested in January 2009 and sent to the Fire Sciences Laboratory where they were stored in a walk-in cooler prior to these experiments.
All biomass burns were conducted inside the large burn chamber (12.5 × 12.5 × 20 m height), which contains a fuel bed under an emissions-entraining hood, an exhaust stack, and an elevated sampling platform surrounding the exhaust stack approximately 17 m above the fuel bed (Christian et al., 2003, 2004; Burling et al., 2010). Each fuel sample was arranged on the fuel bed in a manner that mimicked its natural orientation and fuel loading when possible and was ignited using a small propane torch (Burling et al., 2010). During each fire, the burn chamber was slightly pressurized with outside air conditioned to a similar temperature and relative humidity as the ambient air inside the burn chamber. The subsequent emissions were entrained by the pre-conditioned ambient air and continuously vented through the top of the exhaust stack. The residence time of emissions in the exhaust stack ranged from ∼ 5 to 17 s depending on the flow and/or vent rate. Each burn lasted approximately 20-40 min from ignition to natural extinction.
Instrumentation and sampling
A list of the gas-phase instruments and measurement techniques used in this study, a brief description of the inherent detection qualifications of each instrument, and references appear in Table 1. The gas chromatography-mass spectrometry (GC-MS) instrument and the proton-transfer-reaction mass spectrometry (PTR-MS) instrument were located in a laboratory adjacent to the burn chamber. The proton-transfer-reaction ion-trap mass spectrometry (PIT-MS) instrument, negative-ion proton-transfer chemical-ionization mass spectrometry (NI-PT-CIMS) instrument, and open-path Fourier transform infrared (OP-FTIR) optical spectroscopy instrument were located on the elevated platform inside the burn chamber. Hereafter, each instrument will be referred to by the associated instrument identifier listed in Table 1.
Sampling inlets for the four mass spectrometers were located on a bulkhead plate on the side of the exhaust stack 17 m above the fuel bed.The GC-MS and PTR-MS shared a common inlet, which consisted of 20 m of unheated 3.97 mm I.D. perfluoroalkoxy Teflon tubing (Warneke et al., 2011).The portion of the inlet line inside the exhaust stack (40 cm) was sheathed by a stainless steel tube (40 cm, 6.4 mm I.D.) that extended 30 cm from the wall of the exhaust stack and was pointing upwards (away from the fuel bed below) in an effort to reduce the amount of particles pulled into the sample line.A sample pump continuously flushed the 20 m sample line with 7 L min −1 flow of stack air reducing the inlet residence time to less than 3 s.Separate inlets for both the PIT-MS and NI-PT-CIMS were of similar materials and design, but shorter lengths further reducing inlet residence times and allowing for sample dilution for the NI-PT-CIMS (Roberts et al., 2010;Veres et al., 2010).
The open optical path of the OP-FTIR spanned the full width of the exhaust stack so that the emissions could be measured instantaneously without the use of an inlet. All measurements were time-aligned with the OP-FTIR in order to account for different inlet residence times and instrument response times. Previous comparisons of the OP-FTIR to a PTR-MS with a moveable inlet confirmed that the stack emissions are well mixed at the height of the sampling platform (Christian et al., 2004). Other possible sampling artifacts, such as losses to the walls of the inlets, were investigated via laboratory tests and in situ instrument comparisons (Burling et al., 2010; Roberts et al., 2010; Veres et al., 2010; Warneke et al., 2011).
Discrete sampling by in situ GC-MS
A custom-built, dual-channel GC-MS was used to identify and quantify an extensive set of VOCs.For each biomass burn, the GC-MS simultaneously collected two samples, one for each channel, and analyzed them in series using either an Al 2 O 3 /KCl PLOT column (channel 1) or a semi-polar DB-624 capillary column (channel 2) plumbed to a heated 4-port valve that sequentially directed the column effluent to a linear quadrupole mass spectrometer (Agilent 5973N).The sample traps for each channel were configured to maximize the cryogenic trapping efficiencies of high-volatility VOCs (channel 1) or VOCs of lesser volatility and/or higher polarity (channel 2) while minimizing the amount of CO 2 and water in each sample (Goldan et al., 2004;Gilman et al., 2010).While ozone traps were not required for these experiments, they were left in the sample path in order to be consistent with other ambient air measurements and laboratory calibrations using this instrument.
For each channel, 70 mL min −1 was continuously subsampled from the high volume (7 L min −1 ) sample stream for 20 to 300 s resulting in sample volumes from 23-350 mL each.Smaller sample volumes were often collected during periods of intense flaming combustion in order to avoid trapping excessive CO 2 , which could lead to dry ice forming in the sample trap, thereby restricting sample flow.Larger sample volumes allowed for detection of trace species, but peak resolution would degrade if the column was overloaded.Sample acquisition times longer than 300 s were not possible with the GC-MS used in this study.
The mass spectrometer was operated either in total ion mode, scanning all mass-to-charge ratios (m/z) from 29 to 150, or in selective ion mode, scanning a subset of m/z's. The majority of the samples were analyzed in selective ion mode for improved signal-to-noise; however, at least one sample of each fuel type was analyzed in total ion mode to aid identification and to quantify species whose m/z may not have been scanned in selective ion mode. The entire GC-MS sampling and analysis cycle required 30 min; therefore, the GC-MS was limited to sampling each laboratory burn only once per fire for burns that lasted less than 30 min. Discrete GC-MS samples were collected at various stages of replicate burns as determined by visual inspection of the fire in addition to the real-time measurements via PTR-MS. The majority of the GC-MS samples were collected during the first half of the laboratory burns when the gaseous emissions were most intense, and analysis suggests that an equivalent number of GC-MS samples were collected in the flaming and smoldering phases (see Sect. 3.2).
Each VOC was identified by its retention time and quantified by the integrated peak area of a distinctive m/z in order to reduce any potential interferences from co-eluting compounds.Identities of new compounds that had never before been measured by this GC-MS were confirmed by (1) matching the associated electron ionization mass spectrum when operated in total ion mode to the National Institute of Standards and Technology's mass spectral database, and (2) comparing their respective retention times and boiling points to a list of compounds previously measured by the GC-MS.Examples of these species include 1,3-butadiyne (C 4 H 2 ), butenyne (vinyl acetylene, C 4 H 4 ), methylnitrite (CH 3 ONO), nitromethane (CH 3 NO 2 ), methyl pyrazole (C 4 H 6 N 2 ), ethyl pyrazine (C 6 H 8 N 2 ), and tricarbon dioxide (carbon suboxide, C 3 O 2 ).For some species, we were able to identify the chemical family (defined by its molecular formula and common chemical moiety) but not the exact chemical structure or identity.For these cases, we present the emissions as a sum of the unidentified isomers for a particular chemical family (see Table 2).We report only the compounds that were above the limits of detection for the majority of the biomass burns and where the molecular formula could be identified.
Of the 187 gases quantified by the GC-MS in this study, 95 were individually calibrated with commercially available and/or custom-made gravimetrically based compressed gas calibration standards. The limit of detection, precision, and accuracy are compound dependent, but are conservatively better than 0.010 ppbv, 15 %, and 25 %, respectively (Gilman et al., 2009, 2010). For compounds where a calibration standard was not available (identified by an asterisk in Table 2), the calibration factors were estimated using measured calibrations of compounds in a similar chemical family with a similar retention time and, when possible, a similar mass fragmentation pattern. In order to estimate the uncertainty in the accuracy of un-calibrated species, we use measured calibrations of ethyl benzene, o-xylene, and the sum of m- and p-xylenes as a test case. These aromatic species have similar mass fragmentation patterns, are all quantified using m/z 91, and elute within 1 min of each other, signifying similar physical properties. If a single calibration factor were used for all these isomers, then the reported mixing ratios could be miscalculated by up to 34 %. We therefore conservatively estimate the accuracy of all un-calibrated species as 50 %.
Emission ratios
Emission ratios (ERs) to carbon monoxide (CO) for each gas-phase compound, X, were calculated as

ER_X = (X − X_bknd) / (CO − CO_bknd),   (1)

where X − X_bknd and CO − CO_bknd are the excess mixing ratios of compound X and CO, respectively, during a fire above the background. Background values, X_bknd and CO_bknd, are equal to the average mixing ratio of a species in the pre-conditioned ambient air inside the exhaust stack in the absence of a fire.
For the OP-FTIR, PTR-MS, PIT-MS and NI-PT-CIMS, backgrounds were determined from the mean responses of the ambient air inside the exhaust stack for a minimum of 60 s prior to the ignition of each fire.At least one background sample was collected for the GC-MS each day.The composition and average mixing ratios of VOCs in the stack backgrounds were consistent over the course of the campaign and were generally much lower than the mixing ratios observed during biomass burns.For example, the average background ethyne measured by the GC-MS was 1.22 ± 0.33 ppbv (median = 1.21 ppbv) compared to a mean ethyne of 150 ± 460 ppbv (median = 42 ppbv) in the fires.
The large standard deviation for ethyne in the biomass burns reflects the large variability in ethyne emissions rather than uncertainty in the measurement.
The type of emission ratio, discrete or fire-integrated, is determined by the sampling frequency of the instrument and sampling duration.The GC-MS used in these experiments is only capable of collecting discrete samples.Discrete ERs represent the average X relative to CO for a relatively short portion of a fire corresponding to the GC-MS sample acquisition time.The OP-FTIR, PTR-MS, and NI-PT-CIMS are fast-response instruments that are sampled every 1 to 10 s over the entire duration of each fire.These measurements were used to calculate both fire-integrated ERs that represent X/ CO over the entirety of a fire (dt ≥ 1000 s) (Burling et al., 2010;Veres et al., 2010;Warneke et al., 2011) as well as discrete ERs coincident with the GC-MS sample acquisition (dt = 20 to 300 s) as discussed in Sect.2.3.We reference all ERs to CO because the majority of VOCs and CO are co-emitted by smoldering combustion during the fire whereas CO 2 emissions occur mostly from flaming combustion (see Sect. 3.1).Additionally, ratios to CO are commonly reported in the literature for biomass burning and urban VOC emission sources.All data presented here are in units of ppbv VOC per ppmv CO, which is equivalent to a molar ratio (mmol VOC per mol CO).
Modified combustion efficiency
Modified combustion efficiency (MCE) is used here to describe the relative contributions of flaming and smoldering combustion and is equal to

MCE = ΔCO2 / (ΔCO2 + ΔCO),   (2)

where ΔCO and ΔCO2 are the excess mixing ratios of CO and CO2, respectively, during a fire above the background (Yokelson et al., 1996). MCE can be calculated instantaneously or for discrete (time-integrated) samples.
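A short sketch of how a discrete ER (Eq. 1) and an MCE (Eq. 2) can be computed from background-subtracted mixing ratios is given below; the variable names and the made-up mixing ratios are illustrative only.

```python
import numpy as np

def excess(x, x_background):
    """Excess mixing ratio above the pre-fire background."""
    return np.asarray(x) - x_background

def emission_ratio(x, x_bknd, co, co_bknd):
    """Discrete emission ratio of compound X relative to CO (Eq. 1).

    With X in ppbv and CO in ppmv, the result is in ppbv X per ppmv CO,
    i.e. mmol X per mol CO.
    """
    return np.mean(excess(x, x_bknd)) / np.mean(excess(co, co_bknd))

def modified_combustion_efficiency(co2, co2_bknd, co, co_bknd):
    """MCE = dCO2 / (dCO2 + dCO), Eq. 2; both species in the same units (e.g. ppmv)."""
    d_co2 = np.mean(excess(co2, co2_bknd))
    d_co = np.mean(excess(co, co_bknd))
    return d_co2 / (d_co2 + d_co)

# Example with made-up mixing ratios over one GC-MS sample acquisition window:
benzene = [5.0, 6.5, 4.8]          # ppbv
co      = [3.1, 3.8, 2.9]          # ppmv
co2     = [950.0, 1010.0, 930.0]   # ppmv
print(emission_ratio(benzene, 1.2, co, 0.15))               # ppbv per ppmv CO
print(modified_combustion_efficiency(co2, 400.0, co, 0.15)) # dimensionless
```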
Degree of unsaturation
The degree of unsaturation (D), also known as the "ring and double bond equivalent" (Murray et al., 2013), is equal to

D = C − H/2 + N/2 + 1,   (3)

where C, N, and H denote the number of carbon, nitrogen, and hydrogen atoms, respectively. Table 2 includes D values for each species reported.
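A one-line helper implementing Eq. (3) is shown below; oxygen atoms do not change D and are therefore omitted.

```python
def degree_of_unsaturation(c, h, n=0):
    """Ring and double bond equivalent, D = C - H/2 + N/2 + 1 (Eq. 3)."""
    return c - h / 2.0 + n / 2.0 + 1.0

# Examples: benzene (C6H6) -> 4, furan (C4H4O) -> 3, isoprene (C5H8) -> 2.
print(degree_of_unsaturation(6, 6))   # 4.0
print(degree_of_unsaturation(4, 4))   # 3.0 (the oxygen in furan is ignored)
print(degree_of_unsaturation(5, 8))   # 2.0
```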
Molar mass
The molar mass (µg m−3) emitted per ppmv CO is equal to

mass emitted = ER × MW / MV,   (4)

where ER is the mean discrete emission ratio of a gas, MW is the molecular weight (g mol−1), and MV is the molar volume (24.5 L at 1 atm and 25 °C). Table 2 includes the nominal MW for each species reported.
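The conversion of Eq. (4) can be written as the small helper below; the benzene example values are illustrative only.

```python
def mass_per_ppmv_co(er_ppbv_per_ppmv_co, molecular_weight):
    """Mass emitted (ug m-3) per ppmv CO, Eq. 4: ER x MW / MV with MV = 24.5 L mol-1."""
    MV = 24.5  # molar volume in L at 1 atm and 25 C
    return er_ppbv_per_ppmv_co * molecular_weight / MV

# Example: an ER of 2 ppbv per ppmv CO for benzene (MW = 78 g mol-1).
print(mass_per_ppmv_co(2.0, 78.0))  # ~6.4 ug m-3 per ppmv CO
```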
OH reactivity
Total OH reactivity represents the sum of all sinks of the hydroxyl radical (OH) with all reactive gases and is equal to

OH reactivity = Σ_X (ER_X × k_OH,X × A),   (5)

where ER is the discrete emission ratio for each measured gas (VOCs, CH4, CO, NO2, and SO2; ppbv per ppmv CO), k_OH is the second-order reaction rate coefficient of a gas with the hydroxyl radical (cm3 molec−1 s−1), and A is a molar concentration conversion factor (2.46 × 10^10 molec cm−3 ppbv−1 at 1 atm and 25 °C). Table 2 includes the k_OH values for all reported species, which were compiled using the National Institute of Standards and Technology's Chemical Kinetics Database and the references therein (Manion et al., 2015). We estimated the k_OH values that were not in the database (indicated by an asterisk in Table 2) using those of analogous compounds.
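The summation of Eq. (5) is sketched below. The rate coefficients in the example are rough, order-of-magnitude values for illustration only and are not the tabulated values of Table 2.

```python
A = 2.46e10  # molec cm-3 ppbv-1 at 1 atm and 25 C

def total_oh_reactivity(emission_ratios, k_oh):
    """Total OH reactivity (s-1 per ppmv CO), Eq. 5.

    `emission_ratios` maps species to discrete ERs (ppbv per ppmv CO) and
    `k_oh` maps species to OH rate coefficients (cm3 molec-1 s-1).
    """
    return sum(emission_ratios[x] * k_oh[x] * A for x in emission_ratios)

# Illustrative, approximate inputs (the ER of CO relative to CO is 1000 by definition).
er = {"CO": 1000.0, "CH4": 60.0, "isoprene": 0.5}          # ppbv per ppmv CO
k  = {"CO": 2.4e-13, "CH4": 6.4e-15, "isoprene": 1.0e-10}  # cm3 molec-1 s-1
print(total_oh_reactivity(er, k))
```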
SOA formation potential
The total SOA formation potential represents the sum of all "potential" SOA formed from all measured gases and is calculated from the discrete emission ratio, ER, of each measured gas (VOCs, CH4, CO, NO2, and SO2; ppbv per ppmv CO) weighted by SOAP, a unitless, model-derived SOA potential published by Derwent et al. (2010). Briefly, Derwent et al. (2010) calculated SOAPs of 113 VOCs using a photochemical transport model that included explicit chemistry from the Master Chemical Mechanism (MCM v3.1) and was initialized using an idealized set of atmospheric conditions typical of a polluted urban boundary layer. All SOAP values reflect the simulated mass of aerosol formed per mass of VOC reacted and are expressed relative to toluene (i.e., SOAP_Toluene ≡ 100).
The SOAP values published in the Derwent et al. (2010) study are included in Table 2 and were used to estimate values for all other species (indicated by an asterisk in Table 2) based on chemical similarities. For example, species such as styrene and benzaldehyde have SOAP values of ∼ 200 (i.e., twice as much potential SOA formed compared to toluene) and were used as proxies for the SOAP values of aromatics with unsaturated substituents, benzofurans, and benzenediols.
Fourmile Canyon Fire in Boulder, Colorado
Ambient air measurements of biomass burning emissions from the Fourmile Canyon Fire, which occurred in the foothills 10 km west of Boulder, Colorado, were conducted from 7 to 9 September 2010. Over the course of the Fourmile Fire, approximately 25 km2 of land including 168 structures burned.
The burned vegetation consisted primarily of Douglas-fir (Pseudotsuga menziesii) and ponderosa pine (Pinus ponderosa) mixed with juniper (Juniperius scopulorum and communis), mountain mahogany (Cercocarpus), and various shrubs and grasses common to the mountain zone of the Colorado Front Range (Graham et al., 2012).During the measurement period, down-sloping winds ranging from 1 to 12 m s −1 (mean = 3.5 m s −1 ) periodically brought biomass burning emissions to NOAA's Earth System Research Laboratory located at the western edge of the city of Boulder.The previously described in situ GC-MS was housed inside the laboratory and sampled outside air via a 15 m perfluoroalkoxy Teflon sample line (residence time < 2 s) attached to an exterior port on the western side of the building.CO was measured via a co-located vacuum-UV resonance fluorescence instrument (Gerbig et al., 1999).
Temporal profiles and measurement comparisons
Temporal profiles of laboratory biomass burns provide valuable insight into the combustion chemistry and processes that lead to the emissions of various species (Yokelson et al., 1996). Figure 1 shows temporal profiles of an example burn in order to illustrate (i) flaming, mixed, and smoldering combustion phases and/or processes and (ii) the sampling frequencies and temporal overlap of the fast-response instruments compared to the GC-MS. Upon ignition, there is an immediate and substantial increase in CO2 and NOx (NOx = NO + NO2) indicative of vigorous flaming combustion. This transitions to a mixed phase characterized by diminishing CO2 and NOx emissions and a second increase in CO. The fire eventually evolves to a weakly emitting, protracted period of mostly smoldering combustion (Yokelson et al., 1996; Burling et al., 2010). Figure 1 also includes the temporal profile of the modified combustion efficiency (MCE, Eq. 2), which is a proxy for the relative amounts of flaming and smoldering combustion (Yokelson et al., 1996).
During the initial flaming phase of the fire, the MCE approaches unity due to the dominance of CO 2 emissions.
The MCE gradually decreases during smoldering combustion when CO emissions are more prominent. The majority of the GC-MS samples were collected during the first half of the laboratory burns (e.g., t < 1000 s in Fig. 1) when the gaseous emissions were most intense. A smaller number of samples were collected during the end of a burn (e.g., t ≥ 1000 s in Fig. 1) when emissions were lower for most species. See Sect. 3.2 for further discussion of the GC-MS sampling strategy.
In order to compare measurements from multiple instruments, we calculated the average excess mixing ratios of a species, X, measured by the fast-response instruments over the corresponding GC-MS sample acquisition times for all 56 biomass burns. We compare the measurements using correlation plots of X for VOCs measured by the GC-MS versus the same compound measured by the OP-FTIR or an analogous m/z measured by the PTR-MS. The slopes and correlation coefficients, r, were determined by linear orthogonal distance regression analysis and are compiled in Fig. 2a. The average slope and standard deviation of the instrument comparison is 1.0 ± 0.2 with 0.93 < r < 0.99, signifying good overall agreement between the different measurement techniques for the species investigated here. A few comparisons are discussed in more detail below.
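A linear orthogonal distance regression of this kind can be carried out with scipy.odr, as sketched below on synthetic data; the fit model (slope plus intercept), the starting values and the data are illustrative assumptions, not the actual instrument comparison.

```python
import numpy as np
from scipy import odr
from scipy.stats import pearsonr

def linear_odr(x, y):
    """Slope and intercept from a linear orthogonal distance regression."""
    model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
    data = odr.RealData(x, y)
    fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
    return fit.beta[0], fit.beta[1]

# Synthetic example standing in for GC-MS vs OP-FTIR mixing ratios of one species.
rng = np.random.default_rng(2)
op_ftir = rng.uniform(0, 100, 50)                  # ppbv
gc_ms = 1.05 * op_ftir + rng.normal(0, 3, 50)      # ppbv, true slope ~1.05
slope, intercept = linear_odr(op_ftir, gc_ms)
r, _ = pearsonr(op_ftir, gc_ms)
print(slope, r)
```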
The largest difference between the GC-MS and the OP-FTIR observations was for propene (slope = 1.36) indicating that the GC-MS response is greater than the OP-FTIR; however, a correlation coefficient of 0.99 suggests that the offset is more likely from a calibration difference that remains unresolved.The possibility of a species with the same retention time and similar fragmentation pattern as propene that is also co-emitted at a consistent ratio relative to propene is unlikely but cannot be completely ruled out.For furan, the GC-MS had a lower response than OP-FTIR (slope = 0.77) indicating that the GC-MS may be biased low for furan or that the OP-FTIR may have spectral interferences that bias the measurement high.The temporal profiles of these measurements shown in Fig. 1 suggest that there was a spectral interference with the OP-FTIR measurement of furan as evidenced by the large emissions in the flaming phase that was not captured by the m/z 69 response of the PTR-MS.These early "spurious" OP-FTIR furan responses would (i) only affect the comparison for the GC-MS samples collected in the flaming phase of the fires and (ii) have not been observed in other biomass burning experiments utilizing this OP-FTIR (Christian et al., 2004;Stockwell et al., 2014).
Comparison of the GC-MS (isoprene + furan) vs. PTR-MS m/z 69 has the lowest slope of the instrument comparisons (GC-MS vs. PTR-MS; Stockwell et al., 2015). Direct comparisons of the real-time measurements for a variety of other species not measured by the GC-MS (e.g., formaldehyde, formic acid, and HONO) can be found elsewhere (Burling et al., 2010; Veres et al., 2010; Warneke et al., 2011).
Comparison of discrete and fire-integrated ERs
Fire-integrated ERs represent emissions from all combustion processes of a biomass burn, whereas discrete ERs capture a relatively brief snapshot of emissions from mixed combustion processes during a particular sampling period. Figure 1 includes time series of VOC to CO ERs determined by the real-time measurement techniques for select gases. Here we compare the two different measurement strategies, discrete vs. fire-integrated, in order to (i) determine if the discrete ERs measured by the GC-MS may be biased by the sample acquisition times, which typically occurred within the first half of a laboratory burn (t < 1000 s, Fig. 1) when emissions for most gases from flaming and smoldering combustion generally "peaked", and (ii) assess how well the discrete GC-MS samples are able to capture the fire-to-fire variability of emissions relative to CO. We do this by determining discrete ERs for the OP-FTIR or PTR-MS for each of the 56 biomass burns using Eq. (1), where the t start and t end times correspond to the GC-MS sample acquisition. The discrete ERs are then compared to the fire-integrated ERs measured by the same fast-response instrument so that potential measurement artifacts will not affect the comparison.
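Equation (1) is not reproduced in this excerpt; a common formulation of a discrete emission ratio is the ratio of the background-subtracted (excess) mixing ratios of the species and CO summed over the sampling window. The sketch below assumes that form, and the variable names are placeholders.

```python
import numpy as np

def discrete_emission_ratio(t, x, co, t_start, t_end, x_bkg=0.0, co_bkg=0.0):
    """Discrete ER of species X relative to CO over a GC-MS acquisition window:
    ratio of the summed excess mixing ratios between t_start and t_end."""
    t = np.asarray(t)
    window = (t >= t_start) & (t <= t_end)
    d_x = np.asarray(x)[window] - x_bkg
    d_co = np.asarray(co)[window] - co_bkg
    # dimensionless (ppbv X per ppbv CO); multiply by 1000 for ppbv per ppmv CO
    return d_x.sum() / d_co.sum()

# A fire-integrated ER would use the same expression with the window spanning
# ignition to extinction instead of the GC-MS sampling times.
```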
The slopes and correlation coefficients, r, of discrete versus fire-integrated ERs for select VOCs are summarized in Fig. 2b. These values were calculated using a linear orthogonal distance regression analysis of correlation plots of discrete vs. fire-integrated ERs, as shown in Fig. 3. The average slope and standard deviation is 1.2 ± 0.2, indicating that the discrete ERs are generally higher than the fire-integrated ERs by 20 % on average. This positive bias is a consequence of the GC-MS sampling strategy, which rarely included samples collected during purely smoldering combustion that occurs at the end of a burn (e.g., t ≥ 1000 s in Fig. 1) when absolute emissions and ERs are lower for most species. Using the data in Fig. 1 as an example, 95 % of the emissions of benzene (in ppbv) occur between ignition and 1000 s, and the mean ER during this time is twice as large as the mean ER in the later portion of the fire (time = 1001 s to extinction). For VOCs emitted during the later stages of a fire (e.g., 1,3-benzenediol), the discrete ERs will likely underestimate the emissions relative to CO. For example, the discrete ERs for benzenediol for the southeastern and southwestern fuels (Table 2) are 30 % lower than the mean fire-integrated ERs reported by Veres et al. (2010).
The ability of the GC-MS to capture the fire-to-fire variability in VOC emissions relative to CO is evaluated by the strength of the correlation, r, between the discrete and fire-integrated ERs (Fig. 2b). Species with the weakest correlations, such as ethyne and benzene, show a distinct bifurcation that is dependent upon the MCE of the discrete samples (Fig. 3). These compounds have a significant portion of their emissions in both the flaming and smoldering phases of a fire (see Fig. 1). For these types of compounds, discrete samples collected in the smoldering phase (low MCE) did not adequately represent the fire-integrated emissions that include the intense flaming emissions (high MCE), resulting in poor correlation between discrete and fire-integrated ERs for these species. We note that (i) the slopes are near unity for ethyne and benzene and (ii) there is an equal number of points above and below the 1 : 1 line for these species, indicating that an equal number of GC-MS samples were collected in the flaming and smoldering phases of the laboratory burns. VOCs that had the strongest correlations between the discrete and fire-integrated ERs (e.g., methanol and toluene, where r > 0.88) do not show a strong dependence on the MCE. Since CO is strongly associated with smoldering combustion (Yokelson et al., 1996; Burling et al., 2010), VOCs emitted primarily during this phase will be more tightly correlated with CO, and the variability between the discrete and fire-integrated ERs will be minimized.
In summary, the discrete GC-MS samples best characterize the fire-integrated emissions and fire-to-fire variability of species produced primarily by smoldering combustion. We conservatively estimate these values to be within a factor of 1.5 of the fire-integrated ERs for the majority of the species measured. A similar conclusion was reached by Yokelson et al. (2013), who compared discrete ERs measured during the same fire to each other. While fire-integrated ERs are considered to best represent BB emissions, these analyses suggest that collecting and averaging multiple discrete ERs at various stages of the same or replicate burns, as presented here, is an adequate substitute when fire-integrated ERs cannot be determined. Fire-integrated ERs are commonly used to determine fuel-based emission factors for a fire, but care must be taken converting discrete ERs into emission factors, as also discussed for these data in Yokelson et al. (2013).
Characterization of laboratory BB emissions
In order to merge data sets from multiple instruments, we report mean discrete ERs of over 200 organic gases, including methane and VOCs, and 9 inorganic gases relative to CO for the southwestern, southeastern, and northern fuel types in the US (Table 2). Mean ERs for each of the 18 individual fuel types are available at http://www.esrl.noaa.gov/csd/groups/csd7/measurements/2009firelab/. This study utilizes discrete ERs to characterize the chemical composition of the measured molar mass emitted, the VOC-OH reactivity, and the relative SOA formation potential of the measured gaseous emissions from various fuels categorized by the region where they were collected in order to compare potential atmospheric impacts of these emissions and identify key species that may impact air quality through formation of O 3 and/or SOA.
Figure 4 is a pictograph of all ERs presented in Table 2 as well as a histogram of the ERs for each of the three fuel regions in order to highlight commonalities and differences in the magnitudes and general chemical composition of fuels from different regions in the US. The distribution of ERs is shown as a function of three simple properties: the degree of unsaturation (D, Eq. 3), the number of oxygen atoms, and the molecular weight (MW) of individual VOCs. Atmospheric lifetimes and fates of VOCs will depend, in part, on these properties, which we use as simplified proxies for reactivity (D), solubility (O-atoms), and volatility (MW). Using this general framework, we highlight several key features that will be explored in further detail in the subsequent sections:

i. ERs are highly variable and span more than four orders of magnitude.
ii. The relative magnitude and composition of the gases emitted are different for fuels from each of the three geographic regions, i.e., the distribution of ERs is unique for the fuels within each fuel region.
iii. Southwestern fuels generally have lower ERs and northern fuels have the largest ERs. Collectively, the molar emission ratios are a factor of 3 greater for the northern fuels than the southwestern fuels.
iv. The largest ERs for all three fuel regions are associated with low molecular weight species (MW < 80 g mol −1 ) and/or those that contain one or more oxygen atom(s). These species also have lower degrees of unsaturation (D ≤ 2) and populate the upper left quadrants of Fig. 4.
VOCs with the largest ERs common to all fuel types are formaldehyde, ethene, acetic acid, and methanol (Table 2).
v. Over 82 % of the molar emissions of VOCs from biomass burning are unsaturated compounds (D ≥ 1), defined as having one or more pi-bonds (e.g., C=C or C=O double bonds, cyclic or aromatic rings, etc.). In general, these species are more likely to react with atmospheric oxidants and/or photo-dissociate depending on the chemical moiety, making unsaturated species potentially important O 3 and SOA precursors. VOCs that contain triple bonds (e.g., ethyne) are a notable exception as they tend to be less reactive.
vi. The number of VOCs in the upper right quadrants of Fig. 4 (increasing ERs and degree of unsaturation) is greatest for northern fuels and least for southwestern fuels. Many of the VOCs in this quadrant also have relatively high molecular weights (MW ≥ 100 g mol −1 ) and most contain at least one oxygen atom (e.g., benzenediol and benzofuran). The combination of these physical properties indicates that these species are relatively reactive, soluble, and of low enough volatility to make them potentially important SOA precursors.
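The degree of unsaturation used throughout this framework (Eq. 3 in the original text) is presumably the standard ring-plus-double-bond count computed from the molecular formula; a minimal sketch for a formula CcHhNnOo is shown below (oxygen atoms do not change D).

```python
def degree_of_unsaturation(c, h, n=0):
    """D = (2C + 2 + N - H) / 2; oxygen atoms do not affect D."""
    return (2 * c + 2 + n - h) / 2

print(degree_of_unsaturation(6, 6))   # benzene, C6H6 -> 4.0 (ring + 3 C=C)
print(degree_of_unsaturation(4, 4))   # furan, C4H4O -> 3.0 (ring + 2 C=C)
print(degree_of_unsaturation(1, 2))   # formaldehyde, CH2O -> 1.0 (one C=O)
```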
Molar mass of measured BB emissions
Here we compare the magnitude and composition of biomass burning emissions as a function of molar mass, which is a readily calculated physical property used to quantify BB emissions. For all 3 fuel regions, CO 2 was the overwhelmingly dominant gas-phase emission and singularly contributed over 95 % of the molar mass emitted that was measured. Collectively, CH 4 and the inorganic gases (e.g., CO 2 , CO, NO x , etc.) comprised over 99 % of all gaseous molar mass emitted and measured, while VOCs contributed only 0.27 ± 0.03 %, 0.34 ± 0.03 %, and 0.95 ± 0.07 % for the southeastern, southwestern, and northern fuels, respectively. Figure 5a-c shows the fractional composition and total molar mass of measured VOCs emitted per ppmv CO for each fuel region. The molar mass emitted by northern fuels (324 ± 22 µg m −3 ppmv CO −1 ) is 3.5 times greater than that of the southwestern fuels (92 ± 9 µg m −3 ppmv CO −1 ). For all three fuel regions, the emissions are dominated by oxygen-containing VOCs (OVOCs), which collectively comprise 57-68 % of the total mass emissions. The single largest contribution by a single chemical class is from OVOCs with low degrees of unsaturation (D ≤ 1), which contribute 29-40 % of the total molar mass. This chemical family is dominated by acetic acid, formaldehyde, and methanol emissions (Table 2). Compared to hydrocarbons and OVOCs, nitrogen-containing VOCs are emitted in substantially smaller fractions, less than 8 % of the total measured molar mass. Dominant nitrogen-containing VOCs include hydrocyanic acid (HCN), isocyanic acid (HNCO), acetonitrile (CH 3 CN), and methylnitrite (CH 3 ONO). The addition of all nitrogen-containing organics presented here would add approximately 5 % to the nitrogen budget presented in Burling et al. (2010); however, this would still leave > 50 % of the fuel nitrogen potentially ending up in the ash or being emitted as N 2 or other unmeasured nitrogen-containing gases, based on the nitrogen content of the fuels, which ranged from 0.48 to 1.3 %.
One limitation of this analysis is the exclusion of "unknown" species, which are (i) gaseous compounds that were measured but remain unidentified and were therefore omitted from this analysis because the chemical formula and family could not be properly identified or (ii) species that were undetectable by the suite of instruments listed in Table 1. We estimate the mass contribution from the first scenario using the fuel-based emission factors compiled by Yokelson et al. (2013) for all measured species including "unknown" masses observed by the PIT-MS. These "unidentified" non-methane organic compounds (NMOC, equivalent to VOCs) accounted for 31-47 % of the mass of VOCs emitted for the same fuels studied here (Yokelson et al., 2013). The second category of un-observed unknown species are likely to be of sufficiently high molecular weight, high polarity, and/or low volatility and thermal stability to escape detection by the GC-MS, a variety of chemical ionization mass spectrometers, and the OP-FTIR. For example, BB emissions of species such as glyoxal, glycoaldehyde, acetol, guaiacols, syringols, and amines have been reported in the literature (McDonald et al., 2000; Schauer et al., 2001; McMeeking et al., 2009; Akagi et al., 2011, 2012; Hatch et al., 2015) but would not be detectable by any of the instruments used in this experiment. The contribution of these types of compounds is difficult to assess, so we roughly estimate that an additional contribution of ∼ 5 % to the total mass of VOCs emitted could be from un-observed unknown VOCs. Collectively, we estimate that the species reported in Table 2 and compiled in Fig. 5a-c account for approximately 48-64 % of the expected mass of non-methane organic gases emitted from the fuels studied here. The total VOC molar mass for each fuel type should be considered a lower limit and could increase by a factor of ∼ 2; however, doubling the molar mass of VOCs to account for all identified and "unknown" species would increase the total mass measured by less than 0.78 % since the vast majority of carbon emissions from biomass burning are in the form of CO, CO 2 , and CH 4 (Yokelson et al., 1996; Burling et al., 2010). All of the totals presented in Fig. 5 should also be considered lower limits; however, the additional contribution of unidentified and/or un-measured species to the following discussions could not be determined.
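The 48-64 % mass-closure estimate quoted above follows directly from subtracting the 31-47 % of "unidentified" NMOC mass and the roughly 5 % allowance for un-observed species from the total; a minimal arithmetic check:

```python
# Mass closure check for the fraction of NMOC mass accounted for in Table 2:
unidentified_range = (0.31, 0.47)   # "unidentified" NMOC fraction (Yokelson et al., 2013)
unobserved_extra = 0.05             # rough estimate quoted in the text
accounted = [1.0 - u - unobserved_extra for u in unidentified_range]
print(accounted)   # -> [0.64, 0.48], i.e., ~48-64 % of expected NMOC mass
```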
OH reactivity of measured BB emissions
Oxidation of VOCs, often initiated by reaction with the hydroxyl radical (OH), in the presence of NO x (NO + NO 2 ) leads to the photochemical formation of O 3 and peroxynitrates, including peroxyacetic nitric anhydride (PAN). Due to the complex relationship between O 3 production, VOC / NO x ratios, and peroxynitrates, we use OH reactivity to (i) compare the magnitude of reactive gases emitted by combustion of fuels characteristic of each region and to (ii) identify key reactive species that may contribute to the photochemical formation of O 3 in a BB plume. Based on the calculated OH reactivities of all measured species listed in Table 2, VOCs are the dominant sink of OH for all fuel regions, contributing 70-90 (±16) % of the total calculated OH reactivity even though non-methane VOCs were only 0.27-0.95 % of the molar mass emitted.
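The OH reactivity of the emissions is presumably computed as the sum over all measured species of the OH rate coefficient times the species concentration, here normalized per ppmv of CO as in Fig. 5d-f. The sketch below illustrates that sum; the emission ratios and rate coefficients in the example are approximate, illustrative values covering only three species.

```python
# Number density of air at ~298 K and 1 atm is ~2.46e19 molec cm^-3,
# so 1 ppbv corresponds to ~2.46e10 molec cm^-3.
PPBV_TO_MOLEC_CM3 = 2.46e10

def oh_reactivity_per_ppmv_co(emission_ratios, k_oh):
    """Sum of k_OH+VOC * [VOC] (s^-1) per ppmv of CO, where emission_ratios are
    ppbv VOC per ppmv CO and k_oh are rate coefficients in cm^3 molec^-1 s^-1."""
    return sum(er * PPBV_TO_MOLEC_CM3 * k_oh[name]
               for name, er in emission_ratios.items())

# Illustrative (approximate) values only, not values from Table 2:
ers = {"propene": 2.0, "formaldehyde": 15.0, "methanol": 12.0}   # ppbv per ppmv CO
k = {"propene": 2.6e-11, "formaldehyde": 9.4e-12, "methanol": 9.0e-13}
print(f"{oh_reactivity_per_ppmv_co(ers, k):.2f} s-1 per ppmv CO")
```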
Figure 5d-f shows the fractional contributions and total VOC-OH reactivities per ppmv CO for each of the 3 fuel regions. The fresh BB emissions from northern fuels have the highest OH reactivity (61 ± 10 s −1 ppmv CO −1 ), which is 4.7 times greater than southwestern fuels (13 ± 3 s −1 ppmv CO −1 ). Collectively, OVOCs provide the majority of the OH reactivity of the southeastern fuels (54 %), while hydrocarbons dominate the southwestern (52 %) and northern fuels (57 %). Northern fuels have the largest contribution from highly reactive terpenes (14 %) due to the ERs of these species being, on average, a factor of 5 greater than southeastern fuels and a factor of 40 greater than southwestern fuels.
For all three fuel regions, alkenes have the largest contribution of any singular chemical class due to the large ERs of the reactive species ethene and propene, the latter of which is the single largest individual contributor to OH reactivity of any species measured. Oxidation of alkenes proceeds by OH addition to the double bond or hydrogen abstraction and often results in the secondary formation of carbonyls (e.g., acetaldehyde and acetone), which are important peroxynitrate precursors (Roberts et al., 2007; Fischer et al., 2014). Primary emissions of formaldehyde are the second-largest contributor, after propene, to the OH reactivity of all VOCs emitted for all 3 fuel regions. Formaldehyde is reactive with OH and is a photolytic source of RO x radicals that also contribute to O 3 formation, in addition to being an air toxic.
Other important contributions to OH reactivity of BB emissions include unsaturated OVOCs (e.g., 2-propenal, methyl vinyl ketone, and methacrolein), poly-unsaturated alkenes (e.g., 1,3-butadiene and 1,3-cyclopentadiene), and furans. The majority of these types of species are highly reactive with a variety of oxidants, and many of their oxidation products are photochemically active. For example, oxidation of 1,3-butadiene results in highly reactive OVOC products including furans and 2-propenal, a precursor of peroxyacrylic nitric anhydride (APAN) (Tuazon et al., 1999). The OH reactivity of furans is dominated by 2-methylfuran, 2-furaldehyde (2-furfural), and furan. Alkyl furans (e.g., 2,5-dimethylfuran and 2-ethylfuran) have reaction rate coefficients on the order of ∼ 1 × 10 −10 cm 3 molec −1 s −1 at 298 K (roughly equivalent to that of isoprene), and the major oxidation products include dicarbonyls (Bierbach et al., 1992, 1995; Alvarez et al., 2009). Up to 27 furan isomers have been identified from the combustion of Ponderosa Pine (Hatch et al., 2015), indicating that this is an important class of species that should be further explored in order to better determine their potential contributions to O 3 and SOA formation.
Nitrogen-containing VOCs contribute less than 4 % of the OH reactivity of all fuels due to the low reactivities of the most abundant emissions, which often contain −C ≡ N functional groups. Some nitriles, such as acetonitrile (CH 3 CN), can have atmospheric lifetimes on the order of months, making these species good markers of long-range transport of BB plumes (Holzinger et al., 1999; de Gouw et al., 2003, 2006). Other more reactive nitrogen-containing organics, including 2-propenenitrile, benzonitrile, and heterocyclic species such as pyrroles, could serve as BB markers of fresh plumes (Friedli et al., 2001; Karl et al., 2007).
SOA formation potential of measured BB emissions
Figure 5g-i shows the composition and mean SOA formation potentials of VOCs emitted for each of the three fuel regions. Southwestern fuels have the lowest SOA potential (480 per ppmv CO) compared to southeastern and northern fuels, which have estimated SOAPs 2.7 and 5.1 times greater, respectively. Unsaturated OVOCs are the dominant fraction for all three fuel regions due to the relatively large ERs and SOAPs of benzenediols (sum of 1,2- and 1,3-), benzaldehyde, and phenols. Schauer et al. (2001) reports significant gaseous emissions of benzenediols from combustion of pine in a fireplace and shows that 1,2-benzenediol (o-benzenediol) is the dominant gas-phase isomer while 1,3-benzenediol (m-benzenediol) is primarily associated with the particle phase.
The discrete ERs used in this comparison may underestimate the emissions and SOA contribution of several compounds emitted in the later portions of a laboratory burn when emissions of most VOCs and CO were lower as previously discussed (Sect.3.2).
The largest contributions to SOAP from hydrocarbons include aromatics with saturated functional groups (if any), such as benzene and toluene, and aromatics with unsaturated substituents, such as styrene. Traditionally, these are the species that are thought to be the largest contributors to SOA formation from urban emissions (Odum et al., 1997; Bahreini et al., 2012), although predicted SOA is typically much lower than observed in ambient air, suggesting that the aerosol yields may be too low or that there are additional SOA precursors that remain unaccounted for (de Gouw et al., 2005).
Monoterpenes have a very small (< 2 %) contribution to total SOAP.The calculated SOAPs of monoterpenes are only 20 % that of toluene (Derwent et al., 2010).This is in contrast to measured aerosol yields which are approximately 1.7 times higher for monoterpenes compared to toluene (Pandis et al., 1992).As a sensitivity test, we increased the SOAPs of the monoterpenes by a factor of 10 bringing the SOAP ratio of monoterpenes to toluene in line with that of measured aerosol yields.This resulted in modest increases in total SOAP of only 2 % for SW and 5 % for SE fuels.Northern fuels had the largest increase in total SOAP at 16 %.With the adjusted monoterpene SOAPs, the fractional contribution of terpenes increased from 1.8 % (Fig. 5i) to 15 % of the total SOAP while the contribution of unsaturated OVOCs remained the dominant class but was reduced from 67 to 58 % of the total SOAP.This sensitivity test suggests that the contributions of monoterpenes are likely underestimated for northern fuels if the SOAP scale is used; however, the largest contributions to SOAP for the northern fuels continues to be from oxygenated aromatics (benzenediols, phenols, and benzaldehyde).For comparison, Hatch et al. (2015) estimated that the SOA mass formed from the combustion of Ponderosa Pine is dominated by aromatic hydrocarbons (45 %), terpenes (25 %), phenols (9 %), and furans (9 %); however, their analysis did not include contributions from benzenediols (not measured), benzaldehyde or benzofurans (measured but not included in estimate).
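The relative SOA formation potentials in Fig. 5g-i appear to be ER-weighted sums of SOAP values (on the scale of Derwent et al., 2010, where toluene = 100); the sketch below shows that weighting together with the factor-of-10 monoterpene sensitivity test described above. All numerical values in the example are illustrative placeholders rather than values from Table 2.

```python
def total_soap(emission_ratios, soap, scale=None):
    """Relative SOA formation potential: sum of ER_i * SOAP_i, optionally
    rescaling selected species (e.g., monoterpenes) as a sensitivity test."""
    scale = scale or {}
    return sum(er * soap[name] * scale.get(name, 1.0)
               for name, er in emission_ratios.items())

# Illustrative numbers only (SOAP values on the toluene = 100 scale):
ers = {"benzene": 1.5, "toluene": 0.6, "alpha-pinene": 0.4}
soap = {"benzene": 92.0, "toluene": 100.0, "alpha-pinene": 17.0}
base = total_soap(ers, soap)
adjusted = total_soap(ers, soap, scale={"alpha-pinene": 10.0})  # x10 monoterpene test
print(base, adjusted)
```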
Field measurements of BB emissions
Here we present field measurements of VOCs in ambient air during the Fourmile Canyon Fire that affected Boulder, Colorado in September 2010. The in situ GC-MS measurements are shown in Fig. 6 and summarized in Table 3. We were able to identify and quantify a number of VOCs in ambient BB plumes that we had only previously observed in the fire emissions at the Fire Sciences Laboratory. Analysis of BB plumes from the Fourmile Canyon Fire afforded a unique opportunity to investigate BB emissions measured by this same GC-MS system in simulated and real fires and to explore issues associated with the presence of other VOC sources, such as urban emissions and natural biogenic emissions, during both daytime and nighttime, with nighttime smoke measurements being very rarely reported (Adler et al., 2011). First, we identify the potential emission sources impacting the measurements. Acetonitrile is a common BB tracer that we use to help clarify periods of BB influence. As seen in Fig. 6, BB plumes are readily distinguished by concurrent increases in acetonitrile (CH 3 CN), carbon monoxide (CO), and several VOCs. Species such as benzonitrile and furan are very tightly correlated with acetonitrile (r > 0.94, Table 3), and enhancements in ambient mixing ratios above the detection limit only occur in the BB plumes, indicating that BB was the only significant source of these compounds. VOCs such as isoprene and alpha-pinene were similarly enhanced in the BB plumes and well correlated with acetonitrile during BB episodes; however, the mixing ratios observed in the BB plume were generally lower than those observed at other times from the natural sunlight-dependent emissions of isoprene (e.g., 8 September 2010, 09:00-16:00 LT, local time) and from the accumulation of monoterpenes in the nocturnal boundary layer (e.g., 8 September 2010 18:00 LT to 9 September 2010 06:00 LT). 3-Carene was the only monoterpene that had significantly higher mixing ratios in the BB plume than in biogenic emissions. Ethene, ethyne, benzene, styrene, and methanol were enhanced in the BB plumes but are also present in urban emissions. An urban plume at 06:00-09:00 LT on 9 September 2010 (Fig. 6) is enhanced in all of these species and CO; however, acetonitrile is not enhanced.
Observed enhancement ratios of several VOCs relative to acetonitrile and CO are compiled in Table 3, along with the types of emission sources for each VOC. Figure 7 shows a comparison of the VOC to acetonitrile ratios of select species for the Fourmile Canyon Fire and the laboratory-based biomass burns of all fuel types. We have identified benzofuran, 2-furaldehyde, 2-methylfuran, furan, and benzonitrile as the "best" tracers for BB emissions from these observations. These species (i) were well correlated with both acetonitrile and CO in the BB plumes, (ii) had negligible emissions from the urban and biogenic sources impacting the measurement site, and (iii) had large enhancements in BB plumes. In theory, the relative ratios of these species to acetonitrile may also be used as a BB-specific photochemical clock since these species span a range of reactivities that are all much greater than that of acetonitrile (Table 3). We compared the enhancement ratios of each VOC marker vs. acetonitrile for the two BB plumes observed on 8 September 2010 in order to determine if the relative age of the two BB plumes could be distinguished. While the enhancement ratios for several VOCs in each plume were statistically different from one another, there was no clear relationship between the observed differences in the enhancement ratios and the relative reactivity of the VOCs. Thus, small differences in the observed enhancement ratios more likely relate to differences in the fuel composition, the relative ratio of flaming vs. smoldering emissions in each BB plume, or variable secondary sources. Given enough time for significant photochemistry to occur as a BB plume moves further from the source, these ratios could be more useful to estimate photochemical ages.
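One common way to turn such ratios into a photochemical age is to assume that the marker and acetonitrile are removed only by OH, so the age follows from the decay of the marker-to-acetonitrile ratio. The sketch below is a generic illustration of that approach, not a calculation performed in the text; the initial ratio, rate coefficients, and OH concentration used in the example are illustrative assumptions.

```python
import math

def photochemical_age_hours(ratio_initial, ratio_observed, k_voc, k_acn,
                            oh_conc=1.5e6):
    """Estimate plume age (hours) from the decay of a VOC/CH3CN ratio,
    assuming both species are removed only by OH:
    age = ln(R0 / R) / ((k_voc - k_acn) * [OH])."""
    age_s = math.log(ratio_initial / ratio_observed) / ((k_voc - k_acn) * oh_conc)
    return age_s / 3600.0

# Illustrative: furan (k ~ 4e-11 cm^3 molec^-1 s^-1) vs. acetonitrile
# (k ~ 2.2e-14), with an assumed [OH] of 1.5e6 molec cm^-3.
print(photochemical_age_hours(2.0, 1.0, 4.0e-11, 2.2e-14))  # ~3 hours
```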
Conclusions
We report a chemically detailed analysis of the trace gases emitted from burning 18 different biomass fuel types important in the southwestern, southeastern, and northern US.A complementary suite of state-of-the-art instruments was used to identify and quantify over 200 organic and 9 inorganic gases emitted from laboratory burns.Most of the species were quantified via discrete sampling by the GC-MS, which also provided confirmation for the real-time PIT-MS and PTR-MS mass assignments (Warneke et al., 2011).The variability in emissions over the course of each biomass burn was measured in detail by the fast-response instruments providing valuable insight into the combustion chemistry and processes that govern the emissions of various species.
By comparing discrete and fire-integrated ERs for various VOCs relative to CO, we show that the discrete GC-MS samples adequately represented the fire-integrated ERs (within an average factor of 1.2 ± 0.2) and the fire-to-fire variability for VOCs emitted mainly by smoldering combustion, which constitute the majority of VOCs. Discrete ERs for VOCs emitted by both flaming and smoldering were highly variable and showed a clear bifurcation depending on the mix of combustion processes during sampling. This analysis highlights the importance of collecting multiple discrete samples at various stages of replicate burns if fire-integrated emissions cannot be measured, to ensure adequate measurement of all VOCs.
The distribution of VOC emissions (magnitude and composition) was different for each fuel region.The largest total VOC emissions were from fuels common to the northern US while southwestern US fuels produced the lowest total VOC emissions.VOCs contributed less than 0.78 % ± 0.12 % of total detected gas-phase emissions by mole and less than 0.95 % ± 0.07 % by mass due to the predominance of CO 2 , CO, CH 4 , and NO x emissions.However, VOCs contributed 70-90 (±16) % of the total calculated OH reactivity and 100 % of the potential SOA precursors emitted from combustion of biomass.Over 82 % of the VOC emissions by mole are unsaturated species including highly reactive alkenes, aromatics and terpenes as well as photolabile OVOCs such as aldehydes and ketones.VOCs with the largest ERs common to all fuel types are formaldehyde, ethene, acetic acid, and methanol.
OVOCs contributed the dominant fraction of both the total VOC mass emitted (> 57 %) and potential SOA precursors (> 52 %), and also contributed a significant fraction of the OH reactivity for all fuel regions, making them an important class of VOCs for understanding the air quality impacts of BB emissions. Reactive and photolabile OVOCs such as formaldehyde, 2-propenal (acrolein), and 2-butenal (crotonaldehyde) are toxic, a source of free radicals, and/or precursors of peroxynitrates that may contribute to O 3 formation downwind of the source. Furans are a class of OVOCs in BB emissions that contributed 9 to 14 % of the VOC-OH reactivity for all fuel regions; however, their potential as SOA precursors, particularly for species such as 2-furaldehyde and benzofuran, requires further study. The estimated SOA formation potential was dominated by oxygenated aromatics (benzenediols, phenols, and benzaldehyde). Potentially important species that were not measured but should be considered in future studies include glyoxal, glycoaldehyde, acetol, guaiacols, and syringols (Stockwell et al., 2015).

Figure 7. Correlation plots of VOCs versus acetonitrile for all 56 laboratory biomass burns (grey markers) and the Fourmile Canyon Fire (red markers correspond to the BB plume identified in Fig. 6). The best-fit line for the Fourmile Canyon Fire samples is shown in black along with the slopes (S) and fit coefficients (r).
The Fourmile Canyon Fire in Boulder, CO, allowed us to identify and quantify a number of VOCs in ambient BB plumes that we had only previously observed in the emissions from laboratory fires at the Fire Sciences facility and investigate BB emissions in the presence of other VOC sources such as urban emissions and biogenic emissions during both the day and nighttime.We identified benzofuran, 2-furaldehyde, 2-methylfuran, furan, and benzonitrile as the "best" tracers for BB emissions from our observations.In theory, the relative ratios of these species to acetonitrile may also be used as a BB-specific photochemical clock since each of these species represent a range of reactivities assuming a negligible photochemical source.
Figure 1 .
Figure 1.Temporal profiles of mixing ratios and emission ratios (ER) of selected gases and the modified combustion efficiency (MCE) for an example laboratory burn of Emory Oak Woodland fuel from Fort Huachuca, Arizona.(a) Mixing ratios of CO 2 , CO, and NO x measured by OP-FTIR.The MCE trace is colored by the key and scale on the right.The vertical bars represent the flaming combustion phase of the laboratory burn (yellow) and the GC-MS sample acquisition time (grey).(b)-(f) Discrete GC-MS measured mixing ratios are shown as markers.(b)-(g) Mixing ratios measured by PTR-MS (benzene, m/z 69 = isoprene + furan + other, and acetonitrile), OP-FTIR (furan, ethyne, and methanol), and NI-PT-CIMS (benzenediol) are shown as lines and the corresponding VOC to CO ERs are shown as filled traces.
Figure 2 .
Figure 2. Slopes and correlation coefficients, r, determined from correlation plots of (a) mixing ratios measured by the GC-MS versus the average mixing ratio measured by the OP-FTIR or PTR-MS during the GC-MS sample acquisition time and (b) discrete vs. fire-integrated emission ratios of select VOCs relative to CO as measured by the OP-FTIR or PTR-MS. The black dashed line represents slopes equal to 1. The average of the slopes and the standard deviation is shown by the red shaded bands. The green bands represent r > 0.90.
Figure 3 .
Figure 3. Correlation plots of the discrete versus fire-integrated emission ratios (ER) for ethyne and methanol measured by the OP-FTIR and benzene and toluene measured by the PTR-MS.Each data point represents one biomass burn and is colored by the modified combustion efficiency (MCE) corresponding to the discrete sampling times of the GC-MS.MCE values near unity are associated with flaming combustion and lower MCE values are associated with smoldering combustion.The linear 2-sided regression lines forced through the origin are shown as red lines and the 1 : 1 ratio is shown by the dashed lines.
Figure 4 .
Figure 4. Discrete molar emission ratios for all VOCs reported in Table 2 as a function of the degree of unsaturation, D, for each fuel region.Emission ratios are colored by the corresponding molecular weight and the marker width represents the corresponding number of oxygen (O) atoms.The dashed lines represent the median values for all VOCs from all fuel regions (ER = 0.0427 mmol per mol CO and D = 2).The histogram on the right summarizes the distribution of molar emission ratios for each fuel region.
Figure 5 .
Figure 5. Contributions of (non-methane) VOCs reported in Table 2 to (a)-(c) the measured molar mass, (d)-(f) OH reactivity, and (g)-(i) relative SOA formation potential for the southwestern, southeastern, and northern fuel regions.Totals for each fuel region are shown below each pie chart.
Figure 6 .
Figure 6. Time series of ambient air measurements in Boulder, CO during the Fourmile Canyon Fire. The top bar indicates nighttime (grey), daytime (yellow), and biomass burning plumes (red markers). CO and acetonitrile are included in all 4 panels.
Table 2 .
Mean VOC to CO discrete emission ratios (ERs, ppbv per ppmv CO) for the southwestern (SW), southeastern (SE), and northern (N) fuel regions.
Table 2 .
Continued. If the exact compound identity could not be determined, then the species are identified using general names that reflect the chemical family and formula. For example, hexenes (sum of 3 isomers) may include species such as cis- and trans-3-hexene. Alternative names, such as p-cymene for 1-methyl-4-isopropylbenzene, or common abbreviations, such as MEK for 2-butanone, are also included.
Table 3 .
Slopes and correlation coefficients (r) for VOC to carbon monoxide (CO) and VOC to acetonitrile (CH 3 CN) ratios observed in biomass burning (BB) plumes from the Fourmile Canyon Fire as identified in Fig. 6. The VOC to CO slope is in units of ppbv VOC per ppmv CO; the VOC to CH 3 CN slope is in units of ppbv VOC per ppbv CH 3 CN; bold face denotes VOCs that are the best available BB markers. k OH = second-order reaction rate coefficients of the VOC + OH reaction at STP (× 10 −12 cm 3 molec −1 s −1 ) from the National Institute of Standards and Technology's Chemical Kinetics Database and the references therein (Manion et al., 2015). Ratio of k OH+VOC / k OH+CH 3 CN at STP.
Generation of Recombinant SARS‐CoV‐2 Using a Bacterial Artificial Chromosome
Abstract SARS‐CoV‐2, the causative agent of COVID‐19, has been responsible for a million deaths worldwide as of September 2020. At the time of this writing, there are no available US FDA−approved therapeutics for the treatment of SARS‐CoV‐2 infection. Here, we describe a detailed protocol to generate recombinant (r)SARS‐CoV‐2 using reverse‐genetics approaches based on the use of a bacterial artificial chromosome (BAC). This method will allow the production of mutant rSARS‐CoV‐2—which is necessary for understanding the function of viral proteins, viral pathogenesis and/or transmission, and interactions at the virus‐host interface—and attenuated SARS‐CoV‐2 to facilitate the discovery of effective countermeasures to control the ongoing SARS‐CoV‐2 pandemic. © 2020 Wiley Periodicals LLC. Basic Protocol: Generation of recombinant SARS‐CoV‐2 using a bacterial artificial chromosome Support Protocol: Validation and characterization of rSARS‐CoV‐2
Here, we report a detailed protocol for the rescue of rSARS-CoV-2 by transfection of a full-length cDNA clone of the SARS-CoV-2 USA-WA1/2020 strain (accession no. MN985325) using a bacterial artificial chromosome (BAC) in Vero E6 cells. The BAC is used to assemble the SARS-CoV-2 cDNA under the control of the cytomegalovirus (CMV) promoter, which allows the expression of the vRNA in the nucleus by cellular RNA polymerase II. This system has previously been shown to overcome the difficulties associated with undesirable expression of toxic viral proteins during the propagation of cDNA clones in bacteria (Almazán et al., 2006; Hotard et al., 2012; Pu et al., 2011), which permits the production of infectious virus without requiring an in vitro ligation and transcription step (Ávila-Pérez et al., 2020; Ávila-Pérez, Park, Nogales, Almazán, & Martínez-Sobrido, 2019). The rescued rSARS-CoV-2 is easily detectable by cytopathic effect (CPE) and immunofluorescence assay (IFA) with a monoclonal antibody (MAb) against the N protein. Furthermore, rSARS-CoV-2 displays similar viral fitness to the wild-type (WT) parental SARS-CoV-2, as determined by plaque assay and growth kinetics. The use of BAC-based reverse genetics to generate rSARS-CoV-2 represents an excellent tool to study fundamental viral processes, pathogenesis, and transmission, as well as for development of attenuated forms of the virus for implementation as live-attenuated vaccines (LAVs) for the prophylactic treatment of SARS-CoV-2 infection, in addition to the identification and characterization of antivirals for the therapeutic treatment of SARS-CoV-2 infection in humans.
GENERATION OF RECOMBINANT SARS-COV-2 USING A BAC
In this article, we describe a detailed protocol to generate rSARS-CoV-2, USA-WA1/2020 strain, using reverse genetics techniques based on the use of a BAC. The SARS-CoV-2 BAC can be stably propagated in bacteria and, upon transfection of Vero E6 cells, generates infectious rSARS-CoV-2. The rSARS-CoV-2 is easily detected in infected Vero E6 cells by cytopathic effect (CPE) and immunofluorescence, and displays plaque sizes and growth kinetics similar to its wild-type counterpart. Since the reverse genetics system is based on a single BAC plasmid, the viral genome can be directly manipulated to produce mutant rSARS-CoV-2, which is necessary to understand the function of specific viral proteins, mechanisms of viral pathogenesis and/or transmission, and interactions at the virus-host interface, as well as to generate attenuated forms of the virus, which will facilitate the discovery of effective countermeasures to control the ongoing SARS-CoV-2 pandemic.
Biosafety Recommendations
All of our experiments involving infectious wild-type or recombinant SARS-CoV-2 were conducted in appropriate Biosafety Level 3 (BSL-3) laboratories and approved by the Texas Biomed Institutional Biosafety Committee (IBC). Individuals working with SARS-CoV-2 had proper biosafety training before entering BSL-3 laboratories. Cell culture procedures involving no infectious virus were conducted at BSL-2 and moved to BSL-3 for viral infections. Plaque assays and viral titrations were conducted at BSL-2 after complete inactivation of the virus in BSL-3 with established inactivation procedures.
Luria-Bertani (LB) medium (see recipe) and LB agar plates (see recipe); chloramphenicol (Thermo Fisher Scientific)
Preparation of pBeloBAC11-SARS-CoV-2 for the rescue of rSARS-CoV-2
The generation and validation of the BAC containing the full-length genome of SARS-CoV-2 is detailed in Ye et al. (2020). Briefly, unique restriction sites were selected within the SARS-CoV-2 viral genome based on their distribution and spacing, and by their absence within the pBeloBAC11 plasmid. The BstBI and MluI restriction sites were removed from the S and M genes, respectively, via silent mutation to ensure that these restriction sites were unique and to serve as a molecular marker to distinguish the rescued rSARS-CoV-2 from the natural viral isolate. Five cDNA fragments covering the entire 29,930-bp viral genome of SARS-CoV-2 USA-WA1/2020 (accession no. MN985325) were synthesized de novo by Bio Basic (Ontario, Canada). Then, the fragments were sequentially assembled into the pBeloBAC11 plasmid (NEB) using unique restriction enzymes and standard molecular biology techniques. The full-length SARS-CoV-2 genome was cloned under the control of the cytomegalovirus (CMV) promoter and flanked at the 3′ end by the hepatitis delta virus (HDV) ribozyme (Rz) and the bovine growth hormone (bGH) polyadenylation signal.

Schematic representation of the pBAC plasmid to rescue rSARS-CoV-2: The full-length cDNA of the SARS-CoV-2 genome is flanked at the 5′ end by the cytomegalovirus (CMV) polymerase II-driven promoter and at the 3′ end by the hepatitis delta ribozyme (Rz) and bovine growth hormone (bGH) polyadenylation signal. The entire ∼30,815-bp construct was inserted into the pBeloBAC11 plasmid using PciI and HindIII restriction sites. The pBeloBAC11 plasmid is an E. coli vector commonly used to generate BACs because it can support the insertion of large DNA fragments as a single copy in cells. The pBeloBAC11 contains a chloramphenicol resistance (Cm R ) gene as a selective marker. Viral proteins are those previously described in Figure 1.
2. Transfer bacteria to a 10-ml culture tube containing 1 ml of SOC medium and incubate at 37°C for 1 hr with shaking at 200-250 rpm.
3. Spread 100 μl of the bacteria onto LB agar plates supplemented with 12.5 μg/ml of chloramphenicol and place in an incubator at 37°C for 16 hr.
4. After 16 hr incubation, transfer and grow a single bacterial colony transformed with pBeloBAC11-SARS-CoV-2 in 1 ml of LB medium supplemented with 12.5 μg/ml of chloramphenicol in a 37°C shaking incubator at 200-250 rpm for 16 hr.
We recommend growing three bacterial colonies individually.
5. Add 1 ml of bacterial culture to a 2-L flask containing 500 ml of LB liquid medium supplemented with 12.5 μg/ml of chloramphenicol, and grow the bacteria in a 37°C shaking incubator at 200-250 rpm for 16 hr, or until an OD 600 of 0.8 is reached.
Avoid vortexing or vigorously shaking the BAC, as this may cause DNA shearing due to its large size.
Rescue of rSARS-CoV-2
For the rescue of rSARS-CoV-2 from pBeloBAC11-SARS-CoV-2, we recommend three independent transfections for each recombinant virus to increase the probability of successful rescue. If one is attempting rescue of multiple recombinant viruses, scale up the
following steps accordingly. We also recommend transfecting the empty pBeloBAC11 plasmid as an internal control for these viral rescues. A schematic representation of the transfection protocol is detailed in Figure 3.
10. Prepare Opti-MEM-LPF2000-DNA plasmid mixture: Add the 250 μl Opti-MEM-LPF2000 mixture (step 8) to the 50 μl plasmid transfection mixture (step 9). Allow the mixture to incubate at room temperature for 20-30 min. Meanwhile, remove the cell culture medium from the Vero E6 cells and add 1 ml of transfection medium.
The cells should be 90% confluent (~1.2 × 10 6 cells/well) on the day of the transfection.
11. Slowly add the 300 μl Opti-MEM-LPF2000-DNA plasmid mixture (step 10) to the Vero E6 cells in a dropwise fashion. Gently rock the plate back and forth and place in an incubator at 37°C with 5% CO 2 for 16 hr.
a. Wash the cells with 1× PBS twice and then add 1 ml of 1× trypsin-EDTA solution.
b. When the cells detach, resuspend the cells in 10 ml of cell culture medium and transfer to a 15-ml conical tube.
c. Centrifuge the cells 10 min at 226 × g, room temperature.
d. Remove the medium and resuspend the cells in 12 ml of post-infection medium.
Transfer the cells to a T-75 flask.

(C) Plaque assay: Confluent monolayers of Vero E6 cells (6-well plate format, 1.2 × 10 6 cells/well, triplicates) were infected with SARS-CoV-2 or rSARS-CoV-2 at 37°C for 1 hr and overlaid with agar. After 72 hr in a 37°C incubator with 5% CO 2 , cells were fixed in 10% neutral buffered formalin for 16 hr before agar was removed. Next, cells were permeabilized with 0.5% Triton X-100 for 10 min and prepared for immunostaining as previously described using the anti-NP MAb (1C7) and vector kits (Vectastain ABC kit and DAB HRP substrate kit; Vector Laboratories). (D) Viral growth kinetics: Vero E6 cells (12-well plate format, 0.5 × 10 6 cells/well, triplicates) were infected (MOI of 0.01) with SARS-CoV-2 (black) or rSARS-CoV-2 (red) and placed in a 37°C incubator with 5% CO 2 for 4 days. At 12, 24, 48, 72, and 96 hr post-infection, viral titers in supernatants were determined by plaque assay (PFU/ml). Error bars indicate the standard deviations from three separate experiments. The dashed black line indicates the limit of detection (10 PFU/ml).
As additional internal controls, two additional T-75 flasks with confluent monolayers of Vero E6 cells are prepared and are either mock-infected or infected with WT SARS-CoV-2 at a multiplicity of infection (MOI) of 0.01 plaque-forming units (PFU) per cell.
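The volume of virus stock needed to reach a target MOI follows from MOI × number of cells ÷ stock titer. A minimal sketch is below; the cell number assumed for a confluent T-75 flask and the stock titer in the example are illustrative assumptions, not values from this protocol.

```python
def inoculum_volume_ml(moi, n_cells, titer_pfu_per_ml):
    """Volume of virus stock needed: (MOI * number of cells) / stock titer."""
    return moi * n_cells / titer_pfu_per_ml

# Example: MOI 0.01 on a confluent T-75 flask (~1e7 cells, assumed) with a
# 1e6 PFU/ml stock requires 0.1 ml of stock.
print(inoculum_volume_ml(0.01, 1.0e7, 1.0e6))  # -> 0.1 ml
```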
15. Incubate the T-75 flasks in a 37°C incubator with 5% CO 2 for 2 days. Check the cells daily for cytopathic effect (CPE) to assess the presence of rSARS-CoV-2 (Fig. 4A).
16. Once CPE is observed, place the T-75 flasks into ziploc bags and freeze at −80°C.
Thaw the cells at room temperature and transfer the tissue culture supernatant containing SARS-CoV-2 to a 50-ml conical tube. Centrifuge 10 min at 226 × g, 4°C, to pellet cell debris.
17. Collect the tissue culture supernatants and aliquot them into cryogenic tubes (0.5 ml/tube) before storage at −80°C.
VALIDATION AND CHARACTERIZATION OF RSARS-COV-2
In order to validate the successful rescue of rSARS-CoV-2, fresh Vero E6 cells are infected with tissue culture supernatants containing rSARS-CoV-2. The presence of rSARS-CoV-2 can be determined by immunofluorescence by expression of SARS-CoV-2 NP using a mouse anti-SARS NP monoclonal antibody (MAb) 1C7 (Fig. 4B). WT SARS-CoV-2 and mock infected cells should be included as internal controls in the IFA.
5. After viral adsorption, remove the viral inoculum, wash once with 1× PBS, and then add 1 ml of fresh post-infection medium to each well. Return the plates to the 37°C, 5% CO 2 incubator. Incubate the plates for 24 hr.
6. Remove the tissue culture supernatants from the infected Vero E6 cells and fix the cells by completely submerging plates in 10% neutral buffered formalin for 16 hr.
The process of fixation for 16 hr with 10% neutral buffered formalin will inactivate the virus and permit it to be safely removed from the BSL-3 laboratory to continue the characterization of the virus at BSL-2. Biocontainment procedures to remove samples from BSL-3 must still be strictly followed.
7. Gently rinse the plates with tap water to remove remaining 10% neutral buffered formalin. Do not place the cells directly under the faucet.
8. Tap off all remaining water from the plates and wash with 1× PBS three times.
9. Add 500 μl/well of permeabilization solution for 10 min at room temperature.
10. Remove the permeabilization solution and wash the cells three times with 1× PBS.
11. Add 500 μl/well of blocking solution for 1 hr at room temperature.
12. Remove the blocking solution and incubate the cells with 500 μl/well of mouse anti-SARS NP MAb 1C7 (1 μg/ml) diluted in blocking solution in a 37°C incubator for 1 hr.
If desired, other MAb or polyclonal antibodies (PAb) for the detection of SARS-CoV-2 may be used in place of 1C7.
13. After 1 hr, remove the solution and wash the cells three times with 1× PBS.
At this point, plates should be shielded from light with aluminum foil.
15. Remove the previous solution containing secondary antibody and DAPI and wash the cells three times with 1× PBS. Leave 1 ml of 1× PBS in each well. Samples can be stored at 4°C while protected from light with aluminum foil.
16. Analyze the samples under a fluorescence microscope.
17. In immunofluorescence (Fig. 4B), the detection of SARS-CoV-2 NP is shown in green (FITC), while the nucleus is stained in blue (DAPI). As expected, only Vero E6 cells treated with tissue culture supernatants containing WT SARS-CoV-2 or rSARS-CoV-2 were positive for FITC signal.
Plaque assay of rSARS-CoV-2 for titration and visualization of plaque sizes
In order to quantitatively titrate the amount of rSARS-CoV-2 from our rescue tissue culture supernatants and to assess their plaque phenotype, we recommend the use of plaque assays. WT SARS-CoV-2 and mock infected cells should be included as internal controls in the plaque assay.
20. Wash Vero E6 cells with 1× PBS three times before infection with 1 ml/well of the 10-fold serial dilutions of rSARS-CoV-2.
21. Incubate the plates at 37°C in a 5% CO 2 incubator for 1 hr. Gently rock the plates back and forth every 15 min.
32. Remove the medium in each well, wash with 1× PBS three times, and visualize the viral plaques using a DAB peroxidase kit in accordance with the manufacturer's instructions.
33. To determine the viral titer, count the number of PFU in each well, average the triplicates, then multiply by the dilution factor and divide by the volume used during infection. A representative plaque assay result is shown in Figure 4C.
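The arithmetic in step 33 can be written compactly as titer (PFU/ml) = mean plaque count × dilution factor ÷ inoculum volume. A minimal sketch with made-up plaque counts:

```python
def titer_pfu_per_ml(plaque_counts, dilution_factor, volume_ml):
    """PFU/ml = mean plaque count x dilution factor / inoculum volume (ml)."""
    mean_count = sum(plaque_counts) / len(plaque_counts)
    return mean_count * dilution_factor / volume_ml

# Example: triplicate wells of the 10^-5 dilution, each infected with 1 ml:
print(titer_pfu_per_ml([32, 35, 29], dilution_factor=1e5, volume_ml=1.0))
# -> 3.2e6 PFU/ml
```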
Growth kinetics
In order to assess viral fitness of the rSARS-CoV-2 and to compare it to that of its natural counterpart, we recommend doing growth kinetics (Fig. 4D).
47. Remove the 1× PBS and permeabilize the cells with permeabilization solution for 10 min, then wash the cells with 1× PBS three times.
48. After removing the 1× PBS, incubate the cells with blocking solution at room temperature for 1 hr.
49. Detect viral plaques using mouse MAb against SARS NP 1C7, Vectastain ABC kit, and DAB reagent, as described for plaque assays above.
50. To determine the viral titer, count the number of PFU in each well, average the triplicates, then multiply by the dilution factor and divide by the volume used during infection.
51.
A representative result of a viral growth kinetic assay is shown in Figure 4D.
LB agar plates
Prepare 1 L of LB medium (see recipe), and add 15 g of Bacto agar (BD, cat. no. 214010). Autoclave, cool to ∼50°C, add 12.5 μg/ml of chloramphenicol, and pour into 100-mm bacterial culture plates. Store up to 3 weeks at 4°C.
Vero E6 cells
Vero E6 cells (African green monkey kidney epithelial cells; ATCC, cat. no. CRL-1586) are used in all experimental procedures, including viral rescue, plaque assay, titration, and growth kinetics. This cell line was selected because it is readily infected with SARS-CoV-2 (Ogando et al., 2020), produces high viral titers, and has moderate transfection efficiency.
Background Information
Since the identification of SARS-CoV-2 in December 2019 in China, the virus has rapidly dispersed across the world and caused over 16 million confirmed cases and over 1 million deaths (Andersen et al., 2020;Wu et al., 2020). The World Health Organization (WHO) officially declared a pandemic of SARS-CoV-2 in March of 2020.
Currently, no FDA-approved prophylactic vaccines or therapeutic antivirals are available for the treatment of SARS-CoV-2 infection and associated COVID-19. Reverse genetics systems to rescue recombinant viruses represent an excellent experimental tool for the development of attenuated forms of SARS-CoV-2 for their implementation as live attenuated vaccines (LAVs) and for the rapid identification of drugs with antiviral activity against SARS-CoV-2. To date, two different approaches to generate rSARS-CoV-2 have been described (Thi Nhu Thao et al., 2020; Xie et al., 2020). However, both of these reverse genetics techniques to generate rSARS-CoV-2 are laborious and require in vitro ligation and transcription for successful viral rescue.
Here, we describe the experimental procedure to generate rSARS-CoV-2 using a BAC-based reverse genetics approach (Ye et al., 2020). After transfection of Vero E6 cells with the pBeloBAC11-SARS-CoV-2 plasmid, infectious rSARS-CoV-2 can be generated in the tissue culture supernatants of transfected cells. The rSARS-CoV-2 is easily detectable by determining CPE or by IFA using an anti-NP MAb. Importantly, the rSARS-CoV-2 generated using our BAC-based reverse genetics approach exhibits a plaque phenotype and growth kinetics similar to those of a WT SARS-CoV-2 natural isolate. Many problems, including the large size of the SARS-CoV-2 genome and the instability of some viral sequences in bacteria, can be circumvented by the use of the BAC platform, since pBeloBAC11 is maintained as a single copy per cell.
The development of a BAC-based reverse genetics system for SARS-CoV-2 is a powerful tool that enables researchers to study multiple aspects of SARS-CoV-2 biology and pathogenesis in vitro and in vivo. This approach permits easy manipulation of the SARS-CoV-2 genome for the generation of recombinant viruses containing mutations or deletions of viral proteins, allowing investigators to ask important questions regarding SARS-CoV-2 replication, virulence, and pathogen-host interactions in cultured cells or validated animal models of SARS-CoV-2 infection (Golden et al., 2020). Also, these reverse genetics approaches will allow the generation of attenuated forms of SARS-CoV-2 for their potential implementation as LAVs for the prevention of COVID-19 disease. Finally, with this reverse genetics system in place, rSARS-CoV-2 expressing reporter genes such as fluorescent or luciferase proteins could be rescued. Reporter-expressing rSARS-CoV-2 represents an excellent option for the rapid identification of compounds with antiviral activity using high-throughput settings that could be employed for the treatment of SARS-CoV-2 infections.
Critical Parameters
SARS-CoV-2 is a Biosafety Level 3 (BSL-3) pathogen, and all steps prior to inactivation must be done in the proper biocontainment laboratory. The success of the experiments described in this article is dependent on many
critical parameters. First, a large quantity of highly purified pBeloBAC11-SARS-CoV-2 is required, which can be generated as described in the Basic Protocol. The rescue of rSARS-CoV-2 requires transfection with Lipofectamine 2000 (LPF2000) in Opti-MEM containing no antibiotics. The inclusion of antibiotics may hinder transfection efficiency and the production of infectious rSARS-CoV-2. It is important to maintain healthy Vero E6 cells and check them for potential contamination (e.g., mycoplasma). Vero E6 cells should be passaged a day prior to transfection to ensure optimal cell conditions. The proper number of Vero E6 cells should also be used in each of the experiments. We recommend conducting all transfections in triplicate to ensure the successful rescue of rSARS-CoV-2. If attempting to rescue a mutant rSARS-CoV-2, the researcher must determine whether the mutation is deleterious to the virus. Mutations that are too detrimental to the virus will prevent successful viral rescue. Another consideration that must be addressed is the optimal temperature for viral rescues. For WT rSARS-CoV-2, the optimal temperature is 37°C; however, mutant rSARS-CoV-2 may require lower temperatures. In plaque assays, the DMEM/F-12 agar must be overlaid over the cells at approximately 39°C by maintaining the DMEM/F-12 mixture in a 39°C water bath before adding warm agar. We recommend sequencing the rescued rSARS-CoV-2 to identify potential mutations that could arise from passaging the virus in Vero E6 cells.
Troubleshooting
If viral rescue is not achieved, there are several possible causes. Vero E6 cells may be at a high passage number, which decreases transfection efficiency. A new batch of fresh Vero E6 cells with a low passage number should be used. We also recommend using Vero E6 cells that were passaged the day before transfection to improve transfection efficiency. A poor preparation of pBeloBAC11-SARS-CoV-2 might also result in low virus rescue efficiency. It is recommended to generate and verify the purity of the pBeloBAC11-SARS-CoV-2 preparation for successful viral rescue. Note that pBeloBAC11-SARS-CoV-2 is a low-copy-number plasmid, at approximately 1-2 copies per cell. Unexpected contamination can occur during transfection of the pBeloBAC11-SARS-CoV-2 without antibiotics. To resolve this, use fresh media and be sure all materials are sterile.
Understanding Results
CPE in transfected Vero E6 cells is an indication of successful viral rescue. However, full validation is conducted by immunofluorescence and/or plaque assays. In the immunofluorescence assay, the presence of SARS-CoV-2 in infected cells is detected with the 1C7 anti-NP monoclonal antibody and a secondary anti-mouse FITC-antibody using fluorescent microscopy. We recommend including mock-transfected cells or cells transfected with empty pBAC as control. In the plaque assay, the presence of virus is determined by immune staining using the same 1C7 anti-NP monoclonal antibody. Viral plaque sizes are dependent on the incubation time of the plaque assay. We recommend an incubation time of 48-96 hr post-infection for the plaque assays, to allow the formation of visible plaques.
Time Considerations
After transfection of Vero E6 cells with pBeloBAC11-SARS-CoV-2, yielding rSARS-CoV-2, we have been able to observe CPE as early as 48-72 hr post-transfection. However, we recommend maintaining the transfection for 3-4 days. The validation and characterization of rSARS-CoV-2 (IFA and/or plaque assays) require approximately 24 and 48-72 hr, respectively. An extra 16-hr overnight fixation of samples in 10% neutral buffered formalin for complete viral inactivation before removing the plates from BSL-3 conditions is also required. Immunostaining of the IFA and plaque assays requires an additional ∼5-6 hr. So, from the transfection of the Vero E6 cells until verification of virus, the total protocol requires approximately 7 days for completion.
Personal fear of their own death and determination of philosophy of life affects the breaking of bad news by internal medicine and palliative care clinicians
Introduction Patients with life-threatening disease should be informed about the diagnosis and prognosis of life-expectancy. Breaking bad news (BBN) by a clinician may be affected not only by their lack of communication skills but also their philosophy of life, beliefs, fear of their own death, their length of tenure, and their exposure to dying and death. Material and methods This questionnaire-based study aimed to investigate the impact of these factors on BBN in internal medicine practitioners (INT) versus palliative care physicians (PCP), and to detect the possible impediments to the proper communication process and the clinicians’ needs regarding their preparation for such a conversation. Results Thirty-eight PCPs and 64 INTs responded. Determination of philosophy of life, but not religiousness, positively correlated with the number of working years in palliative care. Two-thirds of the respondents declared fear of death, and it diminishes along with working years, especially in palliative care. For most physicians, BBN appeared difficult; however, less so for PCPs, persons with a high level of determination of philosophy of life, and men. The most frequent impediment was insufficient communication skills. Consistently, the respondents expressed the need for closing the gap in communication skills, especially by mentoring or training on communication. Conclusions Fear of death may restrain inexperienced medical professionals from BBN to patients and makes it difficult. Working in palliative care augments the determination of philosophy of life and diminishes fear of death. The higher the determination of philosophy of life, the more likely BBN is to be performed. Philosophy of life, spirituality, and communication skills should be addressed in postgraduate education.
Introduction
Communication with a patient about dying and his/her death is an integral part of palliative care. It is indispensable in many areas of medicine as an element of holistic care, although a palliative care professional is particularly exposed to end-of-life issues raised by the patient. Dr. Balfour Mount, one of the founders of palliative care, says that people who provide health care are so afraid of death themselves that fear compromises their ability to administer to the needs of terminally ill patients [1]. The problem is significant and may be captured by quoting Dr. Mount's words: "If we have unconscious angst provoked by the patient, it is little wonder that we are less well primed to see and assess what their needs are. Death-related anxiety is an important part of our psychic milieu. We are also uniquely unprepared as a society to cope with death. This is why the needs go unmet, and the suffering is unnecessary" [1].
Patients have a right to be honestly informed about their state of health. In the case of an unfavourable prognosis, a physician ought to inform a patient about it with tact and respect [2]. Breaking bad news to patients in an appropriate manner is not an optional skill but an essential part of professional practice [3]. Patients not only want to get information about the diagnosis but also the chances for a cure, the treatment and its alternatives, side effects of the therapy, and a realistic estimate of the expected duration of life [4,5].
A palliative care practitioner should have general communication skills, including the essentials of breaking bad news. The study of life would be incomplete without the study of death and the dying process, the grieving process, and the social attitudes toward death, e.g. in the form of workshops on the principles of thanatology and communication to make medical training holistic [6]. These issues should be an element of undergraduate educational programs, as a preparation to face the problems of the patient's suffering and death. Practical training sessions on breaking bad news on diagnosis and prognosis, and sufficient time dedicated to communication skills during undergraduate education are also important.
There are many factors that have an impact on the attitude of physicians towards breaking bad news, such as gender, empathy, parental death, parental anxiety, communication skills, socio-economic status, cultural conditions, spirituality, religious beliefs, the cohesion of philosophy of life (conformity between one's conduct and convictions), fatigue, burnout, and the attitude towards one's own death [7]. Inexperienced doctors are afflicted by stress resulting from breaking bad news more than experienced ones; however, poor communication is not strictly related to inexperience with breaking bad news [8,9]. Breaking bad news entails strong emotions such as anxiety, the feeling of accountability for the distress of the patient, or fear of his/her negative response, which may result in a reluctance to reveal unpleasant information [10]. Physicians may refrain from informing patients properly, even if patients express their strong will to know the diagnosis and the prognosis as much as possible [5,11].
Fear of death and a low level of determination (the strength of confidence) of philosophy of life may restrain medical professionals from breaking bad news to patients [12]. It may also result in physicians' inadequate reactions to death and dying, which will affect proper communication with the patient because the fear may be expressed in the physician's gestures and postures [13,14].
This study aimed to investigate the influence of philosophy of life and fear of death among physicians on the ability to effectively communicate with patients about dying and death. Additionally, a comparison between palliative care physicians and internal medicine practitioners was performed in order to find possible differences and specific needs of these professionals regarding preparation for breaking bad news.
Material and methods
The questionnaire-based survey was performed among palliative care physicians and internal medicine practitioner groups attending post-graduate courses on palliative care in Poland. The questionnaire (Table I) was designed by the authors and consisted of seven questions in Polish, of which the first four were validated (preliminary testing and test-retest reliability) in a previously performed study [12].
The questions were formulated based on the training module on communication for students before graduation and physicians. They referred to the aspects that had been raising controversies most often. A four-point ordinal scale was used for questions Q1 to Q5 (definitely not -0, rather not -1, rather yes -2, definitely yes -3). The participants took part in the survey voluntarily, upon informed consent, and were advised to omit the questions for which they would rather not give an answer or were uncertain. Verbatim answers for open-ended questions were collected and analysed for Q6 and Q7 as well.
Statistical analysis
In the descriptive analysis, the absolute frequency was used. The statistical dependence between non-parametric (ordinal numeric) measures was assessed using Spearman's rank correlation coefficient. Kruskal-Wallis and Mann-Whitney U tests were applied for the statistical analysis of non-parametric data. Frequency analysis was performed using the χ² test, V-test, and Fisher's exact test, as appropriate. P-values < 0.05 were considered statistically significant. Data were analysed using Statistica 13 (TIBCO Software Inc.).
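The original analyses were run in Statistica 13; as a rough illustration of the same classes of non-parametric tests, a minimal Python/SciPy sketch is shown below. It is not the authors' code, and the example answers, group sizes and frequency table are entirely hypothetical.

# Illustrative sketch only; all data below are hypothetical.
import numpy as np
from scipy import stats

# Ordinal answers (0-3, "definitely not" to "definitely yes") and years of work, per respondent
fear_of_death = np.array([3, 2, 2, 1, 0, 3, 1, 2])
working_years = np.array([1, 3, 5, 12, 20, 2, 15, 7])

# Spearman's rank correlation between two ordinal/numeric measures
rho, p_rho = stats.spearmanr(fear_of_death, working_years)

# Mann-Whitney U test comparing two independent groups (e.g. PCP vs. INT answers)
pcp_answers = np.array([1, 0, 1, 2])
int_answers = np.array([3, 2, 2, 3])
u, p_u = stats.mannwhitneyu(pcp_answers, int_answers, alternative='two-sided')

# Chi-squared test on a 2 x 2 frequency table (e.g. "deeply religious": yes/no by group)
table = np.array([[20, 18],   # PCP: yes, no
                  [18, 46]])  # INT: yes, no
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(rho, p_rho, u, p_u, chi2, p_chi2)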
Results
The results are presented in Table I.
Demography and work experience
In total 102 questionnaires were filled in by 38 palliative care physicians and 64 internal medicine practitioners. Although participation in the survey was voluntary, all the participants that had been asked expressed the will to participate in the survey. Sixty-two respondents (63%) were women, with no statistical difference between palliative care physicians and internal medicine practitioner groups. The average age was 34.6 (95% CI: 32.9-36.4) years. Palliative care physicians were statistically older (43.6 years; 95% CI: 40.8-46.4) than internal medicine practitioners (29.3 years; 95% CI: 28.7-29.9); p < 0.00001. As expected, they also had longer working tenure (17.8 years; 95% CI: 15.2-20.3) than their internal medicine colleagues (3.9 years; 95% CI: 3.4-4.4); p < 0.00001. The average tenure in palliative care of palliative care physicians was 5.2 years (95% CI: 3.6-6.9).
Determination (the strength of confidence) of philosophy of life and religiousness
Question "Q4. Are you a religious person?" tested both the level of determination of philosophy of life and the level of religiousness (Table I). 37.3% of respondents regarded themselves as deeply religious (Q4. answer D); significantly more palliative care physicians (53%) than internal medicine practitioners (28%) regarded themselves as deeply religious (p = 0.007). 49% had a high level of determination of declared (religious or secular) philosophy of life (Q4. answers B and D), with no statistical differences between the groups. The age and the number of working years did not correlate with determination of philosophy of life or religiousness. However, the level of determination of philosophy of life (but not religiousness) positively correlated with the number of working years in palliative care (r = 0.23, p < 0.05), particularly among the physicians working > 5 years in palliative care (Table II).
Fear of own death
64.7% of all respondents declared fear of death ("Q3. Do you fear your own death?"), with more internal medicine practitioners than palliative care physicians doing so (73.4% vs. 50.0%, respectively; p = 0.007). Strong fear of death was declared by 31.3% of internal medicine practitioners and 10.5% of palliative care physicians (p = 0.0288). There was also a negative correlation between fear of death and total number of working years (r = -0.26), working tenure > 5 years (r = -0.25), and working years in palliative care (r = -0.20); p < 0.05.
There was no correlation between fear of death, age, the level of determination of philosophy of life, and the level of religiousness.
91.2% of respondents answered positively to the question "Q2. Would you like to know if there were only 2 months left of your life?", with no statistically significant differences between studied groups. The will to know about one's own upcoming death was statistically stronger in physicians working for over 5 years.
95.1% of respondents positively answered the question: "Q1. Should a patient be informed about unfavourable prognosis and upcoming death?", and it did not correlate with any variables except for the will to know about one's own upcoming death (r = 0.33, p < 0.05).
Conversation with a patient as a problem
For 64.7% of respondents, conversation with a patient on dying and death was a problem (Q5), and 13.7% of them assessed it as a significant one. More internal medicine practitioners (71.9%) than palliative care physicians (52.6%) regarded such conversations as a difficulty; however, the difference was not statistically significant. For the practitioners with a high level of determination of philosophy of life, such a conversation was less difficult (r = -0.23, p < 0.05). Religiousness had no such impact, nor did the age and the number of working years.
There was also a positive correlation between the fear of death and the difficulty of having a conversation about dying and death (r = 0.21, p < 0.05). The conversation with a patient about dying and death appeared to be a problem statistically more often for women (73.4%) than for men (50%), p = 0.0166.
The most frequently indicated problem was lack of skills of effective communication with a patient (87.3%, including verbatim answers). 34.3% of respondents pointed out time consumption, 26.5% of physicians had moral doubts about providing such a conversation, and 24.5% revealed that it caused them personal suffering. Several respondents underlined the problem of the families' interference in the physician-patient communication process. No statistically significant differences were observed between the studied groups.
Physicians' needs regarding getting prepared for a conversation with a patient on dying and death
The most frequently expressed need regarding the preparation for breaking bad news was mentoring by an experienced practitioner (75.5% of respondents), followed by training on communication with a patient (68.6%) and the support of a psychologist (54.9%). Other types of help appeared to be less necessary according to respondents. Differences between the studied groups were not statistically significant. As expected, the need for training on communication skills was presented more often by the persons who recognised technical difficulties as a problem (p = 0.006).
Discussion
There were more women than men among the respondents, which is typical for the medical environment in Poland and an expected result. The palliative care physicians were much (14 years) older than internal medicine practitioners and had longer tenure. This is also typical because most physicians take up palliative medicine as a second specialisation after several years of practice. It is worth mentioning that the average tenure of the palliative care physicians in palliative care was 5 years, which is quite a long time of repeated exposure to dying and death, as these are much more present than in other types of care. Most of the internal medicine practitioners were residents, at an early stage of their practice.
The majority of respondents regarded themselves as religious and less than 20% as atheists. Statistically more palliative care physicians were deeply religious than internal medicine practitioners. Deep (internal) religiousness correlated with the age and, consistently, working years. Half of the respondents declared a high level of determination of philosophy of life (religious or secular). This group may be considered as encompassing persons with a high spiritual orientation. However, there might also be spiritually oriented physicians in the remaining group, but their philosophy of life was not determined. Determination of philosophy of life did not differ between palliative care physicians and internal medicine practitioner groups. It is interesting that the level of determination of philosophy of life (but not religiousness) positively correlated with the number of working years in palliative care, especially when it exceeded five. This finding might suggest that working as a palliative care physician results in the self-determination of the philosophy of life. The repeated and intensive confrontation with dying and death bears reflection on one's own mortality, the meaning of suffering, and the existential value of life. A recognition of one's own mortality may allow clinicians to discuss death more comfortably [15]. The findings of this study also suggest the positive effect of repeated exposure on dying and death on the personal spiritual growth of the physician (religious or secular). There is evidence that physicians' emotional reactions to a patient's death affect not only patient care but also the personal lives of physicians [16].
Two-thirds of the respondents declared fear of their own death, but the palliative care physicians significantly did so less often. One-third of internal medicine practitioners and only every tenth palliative care physician declared a strong fear of death. It seems that the intensive exposure to dying and death tempers the physician's own fear of death.
The longer the working years as a physician, the lower the level of fear of death, especially when the tenure exceeds 5 years. This suggests that the first 5 years of physician's practice in particular may cause distress resulting from the deaths of patients. In this period, psychological and spiritual support, as well as mentoring, may be pivotal for the proper formation of the physician's attitude. Again, the longer the work in palliative care, the lower the fear of death. It is worth underlining that neither religiousness nor the level of determination of philosophy of life had an impact on fear of death, unlike in the survey performed among students [12]. It seems that other factors modify fear of death along with tenure, and most obvious is the repeated exposure to dying and death.
Almost all respondents would like to be informed about their own impending death regardless of the fear of death, which is typical of a highly-educated population [17]. Their level of autonomy is high, and the need to decide on the last days of life is predominant. This attitude correlates with the level of conviction about the necessity of informing the patient about unfavourable prognosis and upcoming death.
According to almost all physicians, the patient should be informed of bad news. However, according to Elisabeth Kübler-Ross, patients usually know the bad news about impending death, regardless of whether they are informed or not by the physician. What is more, only the patients who have come to terms with their death are able to pass away calmly [18].
Patients expect high levels of both empathy and information quality, no matter how bad the news is [19]. This means the unfavourable information should be delivered but in a proper manner, with empathy and tactfulness, which may represent both a technical and emotional problem for the practitioner. For two-thirds of physicians in the study, a conversation with a patient about dying and death was a problem. However, fewer palliative care physicians than internal medicine practitioners regarded such conversations as difficult. It is also less difficult for the respondents with a high level of determination of philosophy of life, and men. Interestingly, there was no such impact of religiousness, the age of respondents, or overall working years. It was also confirmed that fear of death results in the difficulty of having a conversation about dying and death. A clinician should pay more effort to his/her own philosophy of life, and consider/meditate on the issues of dying, the meaning of life, suffering, and death to be prepared for proper communication with the patient.
The problem declared by the majority (87%) of respondents was lack of skills of effective communication with the patient. Time consumption, moral doubts about providing such conversation, or the personal suffering caused by it appeared to be less important. Some respondents pointed out the communication with a patient's family as a particular challenge. Physicians face a dilemma when families do not wish the patient to know about a cancer diagnosis, and this highlights the necessity of taking into consideration the social circumstances in healthcare [20].
Consistently, 69% of respondents expressed needs for training in communication skills (Q7.A). Three-fourths regarded mentoring by an experienced practitioner or training on communication with a patient as the most valuable means. This finding confirms the observation from other research on palliative care [21]. Around half of the physicians would expect a psychologist's support as well. Not only did palliative care physicians appear more spiritually oriented than internal medicine practitioners, but also more often they expressed an interest in attending training programs on spirituality in medicine.
The results of this study differ in many aspects from one performed on the population of medical students shortly before graduation [12]. The young individuals, when entering adult life and approaching the physician's profession, have their knowledge, skills, convictions, fears, prejudices, spirituality, and religious beliefs. Based on these internal resources the young professional acquires new experiences and develops skills of communication with the patient. According to Karger et al. (2015), longer and longitudinally integrated palliative care teaching may support an attitudinal change in a better way [22]. So, it was indicated in this study that the length of tenure in palliative care modifies the professional's determination of philosophy of life and lowers the level of fear of death. Palliative care physicians acquire skills of breaking bad news to a higher degree than do internal medicine practitioners, and they recognise the importance of spirituality and try to foster it. Also, poor performance in breaking bad news may not only be an effect of lack of skills but also of burnout and fatigue [8]. Unlike burnout and fatigue, a personal value system is not a subject that undergoes prevention. However, this study highlighted that it might be modified by tenure and exposure to dying and death.
The limitation of the study is its synthetic short form. We decided to keep it as short as possible to focus on communication, and not on the fear itself. Consistently, we did not introduce scales of fear of death, such as the Lester Attitude Toward Death Scale or the Collett-Lester Fear of Death Scale [23,24], because they have not been validated in the Polish language. They might have added some value regarding the elements of the fear.
There is also an issue with the statistical comparison of results obtained from students versus experienced professionals, although it seems obvious that there should be post-graduation courses on communication skills for professionals of all specialisations. For example, in students the fear of death may affect the ability to break bad news to the patient and the will to know about one's own impending death, whereas in this study no such impact was found [12]. Such an investigation is planned as a separate study.
Another limitation of the study is the question (Q2) of a respondent's own hypothetical death, which may be far from the truth in the case of a real threat of death. So, the answer to this question should be regarded as declared readiness to such information, and the results should be taken with caution.
In conclusion, personal fear of one's own death may restrain inexperienced medical professionals from breaking bad news to patients. The level of determination of philosophy of life, but not religiousness, impacts the tendency to inform patients of upcoming death. Working in palliative care seems to augment determination of a philosophy of life. The higher the determination, the more positive the attitude to break bad news is. The specialisation and the length of the physician's tenure have an impact on the level of the physician's personal fear of death. Practitioners with longer working years and palliative care physicians tend to be less afraid of their own death. A longer tenure in palliative care makes breaking bad news less difficult than in internal medicine, although it always remains a significant problem. Mentoring by an experienced professional would be the most appreciated by the practitioners, as well as training sessions on communication with the patient and the support of a psychologist. Personal attitude should be addressed within the curriculum of physician-patient communication education.
Cathodic-leading pulses are more effective than anodic-leading pulses in intracortical microstimulation of the auditory cortex
Objective. Intracortical microstimulation (ICMS) is widely used in neuroscientific research. Earlier work from our lab showed the possibility to combine ICMS with neuronal recordings on the same shank of multi-electrode arrays and consequently inside the same cortical column in vivo. The standard stimulus pulse shape for ICMS is a symmetric, biphasic current pulse. Here, we investigated the role of the leading-phase polarity (cathodic- versus anodic-leading) of such single ICMS pulses on the activation of the cortical network. Approach. Local field potentials (LFPs) and multi-unit responses were recorded in the primary auditory cortex (A1) of adult guinea pigs (n = 15) under ketamine/xylazine anesthesia using linear multi-electrode arrays. Physiological responses of A1 were recorded during acoustic stimulation and ICMS. For the ICMS, the leading-phase polarity, the stimulated electrode and the stimulation current were varied systematically on any one of the 16 electrodes while recording at the same time with the 15 remaining electrodes. Main results. Cathodic-leading ICMS consistently led to higher response amplitudes. In superficial cortical layers and for a given current amplitude, cathodic-leading and anodic-leading ICMS showed comparable activation patterns, while in deep layers only cathodic-leading ICMS reliably generated local neuronal activity. ICMS had a significantly smaller dynamic range than acoustic stimulation regardless of leading-phase polarity. Significance. The present study provides in vivo evidence for a differential neuronal activation mechanism of the different leading-phase polarities, with cathodic-leading stimulation being more effective, and suggests that the waveform of the stimulus should be considered systematically for cortical neuroprosthesis development.
Introduction
Electrical stimulation inside cortical tissue, i.e. intracortical microstimulation (ICMS), is a widely used stimulation method in neuroscientific research [1,2]. It is a powerful tool in establishing the direct contribution of neuronal activity to perception and cognition, i.e. in forming a causal connection to the cortical network [3]. Methods of causal inference, like electrical stimulation, pharmacological intervention or lesioning, have been used to associate a specific function to a particular part of the cortical network. ICMS is especially powerful in this regard because it can be used to reversibly activate small parts of the network with high temporal and spatial precision. The precision as well as the overall efficacy of the activation achieved with ICMS depends on the exact stimulation parameters, like the intensity of the stimulation (including current and duration of a single pulse and number and inter-pulse interval of pulse trains) [4][5][6][7], or the position of the stimulating electrode [6,[8][9][10][11][12].
It is important to note up front the conceptual differences between ICMS and other electrical brain stimulation applications like cortical surface stimulation, stimulation of peripheral nerves or deep brain stimulation (DBS). The usual size of the stimulation electrodes and amplitude of current pulses differ on the order of magnitudes between ICMS and DBS/ surface stimulation. A comprehensive analysis of the commonalities and differences between several electrical stimulation paradigms, however, is not in the scope of this report. The presented results are therefore specifically related to ICMS and not to other stimulation paradigms.
The standard pulse shape in ICMS is a biphasic, square-wave pulse [5,13,14]. While monophasic pulses allow for the most efficient and selective neuronal activation [15], they are most prone to damaging the tissue due to charge accumulation [13]. Therefore, in most cases electrical stimuli are designed with a second, charge-balancing phase of opposing polarity to the first phase, to 'recover' all reversible processes at the tissue-electrode interface and avoid damage to both the tissue and the stimulating electrode [13]. However, the second phase may also influence the neuronal response to the stimulus, as shown e.g. by including a brief interphase gap between the two stimulus phases [16].
The polarity of the first phase, the leading-phase polarity, is most often chosen to be cathodic (negative). This is based on theoretical considerations following the cable theory model of neuronal activation: applying current to neuronal tissue shifts the extracellular potential to a specific value proportional to the deposited charge. The influence of the extracellular potential on the neuronal activation is most often formulated using the 'activating function' proposed by Rattay [17]. Positive values of the activating function, the second spatial derivative of the extracellular potential in case of an unmyelinated fiber (figure 1(a)) and the second difference quotient of the extracellular potential for a myelinated axon, are thought to correspond to higher probabilities of neuronal excitation. Therefore, the activating function predicts the strongest neuronal excitation to occur during cathodic stimulation. However, the second spatial derivative of anodic stimulation also shows positive values, sometimes called 'virtual cathodes', but at some distance to the stimulating electrode ( figure 1(a)). Based on these mathematical models, anodic stimulation is consequently also expected to activate excitable membranes, but with reduced efficacy.
The efficacy of ICMS to activate neuronal populations is usually measured in terms of the elicited cortical response. Often this was achieved indirectly by measuring a behavioral response of an animal following ICMS [1,[18][19][20], sometimes more directly by recording neuronal activity including intra- or extracellular electrical potentials [7,11,21] or optical surrogates of neuronal activity like fluorescence [22], voltage sensitive dye activity [23,24] or intrinsic optical signals [9].
In previous work we have shown the possibility to measure the efficacy of ICMS in close proximity to the stimulation by combining ICMS with extracellular electrophysiological recordings on a single shank of a linear multi-electrode array in vivo (figure 1(a)) [6,25]. In the present study we used this method to determine the influence of the leading-phase polarity, i.e. cathodic- versus anodic-leading (figure 1(b)), of single, charge-balanced, biphasic ICMS pulses on the activation of the cortical network. We recorded the neuronal activity inside the primary auditory cortex of guinea pigs during single pulses of ICMS with both leading-phase polarities in two sets of experiments (supplementary table S1 (stacks.iop.org/JNE/16/036002/mmedia)). In the first group of experiments ('varying depth'), each of the 16 electrodes of the linear multi-electrode array was stimulated sequentially, with a fixed stimulation current of ~6 µA. In the second group of experiments ('varying current'), we sequentially stimulated electrodes 1 (most superficial electrode), 9, and 16 (deepest electrode) with both leading-phase polarities while varying stimulation current. All other stimulus pulse parameters were kept the same throughout all experiments (charge-balanced, biphasic square waves, 200 µs/phase, figure 1(b)). The neuronal activation data were collected during single-pulse ICMS from the remaining 15 electrode contacts.
Experimental animals
Data from a total of 15 experiments on Dunkin-Hartley guinea pigs (Crl:HA, Charles River Laboratories International Inc., France, 350-665 g, 13 male, 2 female) were used for the present experiments.
All experiments were conducted in accordance with EU Directive 2010/63/EU, the German law for the protection of animals, and were approved by the ethics committee of the government of the state of Lower Saxony, Germany (Lower Saxony state office for consumer protection and food safety, LAVES; approval no. 14/1548).
Experimental groups
The experiments were separated into two sets (supplementary table S1). The first set of animals (n = 7, 'Varying depth') was stimulated with both leading-phase polarities on all 16 electrodes of the first shank of a 2 × 16 linear multi-electrode array, with a fixed current. The second set of experiments (n = 8, 'Varying current') was stimulated with both leading-phase polarities and varying currents (~0.1-37 µA) on electrodes 1, 9, and 16 of a 1 × 16 linear multi-electrode array. One animal was part of both stimulation paradigms. The data presented here is a subset of a larger stimulation paradigm presented to each animal. Some data related to cathodic-leading ICMS and acoustic stimulation has been published previously in [6,25].
Anesthesia and preparations
The detailed procedures for animal handling and preparation can be found in previous publications [6,25] and are only shortly recapitulated herein.
All animals were pre-medicated with 0.5 g Bene-Bac ® (Albrecht GmbH, Germany) and 0.3 mg diazepam (Ratiopharm GmbH, Germany). Anesthesia was induced and maintained by intramuscular injections of a ketamine/xylazine mixture (50 mg kg −1 , 10% Ketamin, WDT, Germany, 10 mg kg −1 induction and 5 mg kg −1 maintenance, 2% Xylazin, Bernburg, Germany). The induction mixture was supplemented with atropine sulfate (0.1 mg kg −1 , B.Braun Melsungen AG, Germany). Analgesia was provided by subcutaneous injection of 0.05 ml carprofen (Rimadyl, Pfizer GmbH, Germany). To control the physiological status during the experiment the animals were artificially ventilated and core body temperature, ECG, expiratory CO 2 concentration and respiratory pressure were constantly monitored.
Electrophysiological recordings were made in a soundproof, electrically shielded chamber with the animal fixed inside a stereotaxic frame using a metal bolt cemented to the frontal bones with dental cement (Paladur; Heraeus Kulzer GmbH, Germany). In the second set of experiments a small craniotomy was made at the vertex to place an Ag/AgCl reference electrode on top of the dura. In all experiments a broad craniotomy exposed the right auditory cortex. The dura was resected and the opening was filled with silicone oil. An Ag/ AgCl electrode in the neck served as common ground.
Cortical recordings and electrical microstimulation was performed using an Alpha Omega electrophysiology system (AlphaLab SnR, AlphaOmega LTD, Israel). After mapping the surface potentials in response to acoustic click stimulation (40 dB above the auditory brainstem response threshold) recorded using either an Ag/AgCl ball electrode or a 16-channel surface grid electrode (Blackrock Microsystems, Germany) [26], either a single-shank, 16 channel multi-electrode array (A1 × 16-5mm-150-177) or a double-shank, 32 channel multi-electrode array (A2 × 16-10mm-150-500-177, NeuroNexus, USA) was inserted perpendicularly to the cortical surface at the position of highest response amplitude. Electrodes were inserted manually with stereotactic micromanipulators (precision < 10 µm) to a depth of 2.5 mm, ensuring a complete coverage of all six cortical layers. Electrode position was additionally controlled by functional parameters (current source density (CSD) profile) and verified by postexperimental histology like presented in previous publications [6,25]. In both types of penetrating arrays, the same contact size (177 µm 2 ), inter-electrode distance (150 µm) and number of contacts per shank (16) were used. Cortical potentials were recorded, bandpass filtered (1 Hz to 9 kHz), digitized (22 kHz sampling rate), and stored for offline analysis.
Acoustic and electric microstimulation protocol
Acoustic click stimulation (50 µs condensation click) was presented on the contralateral (left) ear using a loudspeaker (DT-48; Beyerdynamic GmbH & Co. KG, Germany) connected to the outer ear canal with a plastic cone. The amplitude of the click stimuli was calibrated offline using a 1/4″ condenser microphone (Type 4939 with a 2670 pre-amp and a Nexus conditioning amplifier, Brüel&Kjaer, Denmark) to dB SPL peak equivalent (dB SPL pe ) values, i.e. having the same peak amplitude as a 1 kHz tone burst of a given intensity measured in dB SPL.
ICMS was performed in a monopolar fashion against the electrode in the neck. Stimulation consisted of single biphasic, charge-balanced, square-wave current pulses of either leading phase-polarities (200 µs/phase, no inter-phase gap, figure 1(b)). Stimulation intensity varied between 0.1 and 37 µA, as measured over a 100 Ω resistor placed in series in the current return path (see supplementary material, figure S1). Electrical stimuli were repeated 32 times with a repetition rate of ~1 Hz and the first and last trial were dropped. Acoustic stimuli were repeated 30 times.
Electrical stimulation artefacts were blanked by linear interpolation (0-3 ms post-stimulation onset). In approx. 0.1% of all single recordings the stimulation artefact saturated the recording input range (486 of 468 480 single recordings). For the analysis of local-field potential (LFP) activity the data was lowpass-filtered using 2nd order, zero-phase Butterworth filtering (<150 Hz), and highpass-filtered (>300 Hz) for the analysis of multi-unit spiking activity (MUA). As in a previous publication [6], the CSD profile was calculated as the second spatial derivative of the LFP component as follows:
CSD_z^t = −(φ_{z−Δz}^t − 2 φ_z^t + φ_{z+Δz}^t) / (Δz)²
where φ_z^t designates the LFP at time t and depth z, and Δz represents the distance between two adjacent recording contacts (here 150 µm). The CSD was inverted, to analyze current sinks, i.e. putative neuronal activation, with positive values. A single CSD signal is called a 'CSD trace', while the combination of all single traces from a concurrent recording is called the 'CSD profile'. For the calculation of the CSD, edge electrodes of the LFP profile were doubled following Vaknin et al [27]. CSD traces were baseline corrected relative to the 50 ms pre-stimulus. For automatic amplitude quantification of current sinks, the current sources were removed from the data (set to 0). Single spike events were classified from the MUA data using a thresholding procedure [28].
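A minimal Python sketch of the preprocessing and CSD steps described above is given below for illustration. It is not the authors' analysis code (the original processing was performed in MATLAB), and the synthetic LFP array, sampling parameters and variable names are assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 22000                                        # sampling rate (Hz)
lfp = np.random.randn(16, int(0.3 * fs)) * 0.1    # hypothetical 16-channel, 300 ms LFP (mV)
t_ms = np.arange(lfp.shape[1]) / fs * 1000        # time axis in ms; stimulus onset at 50 ms
onset_ms = 50.0

# 1) Blank the stimulation artefact (0-3 ms post-onset) by linear interpolation
i0, i1 = np.searchsorted(t_ms, [onset_ms, onset_ms + 3.0])
for ch in range(lfp.shape[0]):
    lfp[ch, i0:i1] = np.linspace(lfp[ch, i0 - 1], lfp[ch, i1], i1 - i0)

# 2) Zero-phase, 2nd-order Butterworth lowpass (<150 Hz) for the LFP/CSD analysis
#    (a >300 Hz highpass would be used instead for the multi-unit analysis)
b, a = butter(2, 150 / (fs / 2), btype='low')
lfp_low = filtfilt(b, a, lfp, axis=1)

# 3) CSD profile: second spatial difference with Vaknin edge doubling; the sign is chosen
#    so that putative sinks (local LFP negativities across depth) come out positive,
#    i.e. the inverted, sinks-positive convention described in the text
dz_mm = 0.15
padded = np.vstack([lfp_low[0], lfp_low, lfp_low[-1]])
csd = (padded[:-2] - 2 * padded[1:-1] + padded[2:]) / dz_mm**2

# 4) Baseline correction (50 ms pre-stimulus) and removal of sources before sink quantification
pre = (t_ms >= 0) & (t_ms < onset_ms)
csd -= csd[:, pre].mean(axis=1, keepdims=True)
sinks = np.clip(csd, 0, None)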
Normalized CSD peak amplitudes in the time window up to 50 ms post-stimulation at varying stimulation current or varying acoustic click intensity were fit according to the following logistic function:
Amplitude_CSD(c) = R_Low + (R_High − R_Low) / (1 + exp(−(c − R_50%) / w))
where c denotes the stimulation intensity in dB SPL_pe or dB re. 0.1 µA, R_Low is the baseline response amplitude, R_High is the upper asymptote, w denotes the steepness of the function and R_50% is the point of inversion. We calculated the dynamic range as the difference between the stimulus intensities that satisfy the conditions Amplitude_CSD(c) = 0.75 and Amplitude_CSD(c) = 0.25, in order to facilitate comparisons to earlier studies [29][30][31].
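A short Python sketch of how such a fit and the 25%-75% dynamic-range estimate could be computed is shown below, assuming the four-parameter logistic form reconstructed above; it is illustrative only, with synthetic data and hypothetical parameter values.

import numpy as np
from scipy.optimize import curve_fit, brentq

def logistic(c, r_low, r_high, c50, w):
    # four-parameter logistic: baseline, upper asymptote, inflection point, steepness
    return r_low + (r_high - r_low) / (1.0 + np.exp(-(c - c50) / w))

# Hypothetical normalized CSD peak amplitudes versus stimulation current (dB re. 0.1 uA)
current_db = np.arange(0.0, 26.0, 2.0)
amplitude = logistic(current_db, 0.05, 1.0, 14.0, 2.0) + np.random.normal(0, 0.02, current_db.size)

popt, _ = curve_fit(logistic, current_db, amplitude, p0=[0.0, 1.0, np.median(current_db), 2.0])

# Dynamic range: difference between the intensities giving 25% and 75% of the normalized amplitude
c25 = brentq(lambda c: logistic(c, *popt) - 0.25, current_db.min(), current_db.max())
c75 = brentq(lambda c: logistic(c, *popt) - 0.75, current_db.min(), current_db.max())
print('dynamic range: %.2f dB' % (c75 - c25))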
For detailed analysis of the position of the evoked activity, the sink component of the CSD traces was normalized by the maximum sink amplitude of the CSD profile. When the normalized peak amplitude in the 50 ms post-stimulation window exceeded a value of 0.8, the response was considered supra-threshold and the respective CSD trace was coded with a 1, otherwise with 0. This resulted in logical activity vectors giving the position of the strongest neural activation. Concatenating 16 activity vectors, constructed for stimulation on each of the 16 electrodes of the multi-electrode array, resulted in logical activity matrices of size 16 × 16. The nominal difference between two activity matrices was calculated as:
difference(AM_1, AM_2) = norm_Frob(AM_1 − AM_2)
with AM denoting a single activity matrix and norm_Frob denoting the Frobenius norm of a matrix, calculated as:
norm_Frob(AM) = sqrt(Σ_{i=1}^{m} Σ_{j=1}^{n} x_{ij}²)
where x_ij are the single elements of an m × n activity matrix AM. This difference is 0 if two matrices are the same and increasingly larger with increasing distance between both matrices. All statistical analyses were performed in MATLAB. For statistical evaluation, the CSD amplitude was transformed to dB re. 1 mV mm −2 and the stimulation current to dB re. 1 µA. This allowed us to investigate the relative effects of the experimentally varied factors across animals, even though the absolute amplitude values are expected to differ across animals. All statistical tests (e.g. Wilcoxon signed-rank tests, Kruskal-Wallis tests, repeated measures ANOVAs) were performed two-sided. Post-hoc testing was performed using Tukey's honest significance difference (HSD). P-values < 0.05 were considered statistically significant.
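The activity-matrix construction and the nominal difference described above could be sketched as follows in Python (the original analysis was done in MATLAB; the array contents here are hypothetical):

import numpy as np

def activity_vector(sink_peak_profile, threshold=0.8):
    # normalize the 16-channel CSD sink peak profile to its maximum and threshold at 0.8
    normalized = sink_peak_profile / sink_peak_profile.max()
    return (normalized > threshold).astype(int)

def activity_matrix(sink_peak_profiles):
    # one activity vector per stimulated electrode -> 16 x 16 logical matrix
    return np.array([activity_vector(profile) for profile in sink_peak_profiles])

def nominal_difference(am1, am2):
    # Frobenius norm of the element-wise difference between two activity matrices
    return np.linalg.norm(am1 - am2, ord='fro')

# Hypothetical sink peak profiles: 16 stimulated electrodes x 16 recorded channels, per polarity
profiles_cathodic = np.abs(np.random.randn(16, 16))
profiles_anodic = np.abs(np.random.randn(16, 16))
am_cathodic = activity_matrix(profiles_cathodic)
am_anodic = activity_matrix(profiles_anodic)
print(nominal_difference(am_cathodic, am_anodic))   # 0 if identical, larger the more they differ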
Results
ICMS of medium strength (~6 µA, three times mean threshold value, see supplementary table S2 and figure S2) inside the auditory cortex evoked activity regardless of the leading phase polarity. Neuronal activity was visible as positive and negative deflections in the LFP profile in response to acoustic (figure 2(a)) as well as electric stimulation ( figure 2(b)). Consequently, after localizing the generators of the LFPs within the electrode shanks there was a characteristic pattern of sinks in the respective current-source density (CSD) profiles. The CSD is calculated from the LFP signals as the second spatial derivative and is a representation of the underlying (subthreshold) current sinks and sources generating the LFP signal. Current sinks (depicted upwards, with black fill in figure 2) are assumed to correspond to cell membrane depolarizations and consequently neuronal activation. Quantifications of the CSD are therefore restricted to the sink component of the CSD in the following.
CSD profiles in response to anodic-leading ICMS were specific to the depth of stimulation as previously reported for cathodic-leading stimulation [6]. Both leading-phase polarities resulted in CSD profiles which were visually very similar to each other. For example, a stimulation on electrode 1 (figure 2(a)) evoked a strong but short-duration source followed by a longer-duration sink in the topmost CSD trace in both the cathodic-leading as well as the anodic-leading ICMS condition.
CSD amplitudes in the experimental group 'Varying depth'
A repeated measures ANOVA with post-hoc Tukey's HSD comparisons (performed on data transformed to dB re. 1 mV mm −2 ) revealed significant amplitude differences due to the within-subject factor leading phase polarity (F(1,112) = 169.39, p < 0.0001) and the between-subject factor stimulated electrode (F(15,112) = 4.19, p < 0.0001), without interaction between these two factors (F(15,112) = 1.15, p = 0.3236). The post-hoc pairwise comparison of CSD amplitudes (in dB) of both leading-phase polarities for each stimulated electrode revealed statistically significant differences in 14 out of 16 stimulated electrodes (supplementary table S3). Highest mean sink amplitudes were reached when stimulating in supragranular layers, i.e. roughly electrodes 3-7. Averaged over all 16 stimulated electrodes the mean CSD sink peak amplitude was 0.56 ± 2.50 dB re. 1 mV mm −2 for cathodic-leading stimulation and −3.34 ± 2.62 dB re. 1 mV mm −2 for anodic-leading stimulation (Wilcoxon signed-rank test, z = −3.52, p = 0.0004). This amounts to a mean difference between cathodic-leading ICMS and anodic-leading ICMS of 3.9 dB. In total, cathodic-leading stimulation led to higher LFP peak-to-peak (figure 3(a)), as well as CSD peak amplitudes (figures 3(b) and (c)) than anodic-leading stimulation in the experimental group 'Varying depth'.
CSD amplitudes in the experimental group 'Varying current'
Both LFP amplitude [6] and CSD sink peak amplitude increased with stimulation current (figure 4(a)) in the animals of the experimental group 'Varying current'. Statistical evaluation of CSD amplitudes (in dB re. 1 mV mm −2 ) using a repeated measures ANOVA with Tukey's HSD post-hoc testing revealed significant influences of the between-subject factors stimulation current (current measured in dB re. 1 µA, F(14,315) = 198.10, p < 0.0001) and stimulated electrode (F(2,315) = 184.71, p < 0.0001), with a significant interaction between these factors (F(28,315) = 6.54, p < 0.0001). However, the within-subject factor leading phase polarity (F(1,315) = 0.65, p = 0.4198) was not significant, neither was the three-way interaction (F(28,315) = 0.79, p = 0.7684). The interaction between factors current and leading phase polarity also did not reach significance with a threshold of p = 0.05, but showed a trend (F(14,315) = 1.63, p = 0.0698). The most influential factor for the resulting CSD amplitude, judging by the fraction of each factor's sum of squares over the total sum of squares, was the stimulation current with R² = 0.7077, followed by the stimulated electrode, R² = 0.0943, and stimulus polarity with R² < 0.0001.
[Figure caption: (a) LFP peak-to-peak amplitude when stimulating at different depths with cathodic-leading (left) and anodic-leading (middle) single ICMS pulses, and the difference between both polarities (right). Note that the amplitude is always positive. The color code also represents the leading-phase polarity of the stimulus. In the difference signal, blue designates higher amplitude in the cathodic-leading condition and red higher amplitude in the anodic-leading condition, respectively. (b) Same as panel (a), but for the CSD sink peak amplitude. (c) CSD sink peak amplitudes for each electrode being stimulated with cathodic-leading and anodic-leading stimulation. (d) Sink peak latencies for each electrode being stimulated with cathodic-leading and anodic-leading stimulation. Amplitudes and latencies were averaged over all 15 recorded electrodes. The error bars show the variance over n = 8 animals, as the SEM. * p < 0.05.]
Comparison of dynamic range for ICMS and acoustic stimulation
While the CSD amplitude growth in response to different ICMS stimulation intensities was very uniform between the different animals (supplementary figure S3), the CSD amplitude growth in response to acoustic click stimulation of varying intensity showed more variation between the animals (figure 4(b)). In some animals, the CSD amplitude increased very strongly over a few decibels of stimulus level increase. The mean of all animals, on the other hand, showed a smooth incline over the tested click level range (figure 4(c)). The steepness of these amplitude growth functions was quantified between 25% and 75% of the normalized CSD amplitude (figure 4(d)). The grand mean of the dynamic range was 5.718 ± 1.436 dB (mean ± standard deviation) for cathodic-leading ICMS and 4.607 ± 0.818 dB for anodic-leading ICMS. The acoustic stimulation had a dynamic range of 19.972 ± 15.441 dB. A Kruskal-Wallis test over all seven groups (3 stimulated electrodes * 2 leading-phase polarities + acoustic stimulation) was statistically significant (d.f. = 6, χ² = 23.61, p = 0.0006). A post-hoc Tukey's HSD comparison between the groups revealed statistically significant differences between the median of the acoustic dynamic range and the anodic-leading ICMS on electrode 9 and anodic- and cathodic-leading ICMS on electrode 16.
Spatial pattern of activity in the experimental group 'Varying depth'
To determine whether the change in ICMS leading phase polarity also leads to a change in the spatial pattern of the excited neuronal activity, we generated 'activity matrices' out of the CSD sink amplitude data of the experimental group 'Varying depth'. For each stimulation condition we normalized the CSD sink profile according to the peak sink amplitude of the profile. Thresholding with a value of 0.8 resulted in logical 'activity vectors' (figure 5(a)). These activity vectors showed only the position (=depth) of strongest evoked neural activity irrespective of its absolute amplitude for each electrode stimulated. Combining the activity vectors for ICMS on all 16 electrodes in a single animal led to an activity matrix for each leading phase polarity (figure 5(b), supplementary figure S6). The average over all animals resulted in an activity matrix showing the probability for a sink in a given depth when stimulating the different electrodes (figure 5(c)). To quantify the change in the sink pattern when changing the leading phase polarity, the activity matrices for each ICMS polarity were subtracted from each other.
As discussed above, cathodic-leading as well as anodic-leading ICMS evoked the strongest activity in the electrodes close to the position of the stimulating electrode, i.e. local activity, visible in the activity matrix as a diagonal line above the stimulated electrodes (figures 5(b) and (c)). There was a dissociation between the effects of anodic- and cathodic-leading stimulation when stimulating electrodes in the lower half of the array, i.e. the deep layers. While cathodic-leading ICMS evoked local activity next to the stimulated electrode similar to that seen for the superficial electrodes, anodic-leading ICMS showed a more variable spatial pattern in the evoked activity.
The nominal difference between activity matrices was quantified by calculating normalized Frobenius norms, i.e. the Euclidean distance of 2D matrices (see material and methods, supplementary figure S7). This difference revealed that the similarity between ICMS of each polarity between animals is the same as the difference between the polarities in a single animal (Kruskal-Wallis test, d.f. = 2, χ² = 4.68, p = 0.0966; figure 6(a)). In other words, for both leading phase polarities the variation from one animal to the next was the same as the difference between cathodic-leading and anodic-leading in a single animal. Subsequently, the activity matrices were separated into four quadrants (figure 5(c)). Calculating the nominal difference of the first and the fourth quadrant between the animals revealed higher variance in the fourth quadrant in anodic-leading stimulation as well as in the difference matrices (figures 6(b) and (c)). This documents that anodic-leading stimulation fails to generate the characteristic spatial pattern of local activity if stimulating the lower half of the electrode array. Consequently, while cathodic-leading stimulation evoked the typical local activity regardless of stimulation depth, anodic-leading stimulation evoked such local activity only if stimulating superficial positions.
Multi-unit activity in the experimental group 'Varying depth'
We additionally analyzed the multi-unit activity response to ICMS. This has the advantage of documenting the supra-threshold response as opposed to the subthreshold activity revealed by the CSD. As with the CSD, the general shape of the spiking response to cathodic- and anodic-leading ICMS was similar (figures 7(a) and (b)). While the CSD response showed peak latencies of around 15 ms, most multi-unit activity was found in the first 10 ms after stimulation (figure 7(c)).
The number of spikes generated in the first 10 ms post-stimulus showed a similar dependence on stimulation depth as the CSD amplitude (figure 8(a)). The mean response amplitude for cathodic-leading stimulation was 219.24 ± 100.21 spikes (mean ± standard deviation, summed over 15 recording electrodes and 30 trials), and 132.28 ± 56.30 spikes for anodic-leading stimulation (figure 8(b)). Again this difference was statistically significant (Wilcoxon signed-rank test, p = 0.0156), and additionally confirmed the stronger cortical response to cathodic-leading ICMS than to anodic-leading ICMS.
Furthermore, we calculated a spike index as the ratio of spikes in the first ms post-artefact over the total number of spikes in the first 7 ms post-artefact (i.e. 3-10 ms post-stimulation, figure 8(c)). The mean of the spike index averaged over all 16 stimulation channels for cathodic-leading stimulation (0.41 ± 0.14) was statistically higher than the spike index for anodic-leading stimulation (0.28 ± 0.05, Wilcoxon signed-rank test, p = 0.0156, figure 8(d)). This demonstrates that the distribution of spikes is biased to the first ms in cathodic-leading stimulation, whereas in anodic-leading stimulation the multi-unit activity is more evenly distributed in time.
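As a small illustration of the spike-index computation described above, the following Python fragment uses hypothetical spike times (in ms relative to stimulation onset, with the 0-3 ms artefact window already excluded):

import numpy as np

spike_times_ms = np.array([3.2, 3.5, 3.8, 4.6, 5.1, 6.9, 8.4, 9.2])   # hypothetical

first_ms = np.sum((spike_times_ms >= 3.0) & (spike_times_ms < 4.0))    # first ms post-artefact
total = np.sum((spike_times_ms >= 3.0) & (spike_times_ms < 10.0))      # 3-10 ms post-stimulation
spike_index = first_ms / total if total else np.nan
print(spike_index)   # closer to 1 when spiking is concentrated in the first ms post-artefact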
Discussion
Both cathodic- and anodic-leading biphasic ICMS pulses evoked local cortical activity next to the stimulated electrode, albeit with a substantially reduced dynamic range compared to acoustic stimulation. The response to cathodic-leading stimuli was significantly larger than the response to anodic-leading stimuli, regardless of the analyzed response modality (LFP, CSD, or MUA). This difference in effectiveness of neuronal activation led to a differential effect when stimulating with reduced current in the deep cortical layers, where cathodic-leading stimuli still evoked a local neuronal response, while anodic-leading ICMS failed to do so. In consequence, the present study provides in vivo evidence for a higher effectiveness of cathodic-leading symmetric pulses in ICMS.
In vivo evidence for differential activation mechanisms between cathodic- and anodic-leading stimulation
Simulations [32][33][34] and electrophysiological studies [35] consistently showed differential effects of the stimulus polarity on neuronal activation. The simplified prediction of the 'activation function' of extracellular stimulation, the second spatial derivative of the extracellular potential, is that cathodic stimuli have a higher probability to activate neurons ( figure 1(a)). From this perspective, our finding of an increased response amplitude to cathodic-leading stimuli is not surprising. However, the effect of stimulus polarity on neuronal activation is actually more complex than this simple prediction. Simulations using the activation function found complex relationships between stimulus polarity and electrode-neuron distance, with some spatial arrangements (e.g. electrode close to the axon initial segment) leading to a preference for cathodic stimuli and other arrangements (e.g. electrode close to the cell body) to a preference for anodic stimuli [15,[32][33][34], or a differential effect according to the neuron orientation relative to the electrode [36].
Assuming that anodic stimuli preferentially activate local cell bodies and cathodic stimuli preferentially local axons, it might be concluded that cathodic stimulation leads to smaller action potential initiation latencies. Our present data support such a notion by showing more short-latency multi-unit activity in cathodic-leading than in anodic-leading stimulation. However, there is a substantial limitation in the present approach. The artefact blanking limited our data analysis to responses 3 ms after stimulus onset and consequently impeded conclusions about the direct effect of stimulation. Furthermore, it is generally assumed that the bulk of the neuronal activation seen after ICMS originates from trans-synaptically activated neurons [37,38]. Consistently with these suggestions, we consider the spiking data presented here to originate mainly from trans-synaptically activated neurons.
There are also several limitations to the concept of the activation function itself (see for example [39,40]). By definition, it is only concerned with the geometry of the extracellular potential and ignores the characteristics of the physiology of the neurons inside the potential gradient (e.g. strengthduration relationships [35]), as well as ignoring the effects of time. Therefore, it cannot be used straightforwardly to predict the neuronal activation when adding a second phase to the electrical stimulus, i.e. applying biphasic stimuli. Indeed, simulations suggest that (symmetric) biphasic stimuli of either leading-phase polarity (as were used in this study) may not have the same selectivity as monophasic pulses [15,36]. This highlights the fact that during biphasic stimulation, both phases of the stimulus might contribute to the resulting neuronal activation. And as a consequence, differential effects between stimuli of different polarity are actually not to be expected for biphasic stimuli, in contrast to monophasic stimulation.
Asymmetric biphasic pulses, with one phase reduced in amplitude but proportionally extended in time to retain charge-balancing, showed the same selectivity as monophasic stimulation [15]. In the present study we did not test other pulse shapes than the symmetric, biphasic, square wave pulses, and therefore cannot conclude on possible polarity differences in eliciting local neuronal activation with asymmetric or mono-/triphasic pulses. However, a study testing behavioral detection thresholds of these asymmetric pulse shapes in the auditory cortex found no difference between different amplitude/duration ratios of the charge-balancing phase [41]. Instead the authors reported that detection thresholds depended solely on the magnitude of the cathodic phase. Again, as this does not change with changing the leading-phase polarity in the symmetric pulses used here, we did not necessarily expect to find the differential neuronal activation shown here in figure 3. It has to be noted though that in the previous study symmetric biphasic stimuli were reported to have higher detection thresholds than the other stimuli [41].
In behavioral experiments, anodic-leading stimulation is generally harder to detect (i.e. has higher current thresholds) than cathodic-leading stimulation, as seen in the auditory cortex of rats [41], monkey V1 [4] and human V1 [42]. Given our results presented here, this is a direct consequence of the lower amount of cortical activity evoked by anodic-leading stimulation. There are, however, contrasting reports about the interaction between leading-phase polarity and the depth of stimulation below the cortical surface. While saccade initiation in monkey V1 showed lower anodic-leading thresholds in superficial layers and higher anodic-leading thresholds in deep cortical layers than cathodic-leading pulses [43], it was the other way around for movement induction in the motor cortex of the rat (higher anodic-leading thresholds superficially and lower anodic-leading thresholds in deep layers) [44]. This might indicate that such observations are specific to the target structure.
Furthermore, the reduced response amplitude generated by anodic-leading stimulation could be considered further evidence for the preferential activation of fibers of passage over cell bodies. It is generally assumed that at any given depth below the cortical surface there are relatively more fibers of passage than cell bodies in the vicinity of the electrode. This should result in a bias towards the effectiveness of cathodic-leading stimulation, regardless of the stimulation depth. This is what we have found in the present data, and it corresponds well to the non-uniform, sparse cell activation seen with ICMS [10,22].
In deep layers, anodic-leading ICMS failed to raise the cortical response amplitude above our detection threshold, whereas for cathodic-leading stimuli such a layer-dependent effect was not observed (figure 9). It is unlikely that the mechanism of stimulation changes with stimulation depth. If we assume that anodic-leading stimulation preferentially activates local cell bodies, then the cell bodies stimulated in the deep layers are most likely not significantly driving other neurons in the vicinity, i.e. failing to generate a local response.
Cathodic-leading stimulation, on the other hand, activates axons from the local neurons, but also from other neurons projecting onto the cells in the vicinity of the electrode contact. These latter axons might explain why cathodic-leading stimulation activates deep layers, but anodic-leading stimulation fails to do so.
The absence of a statistically significant influence of the leading-phase polarity in the experimental group 'Varying current' is likely related to the low stimulation currents used (close to stimulation thresholds). It is reasonable to assume that the influence of the leading-phase polarity increases with increasing stimulation current (figure 4(a)). Statistically, this should have led to a significant interaction between the stimulation current and the leading-phase polarity. The data presented here showed a trend towards this interaction (p ≈ 0.07). However, due to stimulation safety concerns, we limited the maximum amount of current applied per pulse.
Implications for the development of neuroprosthetic devices
The observed commonalities and differences between cathodic- and anodic-leading ICMS pulses have several implications when considering the use of ICMS in neuroprosthetic devices, i.e. stimulating cortical implants. The most robust observation of the present study was that cathodic-leading stimulation led to higher response amplitudes than anodic-leading ICMS. This suggests the use of cathodic-leading stimulation in potential clinical applications of ICMS. Achieving the same amount of cortical response with lower stimulation intensities is beneficial not only in terms of power consumption of the stimulation, but also in regard to the safety of the stimulation, both for the cortical tissue and the stimulation electrodes [13]. However, in a clinical setting there might be other factors to consider besides absolute thresholds, for example available therapeutic windows (difference between effect thresholds and side-effect thresholds), which might also be influenced by stimulus polarity.
The dynamic range of the electrical stimulation in the auditory cortex was found to be significantly smaller than the dynamic range of physiological, acoustic stimulation. But while the inter-experimental variation in ICMS was relatively low, acoustic stimulation showed a high variability of the dynamic range from one penetration to the next. This most likely reflects a difference in the amount of neuronal recruitment, which in the case of artificial electrical stimulation depends mostly on the extent of the supra-threshold extracellular potential due to a specific stimulation current. In the case of acoustic stimulation, on the other hand, the neuronal recruitment is dependent on several factors besides the stimulation intensity, like the specific electrochemical properties of the neurons or the cortical state at the time of stimulation [45,46]. The dynamic range seen here for electrical stimulation of the auditory cortex is well within the range of values found for electrical stimulation in several other stations of the auditory pathway (table 1). It has to be kept in mind, however, that electrical stimulation does not always lead to cortical response saturation before a safety limit is reached with the stimulation current. Exact dynamic ranges are therefore hard to compare and the definitions might vary between studies. It is important to note that deaf patients can achieve useful speech recognition scores with cochlear implants also with a relatively narrow dynamic range [47].
Conclusion
Supra-threshold ICMS inside the auditory cortex using symmetrical, charge-balanced pulses evoked neuronal activity regardless of leading-phase polarity. However, cathodic-leading pulses were more effective than anodic-leading stimulation, particularly in deep cortical layers. Therefore, with the exception of special research conditions, cathodic-leading stimuli are preferable for ICMS when using symmetric biphasic pulses.

Figure 9. Schema of the interpretation of the differential effect of leading-phase polarity according to stimulation depth. Stimulating with a low current in the upper half of the cortical column (left, supragranular stimulation) activated the local neuronal population (depicted in green). Because of the low current amplitude, activation is restricted to the local population and there was no columnar response (compare to [6]). Stimulating with a low current in the lower half of the cortical column on the other hand led to a differential effect according to leading-phase polarity (right, infragranular stimulation). While cathodic-leading stimuli activated the local cell population, anodic-leading stimuli failed to do so.
|
2019-03-11T17:19:35.257Z
|
2019-02-21T00:00:00.000
|
{
"year": 2019,
"sha1": "3297b409e6ef48c5954f8768f44eea47a8ed1864",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1741-2552/ab0944",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "f706c5395dedc5a1cdbd2a060d2d9111dca58555",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine",
"Physics"
]
}
|
86593637
|
pes2o/s2orc
|
v3-fos-license
|
Model-Based Methods to Translate Adolescent Medicine Trials Network for HIV/AIDS Interventions Findings Into Policy Recommendations: Rationale and Protocol for a Modeling Core (ATN 161)
Background: The United States Centers for Disease Control and Prevention estimates that approximately 60,000 US youth are living with HIV. US youth living with HIV (YLWH) have poorer outcomes compared with adults, including lower rates of diagnosis, engagement, retention, and virologic suppression. With Adolescent Medicine Trials Network for HIV/AIDS Interventions (ATN) support, new trials of youth-centered interventions to improve retention in care and medication adherence among YLWH are underway. Objective: This study aimed to use a computer simulation model, the Cost-Effectiveness of Preventing AIDS Complications (CEPAC)-Adolescent Model, to evaluate selected ongoing and forthcoming ATN interventions to improve viral load suppression among YLWH and to define the benchmarks for uptake, effectiveness, durability of effect, and cost that will make these interventions clinically beneficial and cost-effective. Methods: This protocol, ATN 161, establishes the ATN Modeling Core. The Modeling Core leverages extensive data—already collected by successfully completed National Institutes of Health–supported studies—to develop novel approaches for modeling critical components of HIV disease and care in YLWH. As new data emerge from ongoing ATN trials during the award period about the effectiveness of novel interventions, the CEPAC-Adolescent simulation model will serve as a flexible tool to project their long-term clinical impact and cost-effectiveness. The Modeling Core will derive model input parameters and create a model structure that reflects key aspects of HIV acquisition, progression, and treatment in YLWH. The ATN Modeling Core Steering Committee, with guidance from ATN leadership and scientific experts, will select and prioritize specific model-based analyses as well as provide feedback on derivation of model input parameters and model assumptions. Project-specific teams will help frame research questions for model-based analyses as well as provide feedback regarding project-specific inputs, results, sensitivity analyses, and policy conclusions. Results: This project was funded as of September 2017. Conclusions: The ATN Modeling Core will provide critical information to guide the scale-up of ATN interventions and the translation of ATN data into policy recommendations for YLWH in the United States. (JMIR
Introduction

By projecting long-term outcomes and cost-effectiveness, computer-based health policy models can add substantial value to clinical trials and observational studies [4]. Projecting such long-term estimates is particularly important for studies among YLWH, for whom the health effects of poor virologic control may not manifest for years or decades [5]. Models can also combine data from multiple sources and compare a wide range of possible interventions, leveraging the extensive data collected within ATN and other studies into timely guideline and policy recommendations [6]. The Cost-effectiveness of Preventing AIDS Complications (CEPAC) computer simulation models [7] of HIV infection in infants, children, and adults have been used to inform health policy related to HIV prevention [8,9], testing [10][11][12], and care [13][14][15][16][17][18], both in the United States and internationally. CEPAC model-based work has been cited in national HIV care guidelines for the United States, Brazil, Chile, Mexico, France, and Colombia, among others, as well as in the World Health Organization (WHO) guidelines [19][20][21][22][23]. For example, a CEPAC-Pediatrics model-based analysis projected that use of lopinavir/ritonavir in children younger than 3 years as first-line antiretroviral therapy (ART) led to longer life expectancy and was cost-saving compared with first-line use of nevirapine; this analysis helped inform the WHO's recommendation in 2013 of a lopinavir/ritonavir-based regimen for first-line ART in that age group [16,24]. To date, few HIV modeling or cost-effectiveness studies have been conducted among youth; most have focused on HIV screening and prevention [25][26][27][28][29][30][31][32]. Previous work has not incorporated age- and time-varying changes in adolescent and young adult health-related behavior among YLWH.
Background
Approximately 60,000 youth are living with HIV in the United States. Youth living with HIV (YLWH) have poorer outcomes than adults living with HIV, including lower rates of diagnosis, engagement, retention, and virologic suppression [1,2]. Established in 2001 by the Maternal and Pediatric Infectious Disease Branch of the Eunice Kennedy Shriver National Institutes of Child Health and Development, the Adolescent Medicine Trials Network for HIV/AIDS Interventions (ATN) has conducted rigorous evaluations of interventions to improve medication adherence, retention in care, and viral load (VL) suppression among YLWH [3]. ATN
Objectives
This protocol will leverage existing data from successfully completed NIH-supported studies (Table 1) to inform the development of novel approaches for modeling critical components of HIV disease and care in YLWH. As ATN investigators study new interventions to improve VL suppression among YLWH, the CEPAC-Adolescent computer simulation model will be developed to define the benchmarks for uptake, effectiveness, durability of effect, and cost that will make these interventions clinically beneficial and cost-effective. In addition, as new data emerge from ongoing ATN trials about the effectiveness of these interventions, the computer simulation model will serve as a flexible tool to project the long-term clinical impact and cost-effectiveness of these interventions. This project will, therefore, provide critical information to guide the scale-up of ATN interventions and the translation of ATN data into policy recommendations for YLWH in the United States.
Methods

Adolescent Medicine Trials Network for HIV/AIDS Interventions Structure and Establishment of the Modeling Core

The ATN structure consists of 3 ATN research program projects (U19s) and a Coordinating Center (U24; Figure 1). Each of the 3 ATN research program projects (U19) has a well-defined research focus supported by core infrastructures as well as participant recruitment and enrollment capacity. These research program projects are as follows:

• Comprehensive Adolescent Research and Engagement Studies [54], a comprehensive community-based project that aims to optimize the HIV prevention and treatment continuum for at-risk and acutely infected youth as well as youth with established HIV infection.

• iTech [55], a research program that aims to impact the HIV epidemic by conducting innovative, interdisciplinary research using technology-based interventions across the HIV prevention and care continuum for adolescents and young adults.

• Scale it Up [56], a research program that aims to assess and enhance the real-world effectiveness, implementation, and scalability of theoretically based and developmentally tailored interventions focused on improving HIV treatment and prevention self-management for youth.

Each research program project (U19) supports several individual protocols [54][55][56]. The ATN Coordinating Center (U24) is located at the University of North Carolina at Chapel Hill. The Coordinating Center provides support, coordination, and operational infrastructure to ATN. The Coordinating Center also supports several stand-alone protocols such as "A Triggered, Escalating, Real-Time Adherence Intervention," which uses electronic-dose monitoring to inform an adherence intervention for youth without virologic suppression. The Coordinating Center also supports the Modeling Core.

The Modeling Core has established a Modeling Core Steering Committee that will meet regularly and include Modeling Core investigators, at least 1 principal investigator or liaison from each of the 3 ATN research program projects (U19s) and the ATN Coordinating Center, protocol chairs or representatives from stand-alone trials for which modeling is planned, and additional interested ATN investigators. ATN investigators in the Modeling Core Steering Committee will provide feedback on the derivation of data inputs, design of new model structure within the CEPAC-Adolescent model, and selection of policy analyses to perform. Once specific ATN studies are identified as potential candidates for modeling analyses, the Modeling Core investigators will work with relevant protocol teams to ensure that data likely to be useful for later modeling are collected prospectively in each study.

After the Modeling Core Steering Committee has determined which policy analyses will be performed, project teams will be assembled for each analysis. Each project team will include Modeling Core investigators and the protocol chair or a representative from the trial being analyzed. Project team members have expertise in multiple relevant areas including epidemiology, health services research, economics, intervention science, implementation science, behavioral science, clinical trials development, and the clinical care of YLWH. Project teams will help develop the research question, identify any additional structural simulation model modifications, provide input on needed data parameters (eg, help identify potential issues of population mismatch for parameters derived from different sources), and review preliminary model results (eg, for face validity and identifying key sensitivity analyses). Abstracts, presentations, and manuscripts presenting model results will be reviewed in accordance with the ATN publications policy.

Textbox 1. Categories of key outcomes (specific events within each listed category will also be analyzed separately).
Categories of key outcomes:
• Centers for Disease Control and Prevention (CDC) HIV clinical diagnoses (CDC-A, B, and C)
• Severe or life-threatening, non-HIV-related diagnoses (eg, pneumococcal events)
• Chronic non-HIV-related diagnoses (eg, cardiac and renal disease and malignancy)
• Medication toxicity (division of AIDS ≥Grade 2)
• Psychiatric events
• Sexually transmitted infections
• Pregnancy or pregnancy outcomes
• Death
Design for Objective 1
The design for Objective 1 was to determine rates of key clinical events for YLWH engaged in care stratified by age, CD4 cell count, and ARV and VL status in completed and ongoing NIH-supported studies.
Incidence rates of key clinical events (Textbox 1) will be described based on current age, sex, current CD4, current ARV use, and VL as well as mode of HIV acquisition (perinatally HIV-infected youth [PHIVY] or nonperinatally HIV-infected youth [NPHIVY]) [58]. These data will permit assigning risks of clinical events to simulated patients in the simulation model developed in Objective 2.
Population and Data Sources
Formal requests were approved to analyze data from 4800 YLWH in completed NIH-supported studies after appropriate data use agreement and network approvals were secured (Table 1). These studies include observational studies, nonrandomized interventions, and a randomized trial. All include youth aged 13 to 24 years at study entry. The primary focus of each study ranged widely, from determining the safety and efficacy of ARV medications to evaluating clinical, immunological, and psychiatric outcomes. All included a minimum set of key outcomes needed for this analysis, and all clinical events were recorded using comparable diagnostic codes. Protocols and data collection forms from all studies will be reviewed to understand how data can be harmonized among studies, as has been done in previous analyses [58]. Resource use input parameters will be derived from adolescent intervention or trial-specific data where available, as in previous work [29,59]. New data emerging from ongoing studies will be integrated into the model.
Data Management
Data analysis concept sheets and data use agreements have been approved for these analyses by individual networks as well as through the Eunice Kennedy Shriver National Institute of Child Health and Development Data and Specimen Hub repository [60]. Data will be cleaned (when applicable), harmonized, and safely stored at the Center for Biostatistics in AIDS Research at the Harvard TH Chan School of Public Health, which is compliant with federal regulations governing information security.
Objective 1
Objective 1 was to determine rates of key clinical events for YLWH engaged in care stratified by age, CD4 cell count, and antiretroviral (ARV) and VL status in completed and ongoing NIH-supported studies. Using ClinicalTrials.gov [57], studies were reviewed that included YLWH aged 13 to 24 years at the US sites, and collected data related to CD4 cell count, VL, and ART regimens as well as clinical event data during the era of modern ART. Selected studies were conducted within the 2 largest NIH-sponsored national networks supporting clinical trials and observational studies in youth affected by HIV-the ATN and the IMPAACT Network. Incidence rates of opportunistic infections; HIV and non-HIV events (Textbox 1); and mortality based on age, sex, patterns of CD4 count and VL, and ARV use among YLWH will be evaluated in completed and ongoing NIH-sponsored studies (Table 1) in accordance with individual data use agreements.
Objective 2
Objective 2 was to develop the CEPAC-Adolescent model-a simulation model to reflect unique characteristics of YLWH.
The CEPAC-Adolescent simulation model will be developed to reflect the unique characteristics of YLWH. The foundational inputs of the expanded model will be populated with estimates from completed and ongoing NIH-supported studies (Table 1) derived in Objective 1 as well as other published sources. As new data emerge from the ATN or other sources related to clinical events, resource utilization, and specific interventions, model inputs will be updated.
Objective 3
Objective 3 was to use the simulation model to project the clinical impact, cost, and cost-effectiveness of selected interventions evaluated in ATN.
The Modeling Core Steering Committee will work with the ATN Executive Committee and the ATN External Scientific Panel to prioritize ATN studies for model-based analyses, based on data availability and the most relevant questions in health care policy for YLWH each year. The Modeling Core Steering Committee functions will include activities such as providing feedback on the costing perspectives to be used, the primary outcome to be modeled, secondary outcome measures to be included, and the types of economic estimates to be derived from the model.
Design for Objective 2
The design for Objective 2 was to develop a simulation model to reflect the unique characteristics of YLWH.
Current Model Structure
The [68] that was adapted for use in C++, the programming language of CEPAC. When finished running, model output can be extracted and analyzed by the user, as shown in Figure 2, which traces a simulated patient's CD4 cell count and HIV RNA over the course of several important clinical events.
Adolescent-specific patterns of medication adherence and care engagement will be simulated based on work in Objective 1. Currently, in the CEPAC models, clinical event risk, retention in care, and adherence vary between individual people, but do not vary over time or with changes in development or life events. The new model structure will be developed to reflect age-and time-varying adolescent-and young adult-specific aspects of HIV disease progression and care for YLWH based on these patterns. The model structure will account for heterogeneity-the specific additions to the model structure will be informed by the data generated through activities conducted as a part of Objective 1. Additional details of the existing CEPAC models, including flowcharts, a user guide, and sample patient traces can be found on the CEPAC website [7].
Outcomes
Clinical events that impact short-and long-term (lifetime) outcomes and health care costs, such as the occurrence of specific opportunistic infections, non-AIDS-defining illnesses, sexually transmitted infections, pregnancy, and psychiatric events, within the categories listed in Textbox 1 will be analyzed.
Statistical Analysis
Incidence rates of each outcome will be estimated, stratified by mode of HIV acquisition and the combination of time-varying age (7-12, 13-17, 18-24, and 25-30 years), CD4 cell count (<200, 200-499, and ≥500/µL), and VL and ARV status, as in previous work [58]. The VL or ARV status will be categorized as follows: (1) suppressive ARVs-VL less than 400 copies/mL and any prescribed ARVs, (2) nonsuppressive ARVs-VL 400 copies/mL or more and prescribed ARVs expected to be suppressive, and (3) no ARVs-VL 400 copies/mL or more and no prescribed ARVs [58]. Linear interpolation between CD4 cell counts and log10-transformed VL will be used to estimate dates when strata thresholds are crossed. These estimated dates will allow us to determine baseline strata and calculate total person-time contributed to each stratum.
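As a minimal sketch of the interpolation step (the visit dates and laboratory values below are invented purely for illustration, not drawn from study data), the crossing date between two visits can be estimated as follows; for VL, the same function would be applied to log10-transformed values:

```python
# Hypothetical example of estimating when a CD4 stratum threshold is crossed
# between two study visits using linear interpolation; values are invented.
from datetime import date, timedelta

def crossing_date(d0, v0, d1, v1, threshold):
    """Date at which a linearly interpolated measure crosses `threshold`
    between visits (d0, v0) and (d1, v1); None if the threshold is not crossed."""
    if v0 == v1:
        return d0 if v0 == threshold else None
    if (v0 - threshold) * (v1 - threshold) > 0:
        return None  # both measurements lie on the same side of the threshold
    fraction = (threshold - v0) / (v1 - v0)
    return d0 + timedelta(days=fraction * (d1 - d0).days)

# CD4 falls from 320 to 150 cells/uL between January and July visits.
print(crossing_date(date(2015, 1, 10), 320, date(2015, 7, 10), 150, 200))
```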
As in previous work, trends in incidence rates of outcomes across ordinal age, CD4 cell count, and VL/ARV categories, stratified by mode of HIV acquisition, will be assessed using Poisson regression models, accounting for within-subject correlation with robust SEs [61]. The hypothesis that higher rates of clinical events will be associated with person-time spent with lower CD4 counts, older age, and at higher VL will be examined [58]. VL of 400 copies/mL or more was selected based on historic lower levels of detection for assays used during the study period [58].
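A sketch of what this regression step could look like in practice is given below; it uses a generalized estimating equation with a Poisson family, a person-time offset, and an exchangeable working correlation to obtain robust standard errors. The data frame is entirely synthetic and the variable names are placeholders, not the study's actual analysis code.

```python
# Hypothetical sketch of a Poisson rate regression with robust SEs that
# accounts for within-subject correlation (GEE). All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "subject_id": rng.integers(0, 60, n),        # repeated strata per subject
    "person_years": rng.uniform(0.2, 2.0, n),    # person-time in each stratum
    "cd4_lt200": rng.integers(0, 2, n),          # indicator: CD4 < 200 cells/uL
    "no_arv": rng.integers(0, 2, n),             # indicator: no prescribed ARVs
})
true_rate = 0.1 * np.exp(1.0 * df["cd4_lt200"] + 0.5 * df["no_arv"])
df["events"] = rng.poisson(true_rate * df["person_years"])
df = df.sort_values("subject_id").reset_index(drop=True)

X = sm.add_constant(df[["cd4_lt200", "no_arv"]])
model = sm.GEE(df["events"], X, groups=df["subject_id"],
               family=sm.families.Poisson(),
               cov_struct=sm.cov_struct.Exchangeable(),
               offset=np.log(df["person_years"]))
print(model.fit().summary())   # GEE reports robust (sandwich) SEs by default
```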
We will also advance approaches to describe and predict the trajectories of CD4, VL, and care engagement over time. Locally weighted smoothing plots will be used to obtain a graphical summary of CD4 and VL trajectories over time by mode of HIV acquisition and baseline age [62]. On the basis of visual inspection, linear regression or piecewise linear regression models will be fitted to obtain slope parameters for CD4 and VL over follow-up time among subjects with at least 2 available measures. Baseline covariates such as mode of HIV acquisition, age, CD4 count, VL, and ART regimen will be added to these regression models to assess association with observed CD4 and VL trajectories. To determine whether there are any important differences between subjects with longitudinal CD4 and VL data and those missing such data, baseline characteristics will be compared between these 2 populations.
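For the graphical summary described above, a locally weighted smoothing fit along the lines of the following sketch could be used; the trajectory data here are simulated purely for illustration and do not represent study measurements.

```python
# Rough sketch (synthetic data) of a LOWESS summary of CD4 trajectories
# over follow-up time, as a starting point for choosing regression forms.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
months = rng.uniform(0, 36, 200)                  # follow-up time (months)
cd4 = 450 + 4 * months + rng.normal(0, 80, 200)   # noisy simulated CD4 values
smoothed = lowess(cd4, months, frac=0.4)          # sorted (time, fitted CD4) pairs
print(smoothed[:5])
```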
If there are sufficient numbers of YLWH who miss visits or are lost to follow-up, these will also be used to identify patterns of care followed by distinct subgroups such as those who are in care, those who are care interrupters, and those who are not in care. Latent trajectory groups will be identified from the study data with group-based trajectory modeling [63,64]. After we identify groups of participants following similar trajectories, in a secondary analysis, we will assess associations between baseline characteristics of study participants and membership in particular trajectory groups. In the CEPAC-Adolescent model, these attributes will be used to account for heterogeneity in care engagement.
Design for Objective 3
The design for Objective 3 was to use the computer simulation model to project the clinical impact, costs, and cost-effectiveness of interventions evaluated in ATN.
Clinical data not specific to ATN interventions will be from Objective 1, reflecting key components of disease progression and treatment for youth with and without VL suppression. Intervention-specific data will be derived from the ATN studies selected for model-based analyses; for ATN interventions, effectiveness, duration of effect, and intervention cost will be parameterized based on data from each modeled ATN trial. Model outcomes will include short-term survival and costs (calibrated to trial results) as well as projected long-term survival and costs, including life expectancy and lifetime per-person costs, and transmissions averted. To compare interventions, incremental cost-effectiveness ratios (difference in lifetime costs divided by the difference in life expectancy, in dollars per year-of-life saved) will be calculated and compared with commonly used thresholds for the United States [72].
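As a purely illustrative sketch of this comparison (all dollar and life-expectancy figures below are invented, and the threshold is an example rather than a recommended value), the calculation reduces to the following:

```python
# Hypothetical ICER calculation: incremental lifetime cost divided by
# incremental life expectancy, compared against an example threshold.
def icer(cost_new, ly_new, cost_old, ly_old):
    """Incremental cost-effectiveness ratio in dollars per life-year saved."""
    return (cost_new - cost_old) / (ly_new - ly_old)

ratio = icer(cost_new=415_000, ly_new=27.4, cost_old=395_000, ly_old=26.9)
threshold = 100_000   # example willingness-to-pay threshold, $ per life-year
verdict = "cost-effective" if ratio <= threshold else "not cost-effective"
print(f"ICER = ${ratio:,.0f} per life-year saved ({verdict} at ${threshold:,}/LY)")
```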
Translating Objective 1 Data Into Model Inputs
Incidence rates of clinical events (Textbox 1, Objective 1) will be converted into monthly event probabilities. During each patient-month for which a specific set of characteristics apply (age, CD4, and ARV/VL category), these data will be used to assign a modeled risk of each key clinical event over the next 30-day period.
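A minimal sketch of this conversion, assuming events follow a constant-rate (exponential) process within a stratum, is shown below; the example rate is invented and does not come from the study data.

```python
# Hypothetical conversion of an incidence rate (per person-year) into the
# probability of at least one event during a 30-day model cycle.
import math

def monthly_probability(rate_per_person_year, days=30):
    """P(>=1 event in `days`) under a constant-rate exponential assumption."""
    return 1.0 - math.exp(-rate_per_person_year * days / 365.25)

print(round(monthly_probability(0.24), 4))   # e.g. 0.24/person-year -> ~0.0195
```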
Resource utilization and cost data related to HIV care, ART, and the occurrence of acute events will be derived from adolescent-specific literature when available and otherwise will be derived from adult literature and varied in sensitivity analyses, as in previous work [29]. When specific interventions and studies are identified as candidates for model-based analyses, the Modeling Core will work with study teams to collect the data necessary for future model-based analyses in real time (eg, time and motion studies, activity logs, and costs of personnel and supplies).
Model Validation and Approach to Uncertainty
The model will be internally validated, assessing the accuracy of the model structure by comparing model output (opportunistic infections, viral suppression probability, and survival) with the empiric data in the studies from which model input parameters were derived [69]. As the model projects outcomes over lifetime horizons, the longer-term model results cannot be compared with empiric data; however, as ATN-studied interventions become more widely implemented over time, past model results will be compared with newly available data. The model will next be calibrated to data from the literature and the studies in Objectives 1 and 2 to reflect current populations of YLWH and treatment strategies. One-way, multiway, and probabilistic sensitivity analyses will be conducted, following international guidelines to address uncertainty in data inputs for the model [70,71]. This involves varying single and multiple parameters over wide ranges and reassessing all clinical results and cost-effectiveness outcomes. Sensitivity analyses can inform the potential impact of strategies in scenarios that more closely resemble programmatic rather than trial settings.
Role of the Funding Sources
The content of this manuscript is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Results
This project was funded as of September 2017.
Discussion
The planned data analyses and model development in Objectives 1 and 2 will position the ATN Modeling Core to evaluate a wide range of new ATN studies and other emerging data. Future work may include new therapies that are likely to be studied in the near future among YLWH, for example, long-acting ART [78]. The Modeling Core also collaborates with the ATN Data Harmonization Working Group to standardize the collection of resource utilization and cost data across all active ATN studies [79].
Strengths and Limitations
This protocol has several limitations inherent to model-based analyses. First, many models necessarily use short-term data to project across longer-term horizons. This extrapolation requires assumptions about whether and how trial-derived clinical risks and costs will change over time. However, when these assumptions are clearly described, examined rigorously in sensitivity analyses, and interpreted appropriately, this ability of models to leverage short-term data into longer-term policy recommendations is one of the key strengths of model-based approaches [4,69]. Second, research participants in Objective 1 studies may not be representative of the larger population of YLWH in the United States. However, these studies remain among the best sources of data for YLWH in the United States. Study-derived risks will be varied widely in model-based sensitivity analyses to examine the potential impact of variations in these results.
Conclusions
In summary, a Modeling Core has been established within ATN 161. A computer simulation model reflecting disease progression, care and treatment outcomes, and HIV transmission among adolescents and young adults will be developed. YLWH are a growing and vulnerable population in the United States, in whom lack of VL suppression contributes to poor clinical outcomes for individual patients, increases health care costs, and drives the ongoing HIV epidemic. Existing data from completed and ongoing NIH-supported studies will be leveraged to develop the adolescent-specific model. The Modeling Core Steering Committee will work closely with ATN leadership and investigators to design and conduct model-based analyses, addressing critical questions about HIV care among YLWH that cannot be fully answered by trials and cohort studies. The Modeling Core will also build a foundation to inform the design of new studies of interventions across ATN and to evaluate the clinical impact and cost-effectiveness of those interventions, directly translating the work of ATN into critical policy recommendations for YLWH in the United States.
Adolescents comprise only a small fraction of participants in HIV-specific health-related quality of life studies, and emerging data suggest that youth may attach different values to specific health states compared with adults [73][74][75][76][77]. Moreover, one study found that, in general, adults place less weight on impairments in mental health (eg, being worried, sad, or annoyed) and more weight on moderate to severe levels of pain, relative to adolescents [75]. In general, values attached to identical health states are typically lower for younger people in comparison with adults of all ages and may depend on the elicitation method utilized [74]. Where available, adolescent-specific utility weights will be incorporated, and the impact of utility weights on policy conclusions will be examined in sensitivity analyses, as in previous work [29].
Our work in Objective 3 will have 4 key areas of emphasis.

We thank Madeline Stern for assistance with formatting and proofreading. This study was funded by U24HD089880-02S1 (principal investigator: Carpenter).
|
2019-03-28T13:34:01.348Z
|
2018-09-27T00:00:00.000
|
{
"year": 2019,
"sha1": "12c73b1c65f0cd93f5bbb584639a83025d9bf209",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.2196/resprot.9898",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "86d6759fbeb65f424d8cadc4f7bddad1f1e136d9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
}
|
221711683
|
pes2o/s2orc
|
v3-fos-license
|
Familial dilated cardiomyopathy associated with a novel heterozygous RYR2 early truncating variant
Sarah Costa, Argelia Medeiros-Domingo, Alessio Gasperetti, Alexander Breitenstein, Jan Steffel, Federica Guidetti, Andreas J. Flammer, Katja E. Odening, Frank Ruschitzka, Firat Duru, Ardan M. Saguner
genetically determined [2,3]. Genetic variants in a number of proteins that affect cardiomyocyte function are an important cause of DCM. Until recently, mutations in the Ryanodine receptor 2 (RYR2) gene have been shown to be involved especially in catecholaminergic polymorphic ventricular tachycardia (CPVT) [4] and arrhythmogenic cardiomyopathy [3]. Herein, a family with a novel early truncation in RYR2 associated with an autosomal-dominant form of DCM is reported.
A 51-year-old woman of Caucasian descent (Case 1) was admitted to the hospital for decompensated heart failure (New York Heart Association [NYHA] stage IV).
Her 12-lead electrocardiogram (ECG) showed sinus rhythm with T-wave inversions (TWI) in leads II, III, aVF and V3-V6 (Fig. 1A). The trans-thoracic echocardiography (TTE) showed a heavily dilated left ventricle (LV) (LV end diastolic volume index [LVEDVi] 129 mL/m²) with a decreased LV ejection fraction (LVEF) of < 15%, in the presence of diffuse hypokinesia without evidence of an LV thrombus (Fig. 1B). The left atrium (LA) was moderately dilated (LA volume index [LAVI] 45 mL/m²). Magnetic resonance imaging (MRI) showed no evidence of fibrosis or fatty infiltration, but confirmed DCM (Fig. 1C) with normal right ventricle (RV) dimensions and function. Medical therapy for heart failure was started and optimized including lisinopril (5 mg bid), later changed to valsartan (50 mg tid), bisoprolol (5 mg bid), spironolactone (25 mg qd), and torasemide (10 mg qd). A 48 h Holter-ECG showed only a very low premature ventricular complex (PVC) burden (0.3%) and no tachyarrhythmia, while two exercise stress tests revealed no PVCs or other forms of ventricular tachyarrhythmia. Nevertheless, despite being on optimal guideline-directed medical therapy at 4 month follow-up, a markedly decreased LVEF on TTE (22%) necessitated the implantation of a subcutaneous implantable cardioverter-defibrillator (ICD) for primary prevention. The family history indicated a familial autosomal-dominant form of DCM: the mother of the index case was transplanted for heart failure due to DCM at the age of 45 years, while the mother's brother died of heart failure due to DCM at the age of 70. No genetic tests or tissue were available. During cascade screening, one of the daughters (Case 2) of the index patient, a 26-year-old woman, was found to have a slightly dilated LV (LVEDVi 66 mL/m²) with a normal LVEF (57%). Her 12-lead ECG showed normal sinus rhythm and normal de-/repolarization. A 48 h Holter-ECG showed a low PVC burden (0.06%) and no tachyarrhythmia. Further investigations revealed late gadolinium enhancement (LGE) on cardiac MRI, specifically in the LV apical and septal areas, as well as a transmural scar in the inferior LV.
The genetic test performed in the index patient (Case 1), using next-generation sequencing technology (Illumina's TruSight Cardio sequencing panel, covering 176 genes), resulted in a previously unreported variant in Exon 4 of the RYR2 gene (heterozygous c.294G>A; p.Trp98Ter), which is considered likely pathogenic (class IV) following the 2015 American College of Medical Genetics criteria [5], since it is an early truncating variant and leads to a significantly shortened and dysfunctional protein product.
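As an illustrative consistency check (not part of the clinical work-up), the annotated consequence can be reproduced from the cDNA coordinate alone, assuming the standard genetic code and the reference tryptophan codon TGG at residue 98:

```python
# Back-of-the-envelope check that c.294G>A lands on the third base of codon 98
# and converts the tryptophan codon TGG into the stop codon TGA (p.Trp98Ter).
cdna_position = 294
codon_number = (cdna_position - 1) // 3 + 1        # -> 98
base_in_codon = (cdna_position - 1) % 3 + 1        # -> 3 (third base of the codon)

reference_codon = "TGG"                            # tryptophan (Trp, W)
mutant_codon = reference_codon[:base_in_codon - 1] + "A" + reference_codon[base_in_codon:]
stop_codons = {"TAA", "TAG", "TGA"}

print(codon_number, base_in_codon, mutant_codon, mutant_codon in stop_codons)
# 98 3 TGA True -- consistent with an early truncation after 97 complete residues
```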
The same variant was identified through Sanger sequencing in the phenotypically affected daughter (Case 2), while it was not identified in the two other healthy siblings. Recently, evidence has been emerging about genes encoding sarcoplasmic reticulum (SR) proteins as putative causes of DCM [3].
The cardiac RYR2 is an important calcium (Ca2+) release channel of the SR and plays an essential role in excitation-contraction coupling in cardiomyocytes [6]. RYR2 dysfunction causes an abnormal Ca2+ leakage from the SR, which can generate delayed afterdepolarizations, which in turn can lead to ventricular arrhythmias [7]. RYR2 variants altering the termination of Ca2+ release seem to lead to a cardiomyopathic phenotype, which is usually associated with mutations in sarcomeric proteins. Specifically, DCM-associated sarcomeric variants tend to decrease the myofilament Ca2+ sensitivity and thus increase cytosolic Ca2+ transients. The abnormal cytosolic Ca2+ transient resulting from altered myofilament Ca2+ sensitivity is thought to trigger cardiac remodeling (via Ca2+/calmodulin-dependent signaling pathways, the calcineurin/NFAT pathways, or apoptotic signaling) that can lead to DCM [8]. Moreover, in dystrophic cardiomyopathy, the RYR hypersensitivity for Ca2+ due to redox modifications is not only responsible for excessive stress responses, but also changes the signal transduction linking L-type Ca2+ channels to RYRs during excitation-contraction coupling [9]. This connection between abnormal RYR function and dystrophic cardiomyopathy further underlines a putative role for abnormal RYR function not only in arrhythmogenesis, but also in cardiomyopathies.
Genetic variants in the RYR2 gene are frequently autosomal dominant and usually associated with CPVT, but radical variants, such as truncating variants, have also been described in the setting of DCM. There are various studies where RYR2 variants have been recognized as causative in a small number of patients with DCM. In 2007, Bhuiyan et al. [10] reported two families with a deletion in exon 3 of the RYR2 gene, displaying a phenotype of CPVT in some family members and a DCM phenotype with LV dysfunction in other family members. Ohno et al. [6] linked large deletions in Exon 3 of the RYR2 gene to LV noncompaction cardiomyopathy in two unrelated probands and their affected family members. This is in line with the present findings: the variant in this family leads to a stop codon which generates an early truncation (exon 4 out of 105 exons), leading to a dysfunctional protein product. Together with the, albeit limited, co-segregation shown in the reported family, this confirms a putative pathogenic role of the currently reported truncating heterozygous RYR2 genetic variant (c.294G>A; p.Trp98Ter) in the setting of familial DCM.
Figure 1. Diagnostic work-up in the index patient. A. 12-lead electrocardiogram showing sinus rhythm with T wave inversions in II, III, aVF and V3-V6, and two premature ventricular complexes originating from the anterobasal left ventricle (LV); B. Transthoracic echocardiogram showing a heavily dilated LV (LV end-diastolic volume index: 129 mL/m²); C. Cardiac magnetic resonance imaging, confirming dilated cardiomyopathy without fibrosis or fatty infiltration.
|
2020-08-05T13:06:29.952Z
|
2020-07-29T00:00:00.000
|
{
"year": 2021,
"sha1": "1ba6f8fd010f89347a52090f78e76199be8286d3",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.viamedica.pl/cardiology_journal/article/download/CJ.a2020.0099/51506",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "dfb3ead77cfd75cb6facbc7b0cdb29289e0ed29e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
211672282
|
pes2o/s2orc
|
v3-fos-license
|
The Empire Tweets Back? #HumanitarianStarWars and Memetic Self-Critique in the Aid Industry
In 2015, a series of memes appeared on Twitter under the hashtag #HumanitarianStarWars. Combining still images from the original Star Wars movies with ironic references to humanitarian/development jargon and institutions, the memes presented a humorous reflection on the modern aid industry. While memetic content has become an increasingly scrutinized area in digital culture studies—particularly with regard to unbounded and anonymous online communities, and popular discursive contestation—this article examines #HumanitarianStarWars to shed light on the possibilities and problematics of social media auto-critique undertaken by “insiders” in a particular professional realm. Keeping in mind critiques of the racial and imperial connotations of the (Western) pop-culture mythology itself, the article explores the use of the Star Wars franchise as a vehicle for commentary on an industry at work in the “Global South.” It highlights an ambiguous process of meaning-making that can be traced through the memes’ generation, circulation, and re-mediation. Although the memes provide a satirical self-reflection on practitioners’ experiences and perspectives of power relations in the global development industry, certain tendencies emerge in their remixing of this Hollywood universe that may reinforce some of the dynamics that they ostensibly critique. The article argues that examination of the ideological ambivalence of an institutional micro-meme can yield valuable insights into tensions playing out in professional social media spaces where public/private boundaries are increasingly and irrevocably blurred.
We have a fairly clear idea of who the participants are in this game of memes because of their literacy in Western popular culture, their use of particular forums, and their obvious familiarity with aid, development, and humanitarian jargon. This is not an "Internet culture" per se, but a "professional" culture: people letting off steam through a memetic auto-critique of the work they do or are implicated in.
The #HSW meme was fairly short lived, and all of the 38 distinct creations linked to this (that the author has identified) were generated by social media users in 2015. The majority of these memes were "image macros" (Wiggins & Bowers, 2015), with a smaller number of Tweets without pictures. This article reflects on trends identifiable within this limited number of texts and analyzes the content and discourse of a selection of these memes. Compared to the billions of views and (potentially) millions of remixes identified for other viral memes (see Soha & McDowell, 2016), the quantity of digital engagement with #HSW seems insignificant, hence my description of it as a "micro" meme. Nonetheless, I argue that a focus on ideological ambiguities reflected in this form of professional communication can be highly instructive, particularly when we can identify specialist media coverage of the trend (Banning-Lover, 2015), and the engagement of social media accounts of "official" institutions-in this case, governmental and non-governmental development organizations. In "getting" (or apparently not getting) the joke, and using the meme to communicate about their work, these institutions re-appropriated and further adapted the non-official labor of industry staff. This exemplifies the blurring of professional/private boundaries that characterizes social media culture in general (Lange, 2007) and intersects with wider changes in patterns and discourses of humanitarian communications that have been precipitated by the development and increased ubiquity of new media technologies.
Participation in the meme reveals the existence of particular discourses and communities within the aid industry. Kanai's (2016) study of "spectatorial girlfriendship," in relation to GIF memes on Tumblr, demonstrates how jokes privilege "ideal readers" who are able to read into memes particular assumptions-in her case study those grounded in neoliberal, post-feminist, and post-race subjectivities (p. 6). Examination of #HSW illustrates the emergence of another particular "readerly lens," in this case characterized by participants' familiarity with abstracting institutional jargon, and ambiguously articulated postures toward dynamics of power and race in their industry. This article thus responds to Kanai's (2016) suggestion that future studies of memetic meaning-making examine "how literacy is demonstrated by followers through their individuated adaptations [of memes]" (p. 6). Despite its relatively small scale, the meme represents a complex and multifaceted "assemblage" (Deleuze & Guattari, 1988;Frazer & Carlson, 2017) of intertextual meaning-making among (Western) professionals in an important industry that engages (sometimes controversially) with people around the globe, but often primarily in the "Global South." With this in mind, the discourses and narratives generated in this process are worthy of critical analysis.
As a White, Western, ex-humanitarian professional and current academic working on related topics, I include myself within the category of institutional "insider" that I identify as being associated with the production of this meme. Indeed, my exposure to #HSW came as a result of my experience in the humanitarian industry, and later in the academic fields of development studies and African studies. Involvement in related (social media) networks meant that I came across the memes, and I started using them as teaching aids to introduce particular topics and theories in a postgraduate "development management" module I was convening at the time (e.g. Figure 1). Much of the content analysis undertaken in the latter parts of this article grew out of self-reflection on this use of the memes as teaching aids, and my interrogation of the messages and underlying assumptions that could potentially be "decoded" (Hall, 2001) from them by students from diverse professional and cultural backgrounds.
The article undertakes its examination of #HSW as a distinctive auto-critical meme in the following steps. I first survey definitions and theorizations of Internet memes that highlight and debate the role of ideology in online spaces. To begin to explore apparent ideological ambivalence in the #HSW case study, I then examine the particular participatory culture within which this meme emerged. In identifying a specific professional participatory culture of meme generation, I illustrate an apparent divergence from theories that have tended to focus either on unbounded and anonymous "Internet cultures" or socially marginalized groups. As such, I contribute to existing theories on meme practices by highlighting what auto-critical memetic culture can help us understand about professional spaces where private/public boundaries are increasingly blurred. Here I outline Chouliaraki's discussion of "post-humanitarian" communications in the social media age (Chouliaraki, 2010) and adapt her conceptualization of the "ironic spectator" (Chouliaraki, 2013) to account for ways in which individuals within the aid industry position themselves through these memes. I argue that these institutional texts reflect tensions within the industry on various discursive levels. This is demonstrated in the final two sections through analysis of examples of the hashtag/meme: first, focusing on the ways in which the Star Wars metaphor provides a vehicle for an apparently anti-imperialistic self-reflection on the industry, and then considering existing film studies literature on Star Wars to probe deeper connotations that can be read in the application of this particular mythology for parodying this specific industry.
This article seeks to both unpack and critically engage with this mode of satire. Leaving aside the fact that the best way to "kill" a joke is to analytically dissect it, I am interested in exploring the underlying tensions that make the memes (potentially) funny. Reading this critique, however, is not entirely straightforward. Star Wars mythology is about empire and resistance, so, on one hand, the use of this visual backdrop speaks to long-standing critiques of the aid industry in reinforcing global inequalities and participating in forms of neo-colonialism. Having said this, the Star Wars mythology, as a Western cultural behemoth in and of itself, has been interpreted through the lens of the racialization of alien species and tropes of White supremacy. Hooper X's (fictional) movie polemic, quoted at the beginning of the article, can be read alongside academic film studies critique that examines the merits of these arguments (Howe, 2012; Wetmore, 2017). As such, I interrogate here whether the use of these images of alien worlds and civilizations to parody aid workers and humanitarian action serves to satirize imperialistic trends of global governance, or instead reflects implicit worldviews of the meme producers themselves, dehumanizing and alienating the people they work "for," and the (Global South) contexts they work in.
Internet Memes and Ideology
The concept of the meme was coined in 1976 by biologist Richard Dawkins as part of his broader theorization of cultural evolution. For Dawkins, a meme constituted a distinct "cultural unit"-such as behavior, a part of speech, a form of dress, a concept (e.g., "God")-that is passed on in human society and that is replicated over time through transmission, mimesis, and adaptation. In the Internet age, this conception of memetic content and behavior has been seized upon by communications scholars to theorize the various ways "semantic units" and "design worth copying" (Pelletier-Gagnon & Pérez Trujillo Diniz, 2018, p. 2) are disseminated, remixed, and (potentially) subverted through online communities.
Far from remaining an abstract analytical concept, the term has entered popular culture with creators of digital content explicitly referring to such images, videos, animations, catchphrases, and #hashtags as memes. Online cultures are awash with a vast quantity of memetic behavior and production. However, only a tiny percentage of memes will "go viral" (a phrase that itself harks back to Dawkins's original biological metaphor) through the sharing behavior of users and the platform or search engine algorithms that facilitate such spread (Knobel & Lankshear, 2007). As such, Shifman (2013) points out that the concept of the meme has moved from academic into popular discourse, and then back again, as "new media" scholars attempt to conceptualize the nature and importance of this type of mass communicative culture and its social, political, and economic impacts. Figure 2 illustrates the journey of the concept, and its recursive, self-referential capacity. It also provides an example of one particular type of online meme: the "image macro" (Wiggins & Bowers, 2015). Such memes usually feature a still image from popular culture paired with a satirical, ironic, incongruous, outrageous, or absurd caption. Often, image macro memes set up the joke in the top line, and feature an image that "baits" the reader into a biased expectation and then "switches" with the apparently incongruous (and thus funny) punchline. The "successful Black man" image macro meme below (and its White variant) are examples of how this tactic can be used to ostensibly subvert racial stereotypes (Figure 3a and b).
One could say "ostensibly" because the ideological intent of the creator cannot necessarily be inferred, and it may remain an open question as to the extent to which such memes challenge through satire-or reproduce through repetition-the stereotypes that the joke relies upon. The ability (and disposition) of audiences to "decode" this material (Hall, 2001) remains important here, but this is complicated by the anonymity of the "produsers" (producer/users: Bruns, 2008) and the opaqueness of environments of production and dissemination.
Ambiguities of satire become salient in certain social media environments that-some would argue-are characterized by the use of humor to bring otherwise unacceptable racist or misogynist content into mainstream discourse. Discussion boards such as 4chan or Reddit have provided fertile ground for anonymized memetic humor, as well as some of the more toxic subcultures of racist and misogynistic discourse that either generate or appropriate memes themselves (Massanari, 2017). Furthermore, when memes "go viral" they penetrate wider online spaces and attract ever more radical remixing and subversion. Pelletier-Gagnon and Pérez Trujillo Diniz's (2018) study of the (in)famous "Pepe the Frog" image charts this meme's online evolution: from the appropriation of an obscure comic book character and the appendage of innocuous taglines, to its "colonisation" as vehicle for "alt-right" racism and a subsequent campaign by the original artist either to reclaim the message or kill off the memetic character completely.
In increasingly polarized online environments, while one audience member may decode content as parody, another may take the extreme viewpoint expressed at face value. Cumulative algorithmic effects of user engagement (irrespective of one's "decoding" or "getting" of the joke) define what goes viral and enters a mainstream of pop-culture discussion and further remixing. Such ambiguities of meme practice contribute to a "post-truth" political climate in which former taboos become expressible and legitimize new types of political agency and entrepreneurship highly sensitive to the communicative dynamics of social media (Tait, 2017). The Trump presidency is, of course, the archetype product of (and producer in) this emergent information environment. Indeed, some media scholars draw explicit links between memetic communication in "toxic" cyber-cultures, and the rise of the "alt-right" and the related electoral success of the celebrity-entrepreneur President (Massanari, 2017;Vaidhyanathan, 2018).
These new objects of study for meme scholars illustrate how research has moved beyond an earlier primary focus on "progressive" political communication, and now often highlight the ideological ambiguities of the publics within which meme generation, engagement, and contestation takes place. For instance, Frazer and Carlson's (2017) discussion of indigenous Australians' meme creation highlights that while memes can challenge official (colonial) histories, these spaces also become sites for contestation and racist re-appropriations of these materials. At the same time, popular engagement with "trolls" can itself have a strengthening effect on these communities of resistance and emancipation.
All of these accounts highlight the need to examine underlying ideologies articulated through meme practices. Moving away from Dawkins's focus on imitation (the "mimeme"),
Wiggins (2019) emphasizes how the concept of the "enthymeme" better captures the practices of discursive and ideological interplay and argumentation that characterize Internet memes (p. 1). He defines Internet memes as "units of discourse [that] indicate an ideological practice" (Wiggins, 2019, p. xv) and describes how the directionality of critical memes generally involves at least two groups: "one which is positioned to 'get the joke' and one which may be the target of the joke" (Wiggins, 2019). However, as will be illustrated in the #HSW case study below, this is potentially complicated by the auto-critical stance of the memes relating to a specific industry and created by "insiders." In Shifman's (2013) approach to meme analysis, "stance" referred to the tone, style or communicative function of the text, to be considered alongside content and form (p. 367). Wiggins (2019, p. 9) points out that as Shifman generally applied this typology to video memes, some adjustment is helpful for analysis of image-based memes, namely a merging of stance and content, "given that the conveyance of ideas and ideologies occurs within deliberate semiotic and intertextual construction, especially with the absence of human speech" (p. 15). I adopt this approach to stance here, taking a particular interest in the intertextual choices made by meme producers in their appropriation of the Star Wars imagery and mythology.
Examining producers' "deliberation" (Wiggins, 2019, p. 17) on how their texts should be ideally understood (and by which imagined audiences) is not always an easy task. Here the concept of "ambivalence" is also important to consider, as has been emphasized in recent analyses of the impact of online cultures on wider political discourse. In Phillips and Milner's (2017) account of "mischief, oddity and antagonism" online, the Internet itself is understood as an inherently "ambivalent" realm. Drawing from Phillips's (2015) earlier work on trolling, they argue that certain communities create cultural products ("jumble[s]" of ideologically incoherent or contradictory material) that position all groups as bait for laughter and are intended to be understood ambivalently (Phillips & Milner, 2017, p. 211).
Problematizing in a similar way a presumed earnest Internet of rational intent and clear impact, Papacharissi (2015) encourages us to consider how platforms and digital practices support affective processes and publics. Here, supposedly clear distinctions between communicative emotion and rationality are collapsed, encouraging scholars to give greater consideration to how structures and logics of online platforms themselves shape the actual content being produced. For instance, Papacharissi (2014) describes Twitter (the main platform engaged with in my case study) as being "defined by hashtags which combine conversationality and subjectivity in a manner that supports both individually felt affect and collectivity" (p. 27). Continuing this discussion and drawing connections between the communicative affordances of online spaces and the ambivalence or ambiguity of meme discourses, the following section outlines some key features of the #HSW meme in terms of its producers, associated networks, and wider trends in the field of humanitarian communications.
#HSW and a Professional Participatory Culture
Before directly engaging with the ideological ambiguity of #HSW, it is first necessary to think more closely about the practices and communities from which the memes emerged.
Here both continuities and contrasts are visible with relation to wider scholarship on memetic participatory cultures.
Defining features of the image macro meme (as with other types of memetic content) are their ease of replication and adaptation. This "remixing" is a quintessential behavior of the Web 2.0 era, and for a meme to survive and spread it needs to be quickly mimicked and altered, with varying degrees of fidelity to its original form or content. This necessity often accounts for the "DIY" character of meme creation or remixing. Here, the sloppy photoshopping of Dawkins's head in Figure 2 corresponds with Douglas's (2014) description of "Internet ugly" as a characteristic aesthetic style. Contributing to the reproduction of particular styles, the use of dedicated online tools allows users to quickly remix the words and images for participation in game-like meme conversations that take place on different online platforms.
The #HSW image macros share much in common with these archetypical memes. They feature recognizable images from popular culture, reworked with ironic captions. They are generated by (semi)anonymous users often with the aid of such online toolkits. They have spread on social media, particularly Twitter, with the hashtag serving as an "indexical" marker to perpetuate the conversation of production and sharing (Bonilla & Rosa, 2015). In other ways, however, they differ. For example, instead of relying on a single iconic image, the memes draw upon the entire universe of Star Wars mythology, with any still from those films being made potentially applicable to the joke. That joke relies partially on a shared understanding of Star Wars, but perhaps much more on producers' common experiences of the humanitarian/development/aid profession.
This raises the question of who exactly is producing these memes, and the importance of identifying a specific participatory culture for understanding memes as a form of organizational auto-critique. Literature on digital memes has often focused on the "huge and heterogenic crowd" (Shifman, 2013, p. 371) of Internet culture writ large. Particular online spaces (such as discussion boards) that have been important sites of meme generation often have producer anonymity ingrained into their user interfaces, while the dissemination and (potentially extreme) remixing of memes often renders the original location or identity of the producing culture even more opaque and unknowable.
Moving beyond this focus on broader (mostly anonymous) Internet communities, scholars have begun to explore meme creation by particular communities, often highlighting the digital discursive engagement of historically marginalized groups such as indigenous peoples (Frazer & Carlson, 2017;Lenhardt, 2016). My analysis of #HSW continues this sectional trend, but instead explores dynamics of memetic production within a particular and identifiable professional culture, itself affected by the blurring of the boundaries of private and institutional communication that social media use affords.
The brief memetic flourish of #HSW hardly took the world by storm, and certainly could not be said to have entered wider public consciousness or popular culture. Nonetheless, it did attract attention from insiders: those members of (offline) professional communities that are connected in some way to the aid industry-be they practitioners themselves, or academic or journalistic commentators focused on the sector. The "insider" status of the participants in #HSW was clear from the social media profiles of many of those posting and reposting the memes. This, along with the small scale of the meme and the particular platform that was predominantly used for sharing content (Twitter), had implications for the (semi)anonymity of their producers. When these image macro memes were being posted within Tweets, it was generally unclear whether that user had created that particular image or had found it and shared it from elsewhere. In this sense I cannot and do not attribute authorship to any of the specific memes that I discuss in this article. Given this ambiguity (and practical difficulties in attributing authorship), I did not seek or gain permission to use or discuss these memes. Although there are important debates ongoing about the ethics of using social media data where boundaries between public and private communications are blurred (Fuchs, 2018;Townsend & Wallace, 2016), I consider this to be less ambiguous in the case of Twitter, a more clearly "public" micro-blogging platform (even as opposed to Facebook) whose users are presumed to understand the public visibility of the content they post or share. To set a requirement to attribute authorship and then gain consent for reproducing and analyzing specific memes would be impracticable and could also foreclose examination of a trend that was picked up on (and contributed to) by "official" development industry institutions themselves, as I discuss below.
There were certain individuals who were linked (often by other participants) with the wider hashtag, but although they may have been associated with the initial generation or spread of the meme their authorship of specific content is not verifiable. This is important considering the wide range of different (and often ambiguous) discourses expressed across the meme series. As such, although I generally do not include individual Twitter user names here, it is important to consider the professional backgrounds of participants. For example, some of the specific figures involved identified themselves as development professionals and/or writers who had moved in these industry circles in various parts of the world. Falling into the latter category was a writer 1 with experience in the NGO sector and a publications history of books such as Expat Etiquette: How to Look Good in Bad Places (Bear & Good, 2016)-a (semi)satirical guide for foreigners working in "difficult" locations.
Subsequent attributable write-ups of the trend on blogs and industry websites also hinted at the genesis of the meme around the water cooler of development agency offices and the fact that many meme producers were working within the industry:

What if the principles applied to humanitarian work were used in Star Wars? To some it sounds rather silly, to others it is a ripe for using to parody the entire industry. Fortunately for all of us, the latter won out and the #humanitarianstarwars meme took off last week on Twitter and is showing few signs of slowing down. An idea said in jest two years ago is now a thing and we are all better for it. Well -at least we can have a few laughs on a Wednesday about it. (Murphy, 2015)

The fact that this participatory culture related to a particular professional community was further reinforced by the engagement in the meme by various development organizations through official social media accounts. Médecins Sans Frontières's (MSF) UK Twitter account remixed the meme (Figure 4) to reference their prohibition on the carrying of weapons in their vehicles-a serious business in conflict zones, where humanitarian agencies are increasingly targeted by belligerents.
An official USAID Twitter response was intriguing in that it seemed to misunderstand the purpose of the meme, stating that its intention was to "make humanitarian issues accessible." 2 Given the reliance of the memes on "in-jokes" involving NGO jargon, it seems unlikely that anyone engaged in their creation was thinking of audiences outside of the industry. The notion that these parodies would make NGO activities and discourses more understandable to outsiders seems to be a misreading of their generally satirical and auto-critical intent.
Miltner's (2014) examination of the absurd and seemingly pointless "lolcat" meme subculture demonstrates the creation of a specific community through a discourse that is unintelligible to outsiders, not "in" on the constantly self-referential and recursive joke. NGOs, of course, already have this: a jargon of buzzwords and acronyms impenetrable (and of presumably little interest) to those not involved in these professional communities. Use of this jargon as part of the joke not only helps us identify the likely "insider" status of the producers but indicates the imagined audiences that these meme-makers have in mind during their engagement with the trend-other professionals who will get the joke. However, as Litt and Hargittai (2016) remind us, an imagined audience may not necessarily align with an actual audience and may fluctuate over time. The appropriation of the meme by the "official" social media accounts described above demonstrates this potential mismatch, either in relation to those organizations' apparent disregard for some of the more radical critiques contained in the memes, or a misunderstanding of the function of the meme for staff joking among themselves.
Regardless, "official" engagement with the meme takes place here in a context where development actors increasingly appreciate the power of, and directly utilize, social media platforms for different forms of public communication including for fundraising, "brand management," advocacy, and awareness raising (Scott, 2014). As such, boundaries between professional and private social media use may become blurred as both individuals and organizations present and express themselves in public. This is a trend I have observed in the Horn of Africa where high-level international agency representatives may maintain private social media accounts but communicate directly through these on the issues and communities with which they work. Given the impact of development/humanitarian interventions, their high (and sometimes controversial) profile, these practices raise questions around reputation management for large organizations. This becomes increasingly salient when considering recent scandals around sexual exploitation by foreign aid workers and attempts to apply #MeToo-type scrutiny around harassment to the industry. Many international organizations have increasingly strict rules about the social media usage of their employees, often because the physical embeddedness of staff in "the field" further contributes to the online blurring of private and professional boundaries of public communications on these platforms.
Beyond operational considerations relating to internal and external institutional communications, the wider discursive or ideological position of this memetic case study also warrants scrutiny. For Chouliaraki (2010), the rise of new media technologies (and Web 2.0 applications) has been one of the driving forces behind shifts in aesthetics and discourses of what she terms "post-humanitarian" communication. She argues that NGO communications toward potential donors and supporters have increasingly encouraged a "self-orientated morality" and a "contingent ethics of solidarity" (Chouliaraki, 2013, p. 5). These move humanitarian discourse away from grand narratives of shared humanity as the basis of compassion and action, to approaches that foreground the feelings and disposition of the (potential) giver. This renders the member of the audience an "ironic spectator": "an impure or ambivalent figure that stands, at once, as sceptical towards any moral appeal to solidary action and, yet, open to doing something about those who suffer" (Chouliaraki, 2013, p. 2). Chouliaraki's account focuses on humanitarian communications that are intended for external audiences, while the case study presented here of (semi)internal communication between figures in the industry offers an intriguing glimpse of how the act of getting and participating in a joke serves to subjectivize humanitarian professionals. Again, ambivalence is a key concept here. For Chouliaraki, this links her understanding of irony as a "disposition of detached knowingness" to the continued imperative for people to act in the face of suffering (Chouliaraki, 2013).
As such, the #HSW meme resonated beyond cyberspace and into a professional "field"-in Bourdieu's sense, an institutional arena where various forms of social, cultural, or symbolic capital are circulated, reproduced, and competed over (Bourdieu, 1990;Gaventa, 2003). The meme provided space for anonymous auto-critique of a massive global sector which affects the lives of millions of people. For many critical or "post" development studies scholars, this is also an industry implicated in regimes of securitized global governance, militarized liberal imperialism, and neo-colonial type power relations of dependence between the "global north" and "south" (Duffield, 2008;Escobar, 1992;Gulrajani, 2011;Hancock, 1992). To highlight the ambiguities that can be read into these memetic jokes, the following two sections deconstruct the visual humor used in a selection of disseminated images. The sample reflects tendencies toward an (internal) anti-imperialist critique of the sector, while also highlighting the limitations of this satire and the problems inherent in using this Western popular culture lexicon in the presentation of development actors and (potentially) racialized "others."
Decoding the Memes: Aid and Empire
The first #HSW meme featured in the introduction to this article (Figure 1) clearly associates development programming with prerogatives of empire. In the Star Wars universe Luke Skywalker emerges as a talismanic hero fighting for the Rebel Alliance against the Galactic Empire. In the meme above, his rise from obscurity on the barren backwater planet of Tatooine might have been prevented had the Empire put more resources into "livelihoods" programming, giving Luke opportunities for productive distraction and inoculating him against the romantic allure of cosmic resistance.
This sci-fi fantasy counterfactual speaks directly to interrogations of the aid industry in the real world, which portray it as being intimately intertwined with projects of "global governance." In Duffield's influential account, this critique is spatialized with reference to the management of global "borderlands," or, more precisely, the dangers that emanate from those regions to threaten a Western or "international order" of globalized market capitalism (Duffield, 2001, 2008). As such, humanitarian programming becomes securitized (and is justified this way to funders in the global north) in terms of the prevention of terrorism, piracy, organized crime, unregulated migration, the risk of epidemics, and so on. This intersects with a wider historical trend that Chouliaraki argues has precipitated the emergence of new forms of (post)humanitarian communications-an intensified "instrumentalization" of the aid industry and development field toward political and economic goals that are in the interest of developed world donors (Chouliaraki, 2013, p. 2; Donini, 2012). Agamben's (1998) theorization of the management of "bare life" in the study of development and humanitarianism has increasingly pushed such critiques to engage with the "biopolitics" of these forms of global governance. "Bare life" is life that is reduced to its most basic biological functions or mere existence, and is maintained through humanitarian action outside of the polis of national citizenry, for instance through the archetypical spatial enclosure of the refugee camp (Turner, 2005). Some would argue that such biopolitics have, in turn, precipitated general shifts in a global humanitarian agenda away from targeted interventions for the preservation of individualized human life in the context of "disasters," to analyses of risk that increasingly focus on species survival and the adaptation (or mal-adaptation) of certain crisis-prone regions to recurring emergencies (Reid, 2010).
The so-called humanitarian wars of the early 2000s-in the context of the "Global War on Terror"-featured the ever closer entanglement of humanitarian agencies with invading Western armies (De Torrente, 2004). Such development actors-infamously described by former US Secretary of State Colin Powell as "force multipliers" (Lischer, 2007)-often struggled to maintain operational independence and deflect charges that they have served as palatable fig leaves for militarized global governance. On 21st-century battlefields in Iraq and Afghanistan, humanitarians became increasingly reliant on occupying forces to maintain their access, while military planners expected humanitarian programming to speed up reconstruction and contribute to the winning of local "hearts and minds." This trend is captured in another satirical #HSW meme, which depicts imperial forces marching through the desert. Here a storm-trooper ruminates: "we're going to build a school after this, right?" In this case the soldier (occupying a barren landscape that may evoke Iraqi or Afghan deserts) is musing on what will come next in the "stabilization" process, after kinetic operations have ceased. Once again a direct link is drawn to militarized humanitarianism and the critique of Western development actors as handmaidens of imperialistic Western governments and projections of geostrategic power.
Other associations in the memes between empire and the aid industry are somewhat less stark, and appear to relate to how individuals working within the industry perceive and experience the wider institutions they are a part of or interact with. A common theme positions the Galactic Empire as the United Nations "system," possibly in reference to how UN agencies relate to other humanitarian actors, like NGOs. Here we find links drawn between the UN's "cluster system" of inter-agency coordination and the Death Star: "that's no moon!" (referring to the organizational diagram of the main sectors of humanitarian action). Elsewhere, the character of Darth Vader reprimands underlings over funding issues; while a joke has it that the construction of the Death Star "faced [a] long procurement process and needed a no-cost extension," parodying procedural finance jargon. Although such memes don't feature the more direct critique of the aid sector's implication in imperial agendas, these more lighthearted associations drawn between agencies and the Galactic Empire hint at how some people in the industry feel about the position and power of big institutions.
Decoding the Memes: Heroes and Aliens?
Returning to the more critical memetic engagements with concepts of humanitarianism and empire, one might consider how their somewhat radical stance can be reconciled with the aforementioned appropriation of the trend by mainstream development organizations on social media. This raises the question of the effectiveness of the satirical project, assuming of course, that one goal of the producers was to initiate some kind of conversation (even in jest) about power dynamics in the industry and critiques of the role of humanitarianism in projects of global governance.
Given the limited scope of the meme and its containment within the relevant professional culture, it may be fair to say that there were few wider discernable impacts of this autocritique-beyond generating "a few laughs" around the office water cooler, as the industry blogger put it above. Having said this, another way to evaluate the stance and effectiveness of the parody involves a deeper reading into the politics and assumptions built into the very format of the meme: that is, the use of a particular (Western) cultural product (the behemoth that is the Star Wars universe and brand) to speak to apparent truths about how (primarily Western) development professionals actually work with, for, and toward populations in the global south. Certain problematics inherent in these portrayals are worth exploring for their underlying assumptions and the possibility that they may reinforce some of the very power dynamics that the memes ostensibly critique.
My reading of the meme series in this way draws on critiques of the Star Wars mythology itself and the idea that the films can be interpreted through a racialized lens. Central here is the notion that certain characters and species evoke (human) ethnic or racial stereotypes, explicitly or implicitly written into their appearance and behavior by their (White) creators. Howe (2012) reviews these critiques and argues that the most persuasive example relates to the links that can be identified between the Tusken Raiders/Sand People of the desert planet of Tatooine and a stereotyped vision of real-life Bedouin peoples (or as a proxy for nomadic pastoralist cultures more generally). Here it is alleged that there is an orientalist-type projection of a Middle Eastern "other"-exotic, violent, irrational, unintelligible (Said, 1978)-into the plotlines of a "galaxy far, far away." One #HSW meme used the Sand People as a representation of populations engaged with by external development actors: "Tusken Raiders do not make good enumerators" the joke went (enumerators are often "local" staff employed in data collection for development projects).
One difficulty, however, with such racialized readings of the fantasy space opera lies in the eye of the beholder, or "decoder" in Hall's terms. Do apparent similarities between a racialized (human) stereotype and a science-fiction alien race stem from the conscious or unconscious intentions of their creators, or do they merely reflect the audience's projection of that stereotype onto this creation? In short, who's being racist here? The creator or the critic? Hooper X's (fictional and comedic) racial critique of Star Wars-quoted from the Hollywood film "Chasing Amy" at the beginning of the article-takes a somewhat different approach by focusing on the way in which good and evil are depicted in the series and their discursive association with "Whiteness" and "Blackness" respectively. This also interrogates how humanity is represented, as well as the simple fact that all of the main (human) heroes of the original trilogy were White-epitomized by "Nazi poster boy" Luke Skywalker, as Hooper's line memorably has it. The fact that subsequent 2000s' reboots of the franchise have made efforts to include (and then foreground) heroic Black representatives of humanity (the casting of Samuel L. Jackson, and then, more significantly, John Boyega) can be read as responses to that type of critique (Watercutter, 2017). They also relate to a greater recognition by modern Hollywood filmmakers of the moral (and potentially commercial) value of "diversity" in the sci-fi fantasy and superhero genres, as the reception and success of Marvel's Black Panther film has demonstrated.
Given the cultural and commercial significance of the Star Wars franchise, both popular and scholarly deconstructions of racialized character identity and narrative have been both inevitable and important. It is therefore an interesting choice of popular culture material for meme creators parodying an industry plagued by its own set of racial politics and power imbalances. How then do we undertake a critical reading of this universe's re-assemblage and remixing in the #HSW meme? It is possible here to map some of these related critiques onto the ways in which identities and characters are deployed in the apparent satire of the industry. This requires reading different levels of the joke and some of the assumptions that may lie behind them. To start with, we can take for example the meme (Figure 5) that riffs on Luke Skywalker's famous interaction with Yoda on the moral compass of a true Jedi. Aside from a surface joke that critiques (Western) development workers' problematic predilection for the "adventure" and "excitement" of the exotic "developing" world, the meme implicitly positions this same professional as the hero of the story-as Luke is in Star Wars. This, arguably, reproduces (and does not challenge) the "White savior" trope that can also be used to critique the development sector, further reinforced by Luke's own racial identity.
Another relevant example here is shown in Figure 6, a meme which depicts the dissatisfaction of the Ewoks of Endor with their characterization (presumably by humanitarians) as "beneficiaries." Again, the meme works by engaging with critiques of the power dynamics of language and jargon in the industry (Kerr, 2008; Win, 2004). "Beneficiary" is a catchall term used by development actors to refer to anyone who directly (and often indirectly) "benefits" from an intervention. This often forms a basis for impact metrics of actions and characteristically assumes that the outcomes of aid are invariably beneficial to individuals. Reducing these people to mere recipients of charity, the catchall term reinforces a prescribed power dynamic of givers and receivers. Here the Ewoks bridle at the label and, presumably, the way it compromises their agency as individuals.
On further reflection, however, it is noticeable that these "beneficiaries" are a non-human species in the Star Wars mythology. If they serve as proxies in the meme for non-Western recipients of development aid, then this dehumanization is potentially problematic. Taken on its own, this individual meme may be of limited significance to the wider #HSW discourse. However, taken together with several other memes in the series, a pattern emerges of development professionals (see Luke Skywalker above) being portrayed through heroic, human (and White) avatars, while non-White populations of the global south have a tendency to be represented by non-human Star Wars characters. Figure 7 critiques the practice of supposedly "progressive" Western development organizations in maintaining pay imbalances between "local" and "international" staff. Once again, this is a valid contention, and one grounded in the realities of inequality across the industry and the power dynamics that are part of the operation of large Western development organizations in the global south. It does, however, make a potentially uncomfortable association between non-White and non-human.
Analyzing the discourse of these memes is complicated by the fact that they are the product of multiple different creators, all engaged in mimicry and remixing within a shared vocabulary that draws on both professional and popular cultures. Furthermore, the memes don't amount to a coherent series of work with entirely consistent themes. Certain broad patterns can be identified (such as problematic associations of people in the global south with non-human species), but this isn't universally reflected in the memes. Recalling the first example on livelihoods programs on "Tatooine" (Figure 1), the potential "beneficiary" of imperial power is a young (White, human) Luke Skywalker. Elsewhere ( Figure 5) Skywalker is positioned as an avatar of the "White savior" development worker. Considering this, a precise reading of individuals' implicit and explicit intent in the creation and dissemination of these memes is difficult to ascertain. For instance, does the "local staff rates" meme above serve to critique the discriminatory practice of wage differentials between "international" and "local" staff in the INGO sector, or does it reproduce this dehumanization through its implicit association of these local staff with non-human characters in the Star Wars mythology? Although the level that the meme works on is left to the audience to decode, what emerges more generally is a striking ambiguity, itself illuminating important tensions being expressed, debated and "remixed" within this particular professional culture.
Conclusion
This article has analyzed the #HSW meme series to advance two points: one related to new media communications-focused theorization of meme culture, and one in regard to the critical potential of these memes in parodying the power relations in the global humanitarian industry. As such, it has argued that memes can be productively studied and theorized beyond the amorphous "Internet cultures" with which they have hitherto been most commonly associated. #HSW is not the product of a message board community connected online through logics of Internet humor. Instead, it is the product of a dispersed but professionally orientated community, bounded in an offline world of institutional jargon and practices. It's an insider joke that follows memetic dynamics of (semi)anonymous participation, production, remixing, and re-appropriation. It can be read as a type of auto-critique of the industry undertaken by insiders who position themselves, in my adaptation of Chouliaraki's (2013) terms, as "ironic spectators." These are individuals who seemingly understand a range of the problems that affect humanitarian and development work, while remaining both aloof in their humorous representations of these issues and still employed in the industry. The significance and scope of the meme remained within this professional context but rose to the extent that it was legitimized by official social media accounts of certain mainstream development organizations themselves. These practices all occur in a professional space where social media use by practitioners is ubiquitous and boundaries between professional and private profiles and communications are increasingly blurred. The meme both reflects and feeds into these dynamics, which themselves have wider implications for both the operational and discursive character of modern humanitarian communications.
Reading these memes as a form of auto-critique of the industry, the article has interrogated the role of the parody and explored certain underlying assumptions that may be considered to undermine its critical potential. The memes often play astutely to anti-imperialist arguments ranged against the industry from the fields of critical development studies and critiques of "global governance." Nevertheless, there remains a trend among some of the memes to present a binary that associates development professionals with White savior heroes and (non-Western) "beneficiaries" with exotic alien species of the Star Wars universe. This could be argued to reinforce some of the very power dynamics that the memes ostensibly critique. It is not my intention here to label these memes as universally "racist"-they are the product of various different creators, and while they share a basic vocabulary and memetic logic of remixing they may be inspired by a range of different experiences and perspectives (albeit largely from within the industry itself). What the article has done is to problematize the use of a Western popular culture lexicon in a memetic depiction of a (Western-dominated) industry. This has illuminated some of the ideological complexities inherent in audiences' "decoding" of the material: on a meta level, for instance, do the dehumanizing tendencies of some of the memes actually serve to critique these very effects in professional discourses and practices of humanitarians, or reproduce them in new forms of expression?
Given the general anonymity of the creators this is largely unknowable, but this should remind us that understanding the nature of the platforms of production is important, and can help us dig deeper into different and ambivalent meanings conveyed through participatory parodies. Recall here the "Chasing Amy" movie line (1997) quoted in the introduction. Although this provides a succinct example of the racialized critiques that have been leveled against Star Wars mythology, we should recognize that the line was penned by a White movie writer/director and delivered by a character (Hooper X) who is disingenuous in his Black nationalist persona: the film portrays him "in reality" as an effeminate gay man who uses his fiery speech and (literally) faked militancy to sell his comic books to impressionable teens. There's a clever nod to the intersectional prejudices he has faced; but does this portrayal undercut any validity of his cultural critique of Star Wars? Are these complexities of producer identity/intent amplified in the social media era, and do these environments of production, or the ideological stance of creators, matter? This article argues that they do, particularly considering wider scholarly attention currently being paid to "post-truth" socio-communicative environments where memetic forms of parody can often be seen to both challenge injustice, while also potentially mainstreaming certain previously unacceptable discourses. As such, #HSW provides an interesting example through which to examine some of the ambiguous and ambivalent ways in which memes work in a new media environment. This case exemplifies the blurring of distinctions between private and public communications, and is grounded in an (offline) professional culture that has power and influence in the "real world."
The Challenges of CASE Design Integration in the Telecommunication Application Domain
The magnitude of the problems facing the telecommunication software industry is presently at a point at which software engineers should become deeply involved. This paper presents a research project on advanced telecommunication technology carried out in Europe, called BOOST (Broadband Object-Oriented Service Technology). The project involved cooperative work among telecommunication companies, research centres and universities from several countries. The challenges to integrate CASE tools to support software development within the telecommunication application domain are discussed. A software process model that encourages component reusability, named the X model, is described as part of a software life cycle model for the telecommunication software industry.
In Europe, a group of companies, universities, and research centres from several countries have gathered to take up the challenge of improving software development productivity within the telecommunication industry. Together with the Commission of the European Communities (CEC), they have sponsored a programme called RACE (Research into Advanced Communications in Europe), and in particular, the BOOST (Broadband Object-Oriented Service Technology) project. BOOST is a CASE environment that supports the development of service engineering software. Among its aims was to encourage software reusability and tools integration to be used by the telecommunication software industry.
This paper is organized as follows: the next section presents important features that should be satisfied by a CASE environment for the telecommunication application domain. In the third section a CASE tool set called the BOOST environment is described. Integration issues, which emerge when a set of tools are put to work together, are discussed in section four. The X model, an alternative software process model that enforces software reuse, is presented in section five; and conclusions on the BOOST project are outlined in the sixth section.
Tools for the Telecommunication Application Domain
There is a widespread tendency for almost everyone to use some network services, both in business and at home. This is because the information one needs may not reside inside a local computer or local network. As business enterprises grow there is also a need for more complex services. Some of them are used on a regular basis, such as airline reservation systems, and others are used temporarily, e.g. automatic election systems. Given that the service engineering market is very demanding, with frequent changes in its constraints, it is imperative to find easy, fast, and cost-effective ways to meet such demands. The reuse of already implemented services provided by existing networks, in such a way that the quality of the new service is guaranteed, becomes a feasible solution.
The high cost and complexity of software development and maintenance and the growing need for reusable software components are some of the factors stimulating research into better software development methodologies and CASE environments (Capretz and Capretz, 1996). Of course, the aim of a methodology is to improve some of the quality, reliability, cost-effectiveness, maintainability, management, and engineering of software. Thus one requirement for CASE tools is that they support and promote a software development methodology by sustaining and enforcing the steps, rules, principles, and guidelines dictated by that methodology.
Even though a set of distinct tools is required, it is clear that the information manipulated by each tool is going to be interrelated. A set of tools isolated from each other will not be conducive to supporting the software development process. Thus the requirement is that the tools are integrated to permit the designer to interchange information freely between one tool and another, and also to allow the designer to switch easily between tools as the needs arise.
There is also a desire for CASE tools integration through expansible environments, driven by the demand for ever-faster development of software systems. Integrated CASE environments can help software engineers to deliver software systems on time. However, to meet these demands, an integrated CASE environment must be based on a flexible framework that provides a cost-effective tool integration mechanism, encourages portable tools, facilitates the exchange of development information, and adapts to future methodologies. In such an environment, software engineers can coherently mix and match the most suitable tools that support selected methodologies. They can then plug some tools into the environment and begin working with them.
Design is normally an iterative process, and the ability to easily navigate between different tools and notations is important to permit the designer to view concurrently different facets of software development. The ability for a designer to navigate around is also vital, as reusability is something that a CASE environment must promote. A designer must be able to browse through already-captured parts of previous designs to try to see whether any components from prior work can be reused.
BOOST -Broadband Object-Oriented Service Technology
As broadband systems evolve, existing infrastructure is getting a new lease on life through Integrated Service Digital Network (ISDN). AT&T has deployed wideband ISDN, which can deliver high-quality colour images simultaneously with voice and data across the United States, and in many other countries. Across the North Atlantic, some European companies have formed a consortium to finance the BOOST project to face the challenge of rapidly creating broadband services.
In order to equip the service engineering industry to meet market challenges, BOOST has aimed at providing a software environment for service engineering by taking the pragmatic approach of enhancing existing technology by evaluating and improving it in a series of usage trials. Specifically, the major objectives of BOOST are:
• to deliver a service engineering environment, based on the rapid enhancement of existing software engineering tools;
• to evaluate and demonstrate the feasibility of the environment through a series of trials;
• to satisfy the complex needs of the service engineering application domain;
• to ensure the early availability of service engineering tools for use by the application pilots and other interested RACE projects;
• to ensure the uptake of the environment by the telecommunication software industry.
Within the BOOST Consortium, the partners have realized that some of them provide software for the telecommunication industry and others need such software to solve their problems. Therefore they divided themselves into problem owners and software providers. The BOOST partners, their origin, and their role as problem owner or software provider, are presented in Table 1 and Table 2, respectively. The structure of the project has been basically split into three main work-packages (WP):
• Foundations WP: performed the research and was responsible for inter- and intra-project cooperation.
• Trials WP: evaluated the tools separately during the creation (by problem owners) of real services running on real networks.
• Tools WP: developed the BOOST environment by integrating various software providers' tools.
The foundations work-package was mainly concerned with the definition of requirements (from both the problem owners' and software providers' viewpoints), the architecture to be used in the BOOST environment, and a relevant process model to create services. The first part of the trials work-package was concerned with producing detailed specifications of the trial scenarios:
1. The DeTeBerkom trial involved a medical conferencing service on the BERKOM network in Berlin (Germany). This service enabled general practitioners and consultants to communicate remotely using video-telephony and the simultaneous display of medical images (3D tomography) on each one's workstation.
2. The SEL Alcatel trial provided multimedia services on an Alcatel network in Stuttgart (Germany). This trial used multimedia workstations developed in a previous project that supported video-telephony, joint cooperative working and joint document editing.
3. The University of Aveiro built a Pay-per-View TV service on a local network in Aveiro (Portugal).
4. CET and Telefonica worked on the creation of services on an intelligent network. They developed a Distributed Functional Plane Platform, which was used to test and validate services prior to deployment.
In the early stages of this project, trials were mainly concerned with distribution of information related to the tools which software providers were bringing into the project, and with selection of tools by the problem owners (trial partners). The resulting selection was based on demonstration and presentation of plans for tools enhancements. There was a significant effort in packing and distributing the tools and offering help and training in the use of those tools. Based on discussions with the trial partners and their experience with the tools, the tools work-package started making some initial enhancements to the tools set.
A number of candidate tools to be included in the BOOST environment are shown in Table 3. The tool names are on the vertical axis. The horizontal axis represents the various software development activities covered by each tool. It is important to notice that at least one tool tackles each important aspect of the software development process, from requirements to configuration management.
Integration Issues
CASE technology has made significant advances, but its potential is limited by integration difficulties. An integrated CASE environment must have a flexible architecture that can adapt to new methods and extend to other areas. However, to meet these demands, an integrated CASE environment must be based on a framework that provides a cost-effective tool integration mechanism, encourages portable tools, facilitates the exchange of development information, and adapts to future methodologies. The information that a tool can capture should ideally be stored in a single database to be shared by other tools in the same environment. However, this kind of integration is limited to tools from the same provider, and, generally, data within the same project. It is also possible to loosely integrate a CASE environment with translators that import and export information between tools. In a few cases, it is even possible to link tools released by different software providers if they agree on data formats and interfaces. However, to get to a level higher than syntactic integration (lexical checks), it is necessary to encode the semantics of the methodology into the tools; this translates into considerable implementation effort. The idea that it is possible to simply put together tools from different sources and they will be fully integrated is clearly dangerous if not impossible, unless perhaps the tools are semantically integrated before being gathered together.
Three forms of integration must be borne in mind within the BOOST context:
1. Data integration: it is supported by a unified data model and a common database. The goal of data integration is to ensure that all software information in the BOOST environment is managed as a consistent whole, regardless of how parts of it are manipulated.
2. Interface integration: it is contemplated by a uniform user interface. The goal of interface integration is to improve the efficiency and effectiveness of the man-machine interaction by being as friendly as possible.
3. Control integration: it is assured by a monitor that manages inter-operation and communication among tools. The goal of control integration within BOOST is to allow a flexible combination of functionality from different tools, driven by the underlying methodologies the environment supports.
The EAST environment has been conceived to meet the following requirements: tight integration between a variety of tools, customization of working procedures and overall management of large projects. It supports data integration, interface integration and control integration. It provides the essential services to sustain other important aspects of the software development process, such as: process management, project management, configuration management, and code generation, among others. The EAST environment also provides some benefits such as: completeness and consistency checks, concurrent multi-user access and automatic document generation. On top of that, it offers process-to-process communication facilities and an encapsulation mechanism for environment expansibility.
As the EAST environment is open to the addition of outsider tools, this capability makes that environment a good platform for integration of methodologies to cover the whole software life cycle. The integration of any additional tool into the EAST environment requires writing a tools script (called a capsule), which activates the new tool and allows the communication between that tool and the environment. There is a written procedure for software engineers who wish to encapsulate their own tools. Because of such features, EAST has been chosen as an ideal platform to integrate the BOOST tool set, which allows expansibility in that it is possible to glue tools together semantically. This is an important feature, given that there are several tools from different software providers.
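To make the encapsulation idea more concrete, the following minimal sketch (in Python, purely for illustration) shows the general pattern a capsule embodies: a thin wrapper that activates an external tool and relays events between that tool and the environment's control-integration monitor. The actual EAST capsule scripting notation is not reproduced in this paper, so every name in the sketch (ToolCapsule, Monitor, the event strings) is hypothetical.

```python
# Illustrative sketch only: the EAST capsule language and API are not shown
# here, so all names and signatures below are invented for the example.

import subprocess


class Monitor:
    """Minimal publish/subscribe monitor standing in for control integration."""

    def __init__(self):
        self.capsules = []

    def register(self, capsule):
        self.capsules.append(capsule)

    def notify(self, sender, event, data):
        # Relay an event to every tool except the one that raised it.
        for capsule in self.capsules:
            if capsule.name != sender:
                capsule.on_event(event, data)


class ToolCapsule:
    """Hypothetical wrapper that plugs an external tool into the environment."""

    def __init__(self, name, command, monitor):
        self.name = name          # logical tool name, e.g. "ObjectEditor"
        self.command = command    # how to launch the external tool
        self.monitor = monitor    # shared control-integration monitor
        monitor.register(self)

    def activate(self, document):
        # Start the tool on the shared design document and announce it.
        self.process = subprocess.Popen(self.command + [document])
        self.monitor.notify(sender=self.name, event="tool-started", data=document)

    def on_event(self, event, data):
        # Called by the monitor when another tool changes shared data, so this
        # tool can refresh its own view of the common database.
        print(f"{self.name}: received {event} for {data}")
```

The design point of this pattern is that the monitor is the only component that knows about all tools, so a new tool can be plugged in by writing one more capsule without modifying the tools already integrated.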
The X Model
Software reuse can be broadly defined as the use of engineering knowledge or assets from existing software systems to build new ones. This is a natural technique to increase software development productivity and holds the promise of shortening software delivery time. There are COM+ from Microsoft, Enterprise JavaBeans from SUN, Component-Broker from IBM, and CORBA, among other projects in that vein.
Nowadays, software product lines (Clements and Northrop, 2002;Donohoe, 2000) are also expected to have a significant impact on the software development productivity. Software product lines usually start from analysis of the common and the variable features supporting a product-line development, and then define a set of reusable elements that can be customized and combined into new products (Kang et al., 2002;Jaaksi, 2002). A product line can be built around a set of reusable components by analysing the products to determine the common and variable features using a technique called domain analysis. The software engineer develops a product structure and implementation strategy around a set of reusable components that can be glued within an architecture used as a platform for several products. If done properly, this shift can help establish a sustainable modernization practice within the software industry.
The use of software product lines as a platform of larger systems is becoming increasingly commonplace. Shrinking budgets, accelerating rates of software component enhancements, and expanding systems requirements are all driving that idea. The shift from custom development to software family is occurring in both new development and maintenance activities. Product lines are application domain focused, based on a controlled process model, and concerned primarily with the reuse of higher-level software assets, such as requirements, designs, frameworks and components. A product line based on component-based software development has broad implications for how software engineers develop and evolve software systems, so this technique is here to stay.
A component can often be customized simply by using parameters to account for differences in the products that are not expressible by just selecting alternative components. In other cases a general component could be substantially modified to create a unique component for a specific product. More abstract components can be specialized to express key variability. Abstract components that have been implemented using an object-oriented language can often be extended through inheritance to create a concrete component that meets a particular need. This customizability greatly expands the number of applications for which a component can be re-used; therefore a re-user should exploit different mechanisms to customize a component to a particular application domain. A software system can therefore be regarded as comprising two parts:
• An information model that represents the general aspects of a software system.
• A behaviour model that represents the application-specific parts of a software system.
The information model is composed of a global view of the static representation of components of the software system, and is built during a stage that can be named generic design. The behaviour model is concerned with the dynamic relationships between components, showing what objects are instantiated, how objects are composed, and how they interact in the specific application. This model is created during what can be termed specific design. BOOST allows the distinction between the general and the specific parts of a software system. This distinction between the generic and specific aspects of a software system is an important characteristic. The idea of being able to classify parts of a design as generic (and hence potentially reusable assets) is a powerful reason for maintaining this distinction and indeed for spending more time on the general aspects of a software system than might really be needed for a particular application.
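As a rough illustration of this generic/specific split, the sketch below (Python, with class names, parameters and a service scenario invented for the example rather than taken from BOOST) shows a generic component whose variability is expressed through parameters and an overridable hook, and a specific component derived from it for one particular service.

```python
# Illustrative sketch only: the classes and the conferencing scenario are
# invented; they are not part of the BOOST tool set.

class ConnectionManager:
    """Generic component (information model): reusable across many services."""

    def __init__(self, max_parties=2, media=("audio",)):
        self.max_parties = max_parties   # variability expressed as parameters
        self.media = media
        self.parties = []

    def join(self, party):
        if len(self.parties) >= self.max_parties:
            raise RuntimeError("connection is full")
        self.parties.append(party)
        self.on_join(party)              # hook for application-specific behaviour

    def on_join(self, party):
        pass                             # generic design leaves this empty


class MedicalConferenceConnection(ConnectionManager):
    """Specific component (behaviour model): customised for one application."""

    def __init__(self):
        super().__init__(max_parties=8, media=("audio", "video", "images"))

    def on_join(self, party):
        print(f"notify consultants: {party} joined the conference")
```

In this reading, the base class would be the reusable asset catalogued in a library during generic design, while the subclass is produced during specific design for a single product.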
Traditional software process models do not encourage reusability within their phases. Therefore a software process model that emphasizes the importance of reuse during software development was needed. The BOOST project introduced an alternative software process model, named the X model (Markopoulos, 1993) and depicted in Figure 1. This model enforces software reuse while software development is being carried out. The model focuses on a collection of assets that can be taken from reusable libraries as well as the production of potentially reusable assets.
The X model links analysis, design, implementation, unit test, integration test, and system delivery into a framework which takes into account software development with reuse of existing components as well as production of assets for future reuse. This model addresses the mechanisms used when assets are taken from and stored into reusable libraries; it supports development with reuse through component assembly, as well as development for reuse through component cataloguing. It has been proposed as an ideal software life cycle model for the telecommunication software industry.
The search for a component in a reusable library can lead to one of the following possible results: • An identical match between the target and an available component is reached.
• Some closely matching components are collected, then adaptations are necessary.
• The design is changed in order to fit available components.
• No reusable component can be found; if so, the target component should be created from scratch.
While searching for components, it is necessary to address the similarity between the required (target) component and any near-matching components. The best component selected for reuse may also require specialization, generalization, or adjustment to the requirements of the new software system in which it will be reused. Sometimes, it is preferable to change the requirements in order to reuse the available components. The adaptability of the components depends on the difference between the requirements and the features offered by the existing components, as well as the skill and experience of the software designer. The process of adapting components is the least likely to become automated in the software reuse process.
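Very schematically, the four search outcomes listed above can be expressed as the decision logic sketched below (Python; the feature-overlap similarity measure and the thresholds are assumptions made for illustration only, not part of the X model).

```python
# Toy sketch of the search outcomes; feature sets, scores and thresholds are
# invented purely to illustrate the decision logic, not prescribed by BOOST.

def similarity(target, component):
    # Placeholder metric: overlap between required and offered features.
    required = set(target["features"])
    offered = set(component["features"])
    return len(required & offered) / len(required) if required else 0.0


def find_reusable_component(target, library, adapt_threshold=0.7):
    """Return (outcome, component) for a required 'target' specification."""
    best, best_score = None, 0.0
    for component in library:
        score = similarity(target, component)
        if score == 1.0:
            return ("identical-match", component)   # reuse as-is
        if score > best_score:
            best, best_score = component, score

    if best_score >= adapt_threshold:
        return ("adapt", best)             # close match: specialise or generalise
    if best_score > 0.0:
        return ("change-design", best)     # reshape the design around what exists
    return ("build-from-scratch", None)    # nothing reusable was found
```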
One of the major problems that software engineers are faced with in trying to reuse software is the difficulty of finding reusable components once such components have been produced. This is primarily because few mechanisms are available to help identify and relate components. In order to provide more convenient reuse, the question of what kinds of mechanisms might help solve this problem arises. The answer is typically couched in terms of finding components that provide specific functionality, from a library of potentially reusable components linked through relationships that express their semantics and functionality (Capretz, 1998).
So far, most browsing tools assume that component retrieval is a simple matter of matching well-formed queries against a reusable library. But forming queries can be daunting: a software engineer's understanding of the problem evolves while searching for a component, and large reusable libraries often use an esoteric vocabulary or jargon dependent on the application domain. There is therefore still a demand for new tools that support incremental query construction, yielding a flexible retrieval mechanism that satisfies ill-defined queries and reduces the terminology problem.
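A minimal sketch of incremental query construction, assuming a hypothetical keyword index over the library (none of these names come from BOOST), might look as follows:

```java
import java.util.*;
import java.util.stream.Collectors;

// Hypothetical incremental query over a small reusable library indexed
// by descriptive keywords; each refinement narrows the previous result.
class LibraryQuery {
    private final Map<String, Set<String>> keywordsByComponent;
    private Set<String> candidates;   // current result set, narrowed step by step

    LibraryQuery(Map<String, Set<String>> keywordsByComponent) {
        this.keywordsByComponent = keywordsByComponent;
        this.candidates = new HashSet<>(keywordsByComponent.keySet());
    }

    // The query is built up gradually, so it can start ill-defined and be
    // refined as the engineer's understanding of the problem evolves.
    Set<String> refine(String keyword) {
        candidates = candidates.stream()
                .filter(c -> keywordsByComponent.get(c).contains(keyword))
                .collect(Collectors.toSet());
        return candidates;
    }
}
```

For example, refine("telephony") followed by refine("charging") would first narrow a hypothetical library to all telephony-related components and then to the charging component alone, without the engineer ever having to phrase one complete, well-formed query up front.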
Tools can manipulate reusable libraries by storing, selecting and browsing the reusable components in these libraries. Selection involves browsing to find a component, retrieving it, and deploying it into the developing software system after domain analysis has been carried out. Conversely, if a newly developed component does not exist in the reusable library, a decision has to be made as to whether it should be classified as a reusable component. Before a component can be added to a reusable library, it must be validated and frozen; the validation is applied only to that particular component, not to the whole software system, and should include the treatment of exceptional conditions. Storing a component involves classifying it, extracting it from the developing software system, relating it to other components, and placing it into a reusable library as an asset.
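A minimal sketch of that storage step, again with purely hypothetical names, could validate a component in isolation before freezing, classifying and relating it as an asset:

```java
import java.util.*;
import java.util.function.Predicate;

// Hypothetical library asset: classification keywords and relationships
// are frozen (copied into immutable sets) when the asset is created.
class ReusableAsset {
    final String name;
    final Set<String> keywords;
    final Set<String> relatedAssets;

    ReusableAsset(String name, Set<String> keywords, Set<String> relatedAssets) {
        this.name = name;
        this.keywords = Set.copyOf(keywords);
        this.relatedAssets = Set.copyOf(relatedAssets);
    }
}

class AssetStore {
    private final Map<String, ReusableAsset> assets = new HashMap<>();

    // The validator stands in for component-level tests, including tests
    // of exceptional conditions; only validated assets enter the library.
    void store(ReusableAsset asset, Predicate<ReusableAsset> validator) {
        if (!validator.test(asset)) {
            throw new IllegalArgumentException(asset.name + " failed validation");
        }
        assets.put(asset.name, asset);
    }
}
```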
Conclusions
The key issue in designing or selecting an integrated CASE environment as a platform for customized tools is how to strike a balance between integration and flexibility; tighter integration usually means less openness and extensibility. It has been taken for granted that extensible environments are a reality brought about by CASE tools and toolkit interfaces. It is true that it is possible to attach tools to particular parts of a database and share their functionality, but CASE environments only make integration easier, not simple; they are not a panacea, and they do not offer integration without a cost.
The focus of CASE research within BOOST has shifted from making sure that each tool works to ensuring that several tools can work together. Hence there can be several independent tools from different providers, but with functionality linked in such a way that these tools cover the whole software development process. The integration between such tools has been achieved through the use of a unified representation model, a uniform interface, and a monitor that establishes the communication protocol among the tools.
Finally, although many network services share a great deal of functionality that could easily accommodate the requirements of a new service, software reusability has not been very common in the telecommunication industry. The BOOST project has looked at the methodologies available for service creation and has attempted to find a software process model suitable for telecommunication software development with reuse. The X model has been used and evaluated by major European telecommunication service providers in the context of the BOOST project. The model has proved able to cope with the inherent complexity of telecommunication software; it covers the likely phases of large software development and strongly supports software reuse. The experience gained in this project has been of paramount importance, because component-based software design and software product lines are believed to be, in the next few years, key factors in improving software development productivity and quality.