Dataset fields: id (string) | source (string) | version (string) | text (string) | added (date) | created (date) | metadata (dict)
id: 134572284 | source: pes2o/s2orc | version: v3-fos-license
Man-induced transformation of mountain meadow soils of Aragats mountain massif (Armenia)
This article addresses the degradation of mountain meadow soils of the Aragats mountain massif of the Republic of Armenia and presents averaged research results for 2013 and 2014. The research was initiated as part of long-term comprehensive investigations of the agroecosystems of Armenia's mountain massifs and covered the sod soils of high-mountain meadow pasturelands and meadow steppe grasslands on the southern slope of Mt. Aragats. To study the migration and transformation of flows of the major nutrients, namely carbon, nitrogen, and phosphorus, in the mountain meadow and meadow steppe belts of the Aragats massif, we investigated the water migration of chemical elements and the patterns of their leaching in the different belts. Field measurements indicate that the organic carbon and humus contents of a heavily grazed plot are almost half those of a control site. Analysis of lysimetric data demonstrates that heavy grazing and illegal deforestation have led to an increase in intrasoil water acidity. The results of this research support the conclusion that human intervention has disturbed the structure and the nutrient and water regimes of the soils and has caused the loss of significant amounts of soil nutrients throughout the studied region.
Introduction
It is generally agreed that human activities interfere with most natural processes and ultimately lead to disturbances in ecosystem development, pollution of soil and vegetation with diverse substances, loss of essential biogenic substances from soils, declining humus content, deterioration of soil fertility and plant productivity, and, as a consequence, soil degradation and desertification.
Heavy grazing and illegal deforestation are among the man-induced factors transforming the soils of the Aragats massif. Heavy grazing across the study region has already caused destruction of the soil cover, sod slippage, and pasture degradation, namely the replacement of perennials by annual plants, reduced root penetration depth, soil compaction, and deterioration of the soil water and air balance, which ultimately leads to soil erosion [1,2].
As for deforestation, such activities violate national forest protection standards and measures and contribute substantially to the acceleration of eluvial processes and the consequent removal of biogenic elements by intrasoil runoff. A change in the elemental composition of intrasoil runoff shows that the balance between the major biogenic substances is significantly disturbed. It is also essential to note that, under natural conditions, this balance is maintained by a dynamic equilibrium between the eluvial and illuvial processes running in forest soils [3,4].
Owing to man-made activities in the Aragats massif, as in other mountain regions of Armenia, the loss of stored biogenic elements (carbon, nitrogen, phosphorus) from soils is accompanied by the mineralization of a considerable amount of organic matter. Simultaneously, biogenic elements are converted from organic into inorganic forms, which promotes their leaching from soil into ground and surface waters. One of the factors determining the loss of biogenic substances is intrasoil runoff [4]. Lysimetric studies of intrasoil runoff are among the most informative methods for comprehensive landscape-biogeochemical investigations into the condition and functioning of mountain ecosystems. Lysimetric solutions, which characterize the vertical migration of substance flows, not only partially reflect biogeochemical cyclicity but also provide direct information about the geochemical specificity and functioning of ecosystems [5].
This research was carried out within the framework of long-term comprehensive investigations of the geoecological problems of agroecosystems in Armenia's mountain massifs. We studied high-mountain meadows and pastures in the meadow steppe and alpine belts of the Aragats massif.
To study the migration and transformation of flows of the major nutrients, namely carbon, nitrogen, and phosphorus, in the mountain meadow and meadow steppe belts of the Aragats massif, we examined the water migration of chemical elements and the patterns of their leaching in the different belts.
This article presents averaged research results obtained in 2013-2014.
Material and methods
The research covered the mountain meadow sod soils of the alpine belt (2700-3250 m a.s.l., pastureland) and the meadow steppe belt (2080-2700 m a.s.l., grasslands) on the southern slope of Mt. Aragats. Soil sampling and analyses were carried out by accepted methods of landscape-geochemical investigation [6,7].
Atmospheric deposition and lysimetric waters were sampled at monitoring stations located in these belts at 3250 m a.s.l. (the Aragats station) and 2080 m a.s.l. (the Hamberd station).
The initial stage of analytical treatment of the samples included solid-liquid phase separation of the solutions.
Vertical intrasoil runoff was studied by the lysimetric method, which provides information on the volume of soil runoff and on the chemical composition and migration of elements in soil water, and which allows nutrient losses through leaching to be assessed [5]. The lysimeters were installed horizontally in the 0-10 and 0-50 cm soil layers so as to minimize damage to soil structure and composition.
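To make the leaching assessment concrete, the mass of a nutrient removed per unit area over a collection period is the product of its concentration in the lysimetric water and the volume of percolate collected. The Python sketch below illustrates this standard unit conversion; the element names, concentrations, and percolate volume are hypothetical, not values from this study.

```python
# Minimal sketch: nutrient leaching load from lysimeter data.
# load (kg/ha) = concentration (mg/L) * percolate volume (L/m^2) * 0.01,
# since 1 mg/m^2 over one hectare equals 0.01 kg/ha. All inputs are hypothetical.

def leaching_load_kg_per_ha(conc_mg_per_l: float, percolate_l_per_m2: float) -> float:
    """Mass of an element leached per hectare over the collection period."""
    return conc_mg_per_l * percolate_l_per_m2 * 0.01

percolate = 120.0  # L/m^2 of intrasoil water collected by a 0-50 cm lysimeter
for element, conc in {"N": 4.2, "P": 0.15, "Ca": 18.0}.items():
    print(f"{element}: {leaching_load_kg_per_ha(conc, percolate):.2f} kg/ha")
```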
Macrocomponents in all water samples were determined by accepted hydrochemical methods [8,9].
Results and discussion
According to the research results, the concentrations of solutes in atmospheric precipitation vary within wide limits, which is typical of high-mountain regions and is associated with the dynamics of a number of meteorological factors, namely temperature, air humidity, wind direction and velocity, and the intensity and amount of atmospheric precipitation [10].
Recent studies of the composition of atmospheric precipitation in mountain ecosystems indicate that the contents of chemical substances in atmospheric precipitation and the rates of mineralization (at pH 7.3-7.4) have been increasing over the years. The meadow steppe belt exhibited higher contents of chemical elements than the alpine belt, which may be connected with the higher level of air pollution in that belt [1]. We also established that hydrocarbonates and sulfates are present in atmospheric precipitation in large quantities. By averaged content, the cations were dominated by calcium and magnesium and the anions by sulfates. The cations form the descending series Ca²⁺ > Mg²⁺ > K⁺ > Na⁺, and the anions SO₄²⁻ > HCO₃⁻ > Cl⁻. It is known that the basic equilibrium-maintaining mechanism of an ecosystem depends on the soil cover, where the flow of atmospheric substances is distributed between vegetation and ground waters [3].
The soils of the Aragats massif differ in the thickness of the humus horizon and the content of organic matter. These differences are determined mainly by elevation, slope aspect and steepness, and vegetation cover [10]. Comparison of the newly obtained data with earlier data [11] indicates that the humus content of meadow steppe soils varies from 5 to 8% and total nitrogen from 0.20 to 0.33%. The content of total phosphorus is high (0.19-0.26%) and that of potassium low (1-1.4%). These soils are not rich in available nitrogen and phosphorus and are moderately to well supplied with potassium. The mountain meadow sod soils of the alpine belt are characterized by high contents of humus (10-12%) and total nitrogen (0.30-0.80%). Compared with the meadow steppe soils, the contents of total phosphorus in the mountain meadow sod soils are higher (0.20-0.40%), which is determined by its more intense bioaccumulation in the humus horizon, whereas total and mobile potassium are considerably lower. This phenomenon is explained by the "lighter" granulometric composition of the mountain meadow sod soils of the alpine belt. From the meadow steppe towards the mountain meadow soils, the contents of humus, total nitrogen, and phosphorus and the soil acidity increase, whereas the potassium content decreases.
The high contents of humus, total nitrogen, and phosphorus in the mountain meadow sod soils of the alpine belt are determined by the peculiarities of soil formation processes running under humid climatic conditions and relatively low temperatures. These conditions favor the accumulation of organic matter and hinder its decomposition, which prevents the removal of organic matter from the ecosystem [10].
An important role of soil organic matter is that it improves soil structure when transformed by soil microorganisms. The organic and mineral acids released by microorganisms promote the cementation of soil particles into water-resistant aggregates. The formation of soil aggregates is supported by the microbial and chemical products of organic matter transformation: humic substances, polysaccharides, and microbial cells [3,12,13].
Soil organic matter, the major factor in the development of a sustainable ecosystem, plays a dual role. On the one hand, it serves as a feed source supporting the activity of microorganisms and therefore determines the intensity of redox processes in soil. On the other hand, soil organic matter involved in redox reactions has a biochemical effect on soil conditions. Humic substances are also exposed to various transformations leading to the destruction of water-resistant aggregates [3,12,13].
Earlier research [2,4] indicated that the organic carbon and humus contents of a heavily grazed plot are almost half those of a control site. The decrease in soil organic matter is determined primarily by the destruction of the physical structure of the surface soil horizon, which, given the active reaction of excrement in the upper sod layer, leads to the destabilization of biochemical compounds, i.e., soil degradation accompanied by the loss of nitrogen and carbon compounds in gaseous form and by the leaching of elements.
When assessing the consequences of deforestation as an ecological risk factor, two aspects characterizing the condition of soils must be taken into consideration. The first is the disturbance of vegetation cover stability as a result of negative deforestation-induced changes in soils and the possibility of a significant and rapid change in the physico-chemical properties of soil under humid climatic conditions and on sharply sloping relief. The second is the assessment not only of changes in ecological conditions but also of the associated ecological risks, given the use of the natural resources of the study region. These two aspects underlie a full-scale assessment of the natural potential and resource capacity of the study region and of the ecological changes within it [3,4]. Hence, the eco-geochemical situation that emerges in deforested areas leads to the loss of nutrients and is determined by the disturbance of the dynamic equilibrium of soil formation and ecosystem processes.
Lysimetric data obtained in this research (Table 1) indicate that deforestation entails an increase in intrasoil water acidity of 1.4-1.7 units.
Increased intrasoil water acidity accelerates eluvial processes and therefore contributes to the removal of biogenic macro- and microcomponents by runoff. This in turn leads to a dramatic decrease in the soil nutrients available for root nutrition [4,5].
In intrasoil water at a depth of 0-50 cm, the concentrations of Ca²⁺, Mg²⁺, K⁺, and PO₄³⁻ decrease by 1. It is known that under direct solar exposure humus substances are built up in the uppermost layer, but in mountain conditions the intensity and duration of solar radiation, together with the illuvial migration of substances, determine the influence of photochemical destruction on humus composition and the lower horizons [3,12]; subsequently, acid solutions act on the mineral composition of the soil. A change in the elemental composition of lysimetric waters suggests that the balance of substances, including the major components of the mineral nutrition of plants (nitrogen, phosphorus, potassium), is significantly disturbed. Under natural conditions, the balance of elements is maintained by the dynamic equilibrium of eluvial and illuvial processes [2].
Hence, over a relatively short period (15 years) the ecosystem responds to man-induced impacts (heavy grazing, illegal deforestation).
The changes mentioned above, caused by an increase in soil acidity, are negative in character, as the migration of substances is accompanied by the disturbance of the dynamic equilibrium of eluvial and illuvial processes in soils, changes in the condition and physico-chemical properties of soils, and changes in the intensity of eluvial processes.
The obtained results support the conclusion that in mountain ecosystems, on deforested plots of the meadow steppe zone under conditions of intense water exchange, the transformation of the soil and its further development are determined by the peculiarities of soil formation processes and the relatively high activity of the ecosystems [2,3,4].
The results obtained from this research provide a comprehensive characterization of the migration and leaching of the major nutrients from the different soil types of the Aragats mountain massif.
In summary, man-made activities affect the soil nutrient and water regimes and trigger changes in mountain forest soils and the consequent leaching of significant quantities of nutrients, which ultimately leads to nutrient deficiency and the destruction of soil structure; the duration and intensity of such activities determine further changes in the ecological conditions of the study region.
added: 2019-04-27T13:09:11.304Z | created: 2018-01-01T00:00:00.000
metadata:
{
"year": 2018,
"sha1": "5eb230aabfa4a019137c43b6511563d71bf307f7",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/107/1/012112",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3405ec730863e0367940824836e00521180d443f",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences",
"Geography"
],
"extfieldsofstudy": [
"Physics",
"Geology"
]
}
id: 17393956 | source: pes2o/s2orc | version: v3-fos-license
Human papillomavirus DNA and p16 expression in Japanese patients with oropharyngeal squamous cell carcinoma
Human papillomavirus (HPV) is a major etiologic factor for oropharyngeal squamous cell carcinoma (OPSCC). However, little is known about HPV-related OPSCC in Japan. In this study, formalin-fixed, paraffin-embedded OPSCC specimens from Japanese patients were analyzed for HPV DNA by the polymerase chain reaction (PCR) and for the surrogate marker p16 by immunohistochemistry. For HPV DNA-positive, p16-negative specimens, the methylation status of the p16 gene promoter was examined by methylation-specific PCR. Overall survival was calculated in relation to HPV DNA and p16 status and was subjected to multivariate analysis. OPSCC cell lines were examined for sensitivity to radiation or cisplatin in vitro. The study results showed that tumor specimens from 40 (38%) of the 104 study patients contained HPV DNA, with such positivity being associated with tumors of the tonsils, lymph node metastasis, and nonsmoking. Overall survival was better for OPSCC patients with HPV DNA than for those without it (hazard ratio, 0.214; 95% confidence interval, 0.074–0.614; P = 0.002). Multivariate analysis revealed HPV DNA to be an independent prognostic factor for overall survival (P = 0.015). Expression of p16 was associated with HPV DNA positivity. However, 20% of HPV DNA-positive tumors were negative for p16, with most of these tumors manifesting DNA methylation at the p16 gene promoter. Radiation and cisplatin sensitivity did not differ between OPSCC cell lines positive or negative for HPV DNA. Thus, positivity for HPV DNA identifies a distinct clinical subset of OPSCC with a more favorable outcome in Japanese patients.
Introduction
Head and neck cancer is the sixth most common cancer worldwide, with an estimated annual incidence of approximately 600,000 cases [1]. Although the incidence of such cancer overall has fallen in recent years, consistent with the decrease in tobacco use, that of oropharyngeal squamous cell carcinoma (OPSCC) has increased in both the United States and Europe. In 2009, the International Agency for Research on Cancer recognized human papillomavirus (HPV) type 16 as a causal agent of OPSCC [2]. Individuals with HPV-positive OPSCC show significantly better overall survival and disease-free survival, associated with a 20-80% reduction in the risk of death, compared with those with HPV-negative OPSCC [3,4]. Knowledge of HPV status in patients with OPSCC is thus expected to play an increasing role in the management of this disease. Epidemiological evidence from several countries indicates that the proportion of OPSCC cases caused by HPV varies widely, however. Although the proportion of OPSCC cases attributable to HPV ranges from 40 to 80% in the United States and is around 90% in Sweden [3,5], little is known about HPV-related OPSCC in Asian populations.
The aim of this study was to evaluate the prevalence, clinical features, and outcome of OPSCC positive for HPV DNA in the Japanese population. We also assessed the concordance between the presence of HPV DNA in tumor specimens and expression of the host cyclin-dependent kinase inhibitor p16 as detected by immunohistochemistry (IHC), given that p16 is commonly examined as a surrogate marker for HPV positivity in OPSCC [3,6], and we further investigated possible mechanisms underlying any discordance. Moreover, to evaluate the biological impact of HPV infection, we examined the sensitivity of OPSCC cell lines positive or negative for HPV DNA to radiation and to cisplatin.
Patients and tissue
With approval of the appropriate institutional review board, we analyzed formalin-fixed, paraffin-embedded (FFPE) tissue from 118 consecutive patients with newly diagnosed and histologically confirmed OPSCC who were treated at Kinki University Hospital from November 2000 through December 2011. Tumor specimens for all cases were obtained during surgery or diagnostic biopsy, and one representative paraffin block was selected for each case. Several 6-μm paraffin sections were used for analysis of HPV DNA, and one 3-μm section was used for p16 IHC. Patients without sufficient tumor tissue available for both analysis of HPV DNA and p16 staining were excluded, leaving 104 patients in the study (Fig. 1).
Clinicopathologic characteristics and outcome data for patients were obtained from the medical records. Treatment modality was selected for each patient individually on the basis of the official published guidelines. Most individuals underwent radiation therapy or radiochemotherapy according to a standard fractionated regimen, receiving 60-70 Gy with or without concomitant platinum-based chemotherapy. Adjuvant radiotherapy (54-64 Gy) was administered with standard fractionation.
Analysis of HPV DNA
The FFPE specimens were depleted of paraffin and then subjected to macrodissection in order to select a region of cancer tissue. Genomic DNA was extracted from the cancer tissue with the use of a QIAamp DNA Micro Kit (Qiagen, Hilden, Germany), and the DNA concentration of each extract was determined with a NanoDrop 2000 spectrophotometer (Thermo Scientific, Waltham, MA). DNA for HPV types 16, 18, 31, 33, and 35 was detected with the use of a TaqMan real-time polymerase chain reaction (PCR)-based method (Applied Biosystems, Foster City, CA) that was designed to amplify the E6 region or E7 region (or both) of the viral genome. The primer and probe sequences for amplification have been described previously [7-9]. Samples of genomic DNA that had sufficient amplifiable β-globin DNA (>1 human genome/μL) were considered to be evaluable, and HPV type was determined for β-globin gene-positive and HPV DNA-positive specimens. We defined active HPV DNA involvement as PCR detection at the level of at least one copy per 10 cell genomes [7]. PCR analysis was performed in duplicate.
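To make the positivity threshold concrete, the viral load per cell genome can be derived from the qPCR copy numbers of HPV and β-globin, with the number of cell genomes estimated from β-globin copies (two alleles per diploid genome). The following Python sketch is illustrative only; it is not the authors' code, and the copy numbers are invented.

```python
# Hypothetical sketch of the "active HPV involvement" call described above:
# positive if there is >= 1 viral copy per 10 cell genomes. A diploid cell
# genome carries two beta-globin alleles, so cell genomes = beta-globin / 2.

def hpv_per_cell_genome(hpv_copies: float, beta_globin_copies: float) -> float:
    cell_genomes = beta_globin_copies / 2.0
    return hpv_copies / cell_genomes

def is_active_involvement(hpv_copies: float, beta_globin_copies: float) -> bool:
    return hpv_per_cell_genome(hpv_copies, beta_globin_copies) >= 0.1

# Invented qPCR copy numbers: 5,000 HPV copies, 40,000 beta-globin copies
print(is_active_involvement(5_000, 40_000))  # True: 0.25 copies per genome
```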
IHC for detection of p16 expression
Immunohistochemistry for p16 was performed with the use of a CINtec Histology Kit (MTM Laboratories AG, Heidelberg, Germany) based on the monoclonal antibody E6H4. A tonsil squamous cell carcinoma with a high level of p16 expression was used as a positive control, and the primary antibody was omitted as a negative control. Expression of p16 was scored positive if strong and diffuse nuclear and cytoplasmic staining was present in >70% of the tumor cells [10], and p16 scoring was performed without knowledge of HPV status. Representative p16 IHC images are shown in Figure 2.
Methylation-specific-PCR analysis
For assessment of DNA methylation at the p16 gene promoter, genomic DNA samples were subjected to sodium bisulfite modification with the use of a MethylEasy Xceed Rapid DNA Bisulfite Modification Kit (Human Genetic Signatures, Randwick, NSW, Australia). The modified DNA was then used as a template for methylation-specific (MS)-PCR with primers specific for methylated or unmethylated sequences [11]. The sizes of the MS-PCR products were previously described [12]. Real-time MS-PCR analysis was performed in a 25-μL reaction mixture with the use of an EpiScope MSP Kit (Clontech, Mountain View, CA). EpiScope Methylated HCT116 gDNA and EpiScope Unmethylated HCT116 DKO gDNA (Clontech) were used as positive and negative controls, respectively.
Clonogenic survival assay
Exponentially growing cells in 25-cm² flasks were harvested by exposure to trypsin and counted. They were diluted serially to appropriate densities, plated in triplicate in 25-cm² flasks containing 10 mL of complete medium, and exposed at room temperature to various doses of radiation with a ⁶⁰Co irradiator at a rate of ~0.82 Gy/min. The cells were cultured for 14-21 days, fixed with methanol:acetic acid (10:1, v/v), and stained with crystal violet. Colonies containing >50 cells were counted. The surviving fraction was calculated as: (mean number of colonies)/(number of plated cells × plating efficiency). Plating efficiency was defined as the mean number of colonies divided by the number of plated cells for corresponding nonirradiated cells.
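A worked example of these two formulas may help; the colony counts below are invented for illustration and are not data from the study.

```python
# Minimal sketch of the clonogenic-survival calculation described above.
# All counts are invented.

def plating_efficiency(colonies_unirradiated: float, cells_plated: float) -> float:
    """Fraction of plated, non-irradiated cells that form colonies."""
    return colonies_unirradiated / cells_plated

def surviving_fraction(colonies: float, cells_plated: float, pe: float) -> float:
    """Colonies formed relative to the number of viable cells plated."""
    return colonies / (cells_plated * pe)

pe = plating_efficiency(colonies_unirradiated=180, cells_plated=300)  # PE = 0.60
sf = surviving_fraction(colonies=45, cells_plated=500, pe=pe)         # SF = 0.15
print(f"PE = {pe:.2f}, surviving fraction at this dose = {sf:.2f}")
```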
Cell growth inhibition assay
Cells were transferred to 96-well flat-bottomed plates and cultured for 24 h before exposure to various concentrations of cisplatin for 72 h. Cell Counting Kit-8 solution (Dojindo, Kumamoto, Japan) was then added to each well, and the cells were incubated for 3 h at 37°C before measurement of absorbance at 490 nm with a Multiskan Spectrum instrument (Thermo Labsystems, Boston, MA). Absorbance values were expressed as a percentage of that for nontreated cells, and the median inhibitory concentration (IC50) of cisplatin for inhibition of cell growth was determined.
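The article does not specify how the IC50 was computed from the absorbance curves; a common approach is to fit a four-parameter logistic model to the dose-response data, as in this hedged Python sketch with invented viability values.

```python
# Hypothetical IC50 estimation by four-parameter logistic (4PL) fitting.
# Not the authors' procedure; doses and viabilities below are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """4PL curve: viability as a function of drug dose."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])          # cisplatin, invented units
viability = np.array([0.98, 0.92, 0.75, 0.45, 0.20, 0.08])  # fraction of untreated

params, _ = curve_fit(four_pl, doses, viability, p0=[0.0, 1.0, 2.0, 1.0],
                      bounds=([0.0, 0.5, 1e-3, 0.1], [0.5, 1.5, 100.0, 5.0]))
print(f"Estimated IC50 ~ {params[2]:.2f}")
```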
Statistical analysis
Patient characteristics were compared between individuals positive or negative for HPV DNA with Student's two-tailed t-test or the chi-square test. Survival curves were constructed by the Kaplan-Meier method and were compared with the log-rank test. The impact of various factors on survival was evaluated by multivariate analysis according to the Cox regression model. Concordance between HPV DNA and p16 assay results was assessed with the kappa statistic (κ) and Spearman correlation. Statistical analysis was performed with the use of IBM SPSS Statistics software version 20 (SPSS Inc., IBM, Chicago, IL). A P-value of <0.05 was considered statistically significant.
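The same analyses can be reproduced with open-source tools. The sketch below uses the lifelines and scikit-learn Python packages as a stand-in for the SPSS workflow described above; the follow-up times, event indicators, and assay calls are invented toy values.

```python
# Hedged sketch of the survival and agreement analyses (invented toy data).
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test
from sklearn.metrics import cohen_kappa_score

df = pd.DataFrame({
    "months":  [12, 30, 45, 8, 60, 22, 50, 15],  # follow-up time
    "death":   [1, 0, 1, 1, 0, 0, 0, 1],         # 1 = event observed
    "hpv_dna": [0, 1, 1, 0, 1, 0, 1, 0],         # HPV DNA status
})

# Log-rank comparison of overall survival by HPV DNA status
pos, neg = df[df.hpv_dna == 1], df[df.hpv_dna == 0]
print(logrank_test(pos.months, neg.months, pos.death, neg.death).p_value)

# Cox regression: hazard ratio for HPV DNA positivity
cph = CoxPHFitter().fit(df, duration_col="months", event_col="death")
cph.print_summary()

# Agreement between HPV DNA and p16 IHC calls (Cohen's kappa)
hpv_calls = [1, 1, 0, 0, 1, 0, 1, 0]
p16_calls = [1, 0, 0, 0, 1, 0, 1, 1]
print(cohen_kappa_score(hpv_calls, p16_calls))
```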
Patient characteristics
The characteristics of the 104 studied patients are listed in Table 1. The median age of the patients was 64 years, with a range from 35 to 80 years, and most of them were male patients (78%) and had stage IV disease (74%).
[Table 1 notes: 1, comparison of patients who never smoked versus patients with a smoking history; 2, comparison between tonsil and other sites; 3, RT(+), treatment with radiation, including radiation therapy alone (n = 3), chemoradiotherapy alone (n = 13), or surgery followed by radiation therapy (n = 46) or by chemoradiotherapy (n = 23); RT(−), treatment without radiation, including surgery alone (n = 11), surgery followed by chemotherapy (n = 1), chemotherapy alone (n = 4), and best supportive care (n = 3).]
Presence of HPV DNA and p16 expression in OPSCC
Of the 104 tumor specimens, 40 (38%) were positive for HPV-16 or HPV-18 DNA by PCR analysis (Fig. 1). These 40 tumors included 37 positive for HPV-16 alone, two positive for HPV-18 alone, and one positive for both HPV-16 and HPV-18. HPV DNA was detected more frequently in the tonsils (P = 0.002) than in other regions (Table 1). Patients positive for HPV DNA presented significantly more often with lymph node metastasis (85 vs. 64%, P = 0.021) and included a higher proportion of never-smokers (55 vs. 30%, P = 0.010) compared with those negative for HPV. There was no significant association between HPV DNA status and gender, age, T classification, or disease stage. Expression of p16 was detected by IHC in a total of 39 tumors (Fig. 2). Of the 40 cases positive for HPV DNA, 32 (80%) were positive for p16, whereas 57 (89%) of the 64 cases negative for HPV DNA were also negative for p16 (Fig. 1). There was thus good agreement between HPV DNA positivity and p16 positivity (κ = 0.65; 95% confidence interval [CI], from 0.50 to 0.80; r = 0.631; P < 0.001).
DNA methylation at the p16 gene promoter in OPSCC
To identify the underlying mechanism of p16 gene silencing in tumors positive for HPV DNA but negative for p16 expression, we examined the DNA methylation status of the p16 gene promoter region with the use of MS-PCR analysis. Among the eight such cases, DNA methylation at the p16 gene promoter was detected in six (cases 66, 69, 71, 82, 96, and 106) (Fig. 3).
Survival analysis
Oropharyngeal squamous cell carcinoma patients positive for HPV DNA showed a significantly better overall survival compared with those negative for HPV DNA [hazard ratio (HR), 0.214; 95% CI, from 0.074 to 0.614; P = 0.002] (Fig. 4A). For OPSCC of stages I to III, HPV-positive patients tended to have a better overall survival compared with their HPV-negative counterparts, but the difference was not statistically significant (P = 0.129), possibly because of the small sample size (n = 27) (Fig. S1A). On the other hand, for OPSCC of stage IV (n = 77), patients with HPV DNA showed a significantly better overall survival than did those without it (P = 0.002) (Fig. S1B). Stratification based on p16 expression also revealed a significantly better outcome for OPSCC patients positive for p16 than for those negative for this marker (HR, 0.245; 95% CI, from 0.085 to 0.705; P = 0.005) (Fig. 4B). To rule out potential confounding effects for the presence of HPV DNA and other factors, we performed multivariate analysis for overall survival (Table 2). The presence of HPV DNA was revealed to be an independent and significant prognostic factor for overall survival (HR, 0.248; 95% CI, from 0.080 to 0.766; P = 0.015) after taking into account gender, age, T and N classification, smoking history, tumor location, and radiation therapy.
Sensitivity of OPSCC cell lines with or without HPV DNA to radiation and cisplatin
We next investigated the biological impact of HPV DNA status with OPSCC cell lines positive (UPCI-SCC-090, -152, and -154) or negative (UPCI-SCC-003, -036, and -089) for HPV DNA. A clonogenic survival assay performed after exposure of the cells to various doses of radiation revealed no significant difference in survival between the cell lines positive or negative for HPV DNA (Fig. 5A). We also examined the effect of cisplatin on the growth of the cell lines, again detecting no difference in the IC50 value of cisplatin between those positive or negative for HPV DNA (Fig. 5B, Table 3).
Discussion
In this study, we applied PCR-based detection of viral DNA and IHC-based detection of p16 to tumor specimens from Japanese patients with OPSCC, given that this combination of approaches is the most reliable means to determine HPV status, with a sensitivity of 97% and specificity of 94% [13]. We found that 38% of the patients were positive for HPV DNA, consistent with recent studies that detected HPV DNA in 30-50% of OPSCC patients in Asian countries [14-16]. In the United States, the incidence of HPV-positive OPSCC increased by 225% from the late 1980s to the early 2000s [17], with 40-80% of OPSCCs now being caused by HPV [3]. This increase is thought to have resulted from the decrease in tobacco use and increased oral HPV exposure due to changes in sexual behavior among recent birth cohorts [3,4]. As in other Asian countries, the prevalence of smoking in Japan is much higher than that in the United States, especially among men (32 vs. 17%) [18]. The lower proportion of OPSCC cases associated with HPV in Asian countries compared with Western countries might therefore be attributable, at least in part, to the difference in tobacco exposure. Given that the proportion of active smokers has recently been decreasing each year in Japan, the proportion of OPSCCs related to HPV in the Japanese population is likely to increase. We found that overall survival for Japanese OPSCC patients positive for HPV DNA was significantly better than that for those negative for HPV DNA. The presence of HPV DNA was associated mostly with tumors of the palatine tonsils, lymph node metastasis, and nonsmoking. HPV-positive OPSCC was more frequent in younger individuals than was HPV-negative OPSCC, but the difference was not significant, possibly due to the relatively small sample size. These results are consistent with those for OPSCC in the United States and Europe [3,4], suggesting similarity in the features of HPV-associated OPSCC between Japan and Western countries.
The reason for the more favorable prognosis of HPV-associated OPSCC remains unclear, although it may be related to a younger age at onset, minimal exposure to established risk factors such as cigarette smoking, or a better response to therapy [3,19]. Indeed, recent studies have provided evidence that HPV-positive OPSCC shows a better response to chemotherapy [20,21] or to radiotherapy either alone [22,23] or in combination with chemotherapy [20,21,24,25]. Although these findings are suggestive of an inherent radio- or chemosensitivity of HPV-positive OPSCC, we did not detect a difference in sensitivity to radiation or cisplatin in vitro between OPSCC cell lines positive or negative for HPV DNA. This apparent discrepancy between the in vitro and clinical data might be due to the limitations of in vitro assays, which do not accurately reflect the tumor microenvironment in vivo. Further study is thus needed to determine the molecular mechanism underlying the favorable outcome of patients with HPV-positive OPSCC, with the prospect that such knowledge might inform the development of therapeutic approaches to improve the poor prognosis of those with HPV-negative OPSCC.
In HPV-positive OPSCC, production of the viral oncoprotein E7 results in inactivation of the retinoblastoma (RB) protein and consequent upregulation of p16 expression [3,[26][27][28]. IHC positivity for p16 is thus associated with HPV-positive OPSCC, being regarded as a surrogate marker for HPV infection in such tumors [3,6]. We also found a significant correlation between positivity for HPV DNA and IHC-based detection of p16 in Japanese patients with OPSCC, and the results of survival analysis based on p16 status as a stratification factor were similar to those of such analysis based on HPV DNA status.
Although most HPV-associated OPSCC tumors express p16, we found that 20% of HPV DNA-positive tumors (eight cases) were negative for p16 by IHC. A similar level of discordance was observed in previous studies based on the same approaches for detection of HPV DNA and p16 [7,13,29], although the underlying mechanism remains largely unknown. Given that DNA methylation at the p16 gene promoter has been identified as a key mechanism of p16 gene silencing in various types of primary tumor [30], we analyzed the methylation status of the p16 gene promoter in the eight tumors positive for HPV DNA but negative for p16 in this study with the use of MS-PCR analysis. We found a high frequency (6/8, 75%) of DNA methylation at the p16 gene promoter in these cases. As far as we are aware, this is the first demonstration of DNA methylation at the p16 gene promoter in OPSCC tumors positive for HPV DNA but negative for p16 by IHC. A recent meta-analysis showed that heavy cigarette consumption was associated with p16 gene methylation in patients with non-small cell lung cancer [12]. In this study, among the HPV DNA-positive subgroup, patients with tumors negative for p16 expression had a significantly more extensive smoking history than those with tumors positive for p16 (P < 0.001, Student's two-tailed t-test), suggesting that heavy smoking might be responsible, at least in part, for DNA methylation at the p16 gene promoter and a consequent loss of p16 expression. Consistent with the results of a previous study [7], we also found that the survival of patients with HPV DNA-positive, p16-negative tumors was not as good as that of those with HPV DNA-positive, p16-positive tumors (data not shown). These data thus suggest that IHC-based detection of p16 provides suboptimal prognostic information unless combined with PCR-based detection of HPV DNA.
Seven (11%) of the 64 HPV DNA-negative tumors in this study were positive for p16 by IHC. Given that the HPV DNA analysis was initially restricted to HPV types 16 and 18, we further investigated the possible presence of DNA for other high-risk types of HPV (types 31, 33, and 35), which, together with types 16 and 18, account for most cases of HPV-associated OPSCC [8,13,31]. However, none of the seven HPV DNA-negative, p16-positive tumors was found to be positive for these other high-risk types of HPV (data not shown). Similar results have been obtained in previous studies based on detection of HPV by PCR or in situ hybridization [19], with a discordance rate of ~10-20%. Expression of p16 in such HPV DNA-negative tumors might reflect disturbances of the RB signaling pathway unrelated to HPV infection, as has been found to be the case in malignant lymphoma and small cell lung cancer [32]. The mechanism of p16 expression in the absence of detectable HPV DNA in OPSCC warrants further investigation. Two prophylactic HPV vaccines against HPV types 6, 11, 16, and 18 (quadrivalent) or HPV types 16 and 18 (bivalent) have shown clinical efficacy for prevention of HPV-related cervical cancer [33] and anal cancer [34]. Both vaccines thus target HPV type 16, which accounts for >90% of HPV-associated OPSCCs [4]. Given the causal relation between HPV infection and OPSCC, clinical evaluation of the potential efficacy of HPV vaccines for reducing the incidence of HPV-associated OPSCC is warranted.
In conclusion, we found that 38% of Japanese patients with OPSCC are positive for HPV DNA, with such positivity being an independent prognostic factor for overall survival. Given that expression of p16 can be affected by genetic or epigenetic changes in addition to HPV infection, our results suggest that IHC-based detection of p16 provides suboptimal prognostic information if not combined with detection of HPV DNA. Further clinical studies are warranted to characterize the mechanism underlying the survival benefit conferred by HPV positivity in patients with OPSCC as well as to identify optimal treatments for this patient population.
added: 2018-04-03T01:38:58.983Z | created: 2013-10-27T00:00:00.000
metadata:
{
"year": 2013,
"sha1": "b07853d6393dbfec85e123cfa97c5ce4d6a290e6",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cam4.151",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b07853d6393dbfec85e123cfa97c5ce4d6a290e6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
id: 258171846 | source: pes2o/s2orc | version: v3-fos-license
Editorial: New trends in biomimetic tissue and organ modelling
Editorial on the Research Topic New trends in biomimetic tissue and organ modelling
New and emerging technologies for 3D cell culture and tissue engineering are reshaping biomimetic tissue and organ modelling. Through this Research Topic, contributors provide new insight into a variety of technologies that relate to different tissue and organ targets and that control and analyse cellular and extracellular architecture and function for reliable biomimetic modelling and future potential therapy. It is well recognized that tissue architecture is intimately related to tissue function in vivo. It follows that engineering the extracellular microenvironment via mechanical, topographical, and molecular factors dictates cell-matrix and cell-cell interaction and behaviour. To these ends, different natural and synthetic biomaterials, acellular matrices, and micropatterning are being applied towards better reproduction and/or modulation of cell-cell and cell-matrix contacts.
The important role of stem cells as the preferential cell source for in vitro model development is also highlighted, owing in part to their availability and potential for personalised medicine. In particular, it is recognised that human pluripotent stem cells (hPSCs) can be used to recapitulate in vitro the cellular heterogeneity of native tissues. Nevertheless, additional considerations are required to precisely control the environment that stem cells experience in order to achieve the desired in vivo-like phenotypes, further underscoring the important role of the extracellular microenvironment and cell-extracellular matrix (ECM) interaction for optimal tissue ultrastructure, biomechanical features, and bioinductive capabilities.
Mutepfa et al. provided a review of neural stem cell therapy, including the potential for using engineered functionalized biomaterials to treat spinal cord injury (SCI). Despite recent advances in medicine for SCI patients, there have been no definitive findings toward complete functional neurologic recovery. The cellular and structural complexity of SCI underlies the challenge, together with current limitations to using stem cells due to poor control of cell differentiation fates and survival, and integration of transplant cells in the host. Nonetheless, the combination of stem cells with biomaterials presenting mechanical and electroactive properties typical of the spinal cord represents one of the most promising strategies for treating SCI.
The review article by Hong provides an overview of approaches to enhance stem cell performance for tissue engineering using scaffolds, bioinks, membranes, as well as natural and synthetic biomaterials. They highlight the key properties of biomaterials for building a target tissue from stem cells by better engineering the complex in vivo cell microenvironment. More specifically, the authors consider biomaterials for engineering skin, bone, spinal cord, vascularisation, trachea and reproductive tract, as well as introducing nanotechnologies for finer architectural engineering and 3D bioprinting for clinical translation.
Yang et al. describe micropatterning technology to investigate tissue patterning, germ layer specification, and cell sorting of hPSCs. They showed that hPSCs self-organize to form a radially regionalized neural and non-central nervous system (CNS) ectoderm able to model human ectodermal patterning in vitro. The appearance and spatial distribution of the different ectodermal populations derived from hPSCs can be regulated by modulating BMP and WNT signalling within the micropatterned cell culture platforms. Finally, they used their in vitro model to dissect the selective cell-sorting behavior of human meso-endoderm cells once seeded onto a pre-patterned ectoderm. They concluded that endoderm, but not mesoderm, segregates from the neural ectoderm, preferentially occupying regions of the non-CNS ectoderm. These findings provide new insight into the cell-cell interactions occurring during human embryogenesis.
Carraro et al. reviewed the current role of 3D in vitro models in the context of skeletal muscle-related pathologies and how they differ from traditional 2D monolayer cultures. The authors described the different cell types present in skeletal muscle and how their spatial organization is recapitulated within in vitro 3D constructs. Moreover, they stress the role of the ECM as an essential constituent for engineering biomimetic muscles. The article provides an in-depth analysis of the technological challenges in developing 3D in vitro models of skeletal muscle, including: i) the availability of a reliable cell source; ii) the role played by hydrogels in promoting cellular self-organization; iii) the progress of 3D bioprinting for designing tissue architecture; and iv) the need to mimic mechanical and electrical cues. In conclusion, the article emphasises the importance of 3D in vitro models in reproducing not only the cellular component of skeletal muscle but also the ECM context for studying specific myopathies.
Finally, the research article by Palmosi et al. reports on the isolation and characterisation of decellularised small intestinal submucosa (dSIS)-derived ECM from pigs to promote cardiac cell function. The dSIS-ECM was tested with human umbilical vein endothelial cells (HUVECs) for live/dead response, as well as to assess tube formation and their ability to promote endothelial cell networks. Finally, proteomic analysis indicated a role played by dSIS on angiogenesis and cell adhesion molecules (i.e., fibronectin). Future in vivo studies will be required to further determine the potential translation of dSIS-ECM from the bench to the bedside.
In summary, this Research Topic comprises both novel research and review articles relating to the most recent advances in high-fidelity in vitro human tissue and organ modelling. Notwithstanding progress, there remains a need for new strategies to better engineer the complex tissue microenvironment. To this end, a new range of synthetic and/or semi-synthetic biomaterials may help to better tailor features typical of native tissues and organs at the nanoscale, with the potential also to better control their function and application in vitro and in vivo, which is critical for clinical translation.
added: 2023-04-17T13:10:36.654Z | created: 2023-04-17T00:00:00.000
metadata:
{
"year": 2023,
"sha1": "2dc0d2d9885f0165ed1715b771e14438c11a3965",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "2dc0d2d9885f0165ed1715b771e14438c11a3965",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
id: 250114479 | source: pes2o/s2orc | version: v3-fos-license
High Pathogenicity of a Chinese NADC34-like PRRSV on Pigs
ABSTRACT NADC34-like porcine reproductive and respiratory syndrome virus (PRRSV) has been reported to be prevalent in China since 2018 and has become one of the main epidemic strains in some areas of the country. Yet the pathogenicity of NADC34-like PRRSV has seldom been investigated by experimental infection. In this study, we infected pigs with JS2021NADC34 PRRSV, a Chinese NADC34-like PRRSV isolated in Jiangsu province in 2021, to study the pathogenicity of this virus strain. Pigs infected with this virus had lasting fever and reduced body weight, with high morbidity and mortality. Histopathological changes, including interstitial pneumonia, lymphocyte depletion, acute hemorrhage, and infiltration of neutrophils in the lymphoid tissues, were observed, with viral proteins detected by immunohistochemistry staining using a PRRSV-specific antibody. These results suggest that JS2021NADC34 PRRSV is highly pathogenic to pigs. As this is the latest emerging PRRSV strain in China, the prevalence and pathogenicity of NADC34-like PRRSV need to be further investigated. IMPORTANCE NADC34 PRRSV was initially reported in the United States in 2018. Subsequently, this virus strain spread to other countries, including Peru, South Korea, and China. The virus was first found circulating in Northeast China and then spread to more than 10 provinces. NADC34 PRRSV causes severe abortion in sows and high mortality in piglets, leading to huge economic losses for the Chinese pig industry. However, the pathogenicity of NADC34 PRRSV has rarely been evaluated experimentally in pigs. In this study, pigs were infected with JS2021NADC34 PRRSV, a Chinese NADC34-like PRRSV isolated in Jiangsu province in 2021. The infected pigs had lasting fever and reduced body weight, with high morbidity and mortality. Interstitial pneumonia, lymphocyte depletion, acute hemorrhage, and infiltration of neutrophils were observed in the lymphoid tissues, and a high virus load was demonstrated by immunohistochemistry staining. These results indicate that NADC34 PRRSV is highly pathogenic in pigs.
Although it causes serious problems under clinical conditions, the pathogenicity of NADC34-like PRRSV has seldom been tested by experimental infection. Song et al. reported that HLJDZD32-1901, an NADC34-like PRRSV, was mildly pathogenic in piglets (6). In contrast, another Chinese NADC34-like PRRSV was reported to be highly pathogenic to pigs (9). In our previous study, we isolated an NADC34-like PRRSV, designated JS2021NADC34, from a pig farm in Jiangsu province where sows suffered increased abortions and piglets had high mortality (8). Therefore, in this study, we tested the pathogenicity of this virus in pigs by experimental infection.
RESULTS
JS2021NADC34 PRRSV replicated efficiently in PAMs with a high virus titer. The JS2021NADC34 strain was previously isolated from the lung samples of diseased pigs and purified by three rounds of plaque assay (8). In contrast to other reported Chinese NADC34-like PRRSVs, the virus replicated in porcine alveolar macrophages (PAMs) with apparent cytopathic effect (CPE) (Fig. 1A) but not in Marc-145 cells (data not shown). The highest virus titer of JS2021NADC34 in PAMs was 6.46 × 10⁶ 50% tissue culture infective doses (TCID50)/mL at 48 h postinfection (hpi) (Fig. 1B and C). The virus titer dropped to below 5.55 × 10⁶ TCID50/mL at 60 hpi. Therefore, the virus was harvested at 48 hpi for further use.
Clinical presentations of pigs after JS2021NADC34 PRRSV infection. The pathogenicity of reported NADC34-like PRRSVs differs greatly, with mortality rates ranging from 0% (0/5 pigs) to 14.3% (2/14 pigs) owing to the different origins of the viruses (4,6). To test the pathogenicity of JS2021NADC34 PRRSV, eight 2-month-old pigs were randomly divided into two groups of four. Pigs in one group were infected with JS2021NADC34 PRRSV at 3 × 10⁶ TCID50/pig via the intranasal (0.5 mL per nostril) and intramuscular (2 mL) routes simultaneously. Pigs in the control group received Dulbecco modified Eagle medium (DMEM) as a placebo. After virus infection, the pigs in the infected group showed high body temperatures, above 40.5°C, at 1 dpi (Fig. 3A). The body temperatures of the infected pigs dropped slightly over the next few days and then rose back above 41°C, remaining elevated until 10 dpi. Two infected pigs were euthanized because of their moribund condition at 8 dpi, and one was found dead at 10 dpi. The last pig in the infected group survived and was euthanized at the end of the study, which was terminated at 14 dpi. The body temperatures of pigs in the control group were normal throughout the study.
The daily body weight gain of pigs was calculated as described previously (10). As shown in Fig. 3B, the JS2021NADC34 PRRSV-infected pigs lost more than 0.4 kg of body weight per day. In contrast, the mock-infected pigs had more than 0.2 kg of body weight gain per day. Clinically, JS2021NADC34 PRRSV-infected pigs showed dehydration, respiratory distress, and shivering. At necropsy, the infected pigs had severe pulmonary consolidation and necrosis in the lung and hemorrhage and necrosis in the tonsil and lymph nodes (Fig. 4, upper row). In contrast, the above-described tissues of pigs in the control group looked normal (Fig. 4, lower row).
Histopathological and immunohistochemistry examinations. Histopathological examination was next performed to evaluate the tissue damage caused by viral infection. As shown in Fig. 5, interstitial pneumonia associated with hemorrhage, which was characterized by thickening of alveolar septa, and infiltration of mononuclear cells were observed in JS2021NADC34 PRRSV-infected pigs. Lymphocyte depletion, acute hemorrhage, and infiltration of neutrophils were observed in lymph nodes and tonsils. No pathological lesions were observed in the above-described tissues of pigs in the control group.
Monoclonal antibody specific to nucleocapsid protein of PRRSV was used in immunohistochemistry (IHC) staining to reveal the presence of viral antigen in the pig tissues. As shown in Fig. 6, positively stained epithelial cells and macrophages were observed in the lung, tonsil, and lymphoid samples of pigs infected with JS2021NADC34 PRRSV. In contrast, no positive staining was observed in the above-described tissues of pigs in control group.
Viremia examination and serological test. Pig serum samples were collected at 3, 7, 9, and 14 dpi for viremia examination. As shown in Fig. 7A, viremia was detectable at 3 dpi and reached its maximum at 7 dpi. Thereafter, viremia decreased slightly, and the viremia of the one surviving pig dropped to 1.19 × 10⁷ copies of PRRSV RNA/mL at 14 dpi. As expected, no viremia was detected in the pigs of the control group.
PRRSV-specific antibodies after viral infection were measured using a commercial IDEXX enzyme-linked immunosorbent assay (ELISA) kit. As shown in Fig. 7B, all pigs in the JS2021NADC34 PRRSV-infected group had positive PRRSV-specific antibodies at 7 dpi. The antibody titers of the surviving pigs in this group kept increasing until 14 dpi. As expected, no PRRSV antibodies were detected in pigs of the control group.
(11,12). In China, a 1-7-4 lineage PRRSV was isolated in 2018 and designated NADC34-like PRRSV owing to its highest genomic similarity with IA/2014/NADC34 (NADC34) (3). Since then, NADC34-like PRRSVs have spread to at least nine provinces, including Heilongjiang, Jilin, Liaoning, Hebei, Henan, Shandong, Jiangsu, Sichuan, and Fujian (13,14). Song et al. evaluated the pathogenicity of a Chinese NADC34-like PRRSV (6). In their study, pigs infected with HLJDZD32-1901, an NADC34-like PRRSV isolated in Heilongjiang in 2019, displayed mild clinical signs, including mild cough and anorexia, without fever (below 39.5°C) or death. The results indicated that the Chinese NADC34-like PRRSV HLJDZD32-1901 is a mildly pathogenic strain in piglets. In the animal study performed by van Geelen et al., by comparison, pigs infected with IA/2014/NADC34 PRRSV had a persistent fever (>40°C) from 3 to 12 dpi and a mortality rate of 14.28% (2 of 14 pigs, which died at 9 and 12 dpi, respectively), demonstrating the high pathogenicity of IA/2014/NADC34 PRRSV (4). Since these two PRRSVs share high genomic similarity without any recombination with other PRRSV strains, the discrepancy in pathogenicity between the U.S. NADC34 PRRSV and the Chinese NADC34-like PRRSV could be attributed to the origin of the viruses and the ages of the pigs (3-week-old pigs versus 5-week-old pigs).
To further explore the pathogenicity of Chinese NADC34-like PRRSV, we tested the pathogenicity of JS2021NADC34 PRRSV, a Chinese NADC34-like PRRSV isolated in Jiangsu province of China in 2021 (8). Similar to the IA/2014/NADC34 and HLJDZD32-1901 PRRSVs, the JS2021NADC34 strain shows no recombination with other PRRSVs (data not shown). PAMs infected with JS2021NADC34 PRRSV showed apparent CPE at 24 hpi with high virus titers (Fig. 1A to C), which indicated that this virus strain has adapted well to PAMs and could be highly pathogenic to pigs. As expected, pigs infected with JS2021NADC34 PRRSV had a lasting high fever with severe clinical signs, including dehydration, respiratory distress, and shivering. Two of four pigs were moribund at 8 dpi, and one pig died at 10 dpi. Gross and histopathological examination results also supported the high pathogenicity of JS2021NADC34 PRRSV in pigs. Of note, 2-month-old pigs were used in our study, compared with the 3-week-old and 5-week-old pigs used in the two above-mentioned studies, which further reveals the high pathogenicity of JS2021NADC34 PRRSV in older pigs. As mentioned above, both JS2021NADC34 PRRSV and IA/2014/NADC34 PRRSV had higher pathogenicity in pigs than HLJDZD32-1901 did. In addition to the difference in pig ages, other factors, including the breed lineage and genetic traits of the experimental pigs, could contribute to the disparity in results. Mixed-breed pigs were used in the IA/2014/NADC34 PRRSV infection experiments, and Large White-Duroc crossbred PRRSV-free pigs were used in our study. There was no information about pig breed in Song's study (6). Several Chinese NADC34-like PRRSVs reported before 2022 were found to have recombination with U.S. PRRSV strains, including IA/2014/NADC34, ISU30, and NADC30 (2). Most recently, one Chinese NADC34-like PRRSV was found to have recombination with the local PRRSV strain QYYZ, indicating its rapid evolution to better adapt to domestic pigs (9). In addition to recombination events, patterns of amino acid (aa) deletions different from the 100-aa deletion in NSP2 have also been observed, which make NADC34-like PRRSVs more complex in the field (2). Consistent with its genomic changes conferring better fitness for domestic pigs, NADC34-like PRRSV has become one of the main epidemic strains in some areas of China (13). Therefore, the characteristics and pathogenicity of NADC34-like PRRSVs warrant further study.
FIG 4 Gross pathology of lung, lymph node, and tonsil of pigs. Upper row: severe pulmonary consolidation and necrosis in the lung samples of pigs infected with JS2021NADC34 PRRSV, and hemorrhage and necrosis in the lymph node and tonsil. Lower row: normal lung, lymph node, and tonsil of pigs in the control group.
MATERIALS AND METHODS
Virus and cells. JS2021NADC34 PRRSV was isolated as described previously (8). The virus was purified by three rounds of plaque assay. Porcine alveolar macrophages (PAMs) were obtained from 4-week-old specific-pathogen-free (SPF) pigs and cultured in RPMI 1640 medium (Gibco BRL Co., Ltd., USA) supplemented with 10% fetal bovine serum at 37°C in 5% CO2. The viral supernatants were collected from infected cells at the indicated time points and titrated by TCID50 and quantitative reverse transcription PCR (RT-PCR) as described previously (15).
PRRSV phylogenetic analysis. For phylogenetic analysis, the ORF5 gene sequences of PRRSV strains in different lineages/sublineages (Table S1) were aligned by MUSCLE using MEGA-X. A phylogenetic tree was constructed using the neighbor-joining method with a bootstrap value of 1,000 replicates.
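For context, TCID50 titers such as those reported in the Results are commonly computed with the Reed-Muench method. The study cites its own titration protocol (15), so the following Python sketch, with invented well counts, only illustrates that general approach rather than the authors' exact procedure.

```python
# Hedged sketch of a Reed-Muench TCID50 calculation (invented counts).
# Dilutions run from least to most dilute; 8 wells are read per dilution.
log10_dilutions = [-4, -5, -6, -7]
infected = [8, 6, 2, 0]
wells = 8

# Cumulative totals as in Reed & Muench: infected wells accumulate toward
# the less dilute end, uninfected wells toward the more dilute end.
cum_inf = [sum(infected[i:]) for i in range(len(infected))]
cum_uninf = [sum(wells - x for x in infected[:i + 1]) for i in range(len(infected))]
pct = [100 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]

# Interpolate the dilution at which 50% of wells are infected
i = max(k for k, p in enumerate(pct) if p >= 50)
prop = (pct[i] - 50) / (pct[i] - pct[i + 1])
log10_endpoint = log10_dilutions[i] - prop  # here: -5.5
print(f"Titer = 10^{-log10_endpoint:.1f} TCID50 per inoculum volume")
```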
Animals and experimental design. Eight 2-month-old pigs free of pseudorabies virus (PRV), PRRSV, classical swine fever virus (CSFV), and porcine circovirus 2 (PCV2) were randomly divided into two groups of four. Pigs in the first group were infected with JS2021NADC34 PRRSV at 3 × 10⁶ TCID50/pig via the intranasal (0.5 mL per nostril) and intramuscular (2 mL) routes simultaneously. Pigs in the second group received the same volume of DMEM via the same routes as a placebo. Pigs were monitored daily for rectal temperature and clinical signs after viral infection. Pigs were humanely euthanized when they became moribund or at the end of the study, which was terminated at 14 dpi. All animal experiments were approved by the Institutional Animal Care and Use Committee, and conventional animal welfare regulations and standards were taken into account.
Histopathology and immunohistochemistry staining. Lung, lymph node, and tonsil samples were collected at necropsy. These samples were fixed in 10% buffered neutral formalin for hematoxylin and eosin and immunohistochemistry staining as described previously (10). Staining was performed automatically on a Leica automated staining system. The anti-PRRSV N (4A5) antibody (MEDIAN, Republic of Korea) was used for immunohistochemistry staining. Slides were photographed under the microscope at ×200 magnification.
Viremia and serological test. Blood samples of pigs were collected at 0, 3, 7, 9, and 14 dpi for detection of viremia and PRRSV-specific antibody. Total RNA was extracted from serum samples by using an RNeasy minikit (Qiagen, Germany) according to the manufacturer's instructions. The quantitative PCR (qPCR) was performed as described previously (15).
PRRSV-specific ELISA antibody titers were measured using the Herdcheck PRRSV X3 antibody test (IDEXX Laboratories, Westbrook, ME) as described by the manufacturer. The PRRSV-specific antibody titer was reported as the sample-to-positive (S/P) ratio; serum samples with an S/P ratio of 0.4 or higher were considered positive.
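As context for the cutoff, the S/P ratio is conventionally derived from the optical densities of the sample and the kit's positive and negative controls. A minimal sketch under that assumption (the exact formula is defined by the kit manufacturer, so the details here are illustrative):

```python
def sp_ratio(sample_od, neg_ctrl_od, pos_ctrl_od, cutoff=0.4):
    """Sample-to-positive ratio with the 0.4 positivity cutoff used here.

    Assumes the conventional ELISA formula:
        S/P = (sample OD - negative-control OD)
              / (positive-control OD - negative-control OD)
    """
    sp = (sample_od - neg_ctrl_od) / (pos_ctrl_od - neg_ctrl_od)
    return sp, sp >= cutoff

sp, positive = sp_ratio(sample_od=1.12, neg_ctrl_od=0.08, pos_ctrl_od=2.05)
print(f"S/P = {sp:.2f}, PRRSV antibody positive: {positive}")
```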
Ethical approval. Animal experimental protocol was approved by the Institutional Animal Care and Use Committee with the reference number 202202001. All animal experiments were performed following relevant guidelines and regulations.
Data availability. The data that support the findings of this study are available from the corresponding author upon reasonable request.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. SUPPLEMENTAL FILE 1, PDF file, 0.1 MB.
Behcet’s disease manifesting as esophageal variceal bleeding: A case report
BACKGROUND Behcet's disease (BD) is a chronic disease characterized by oral and vulvar ulcers as well as eye and skin damage and involves multiple systems. It presents as an alternating process of repeated attacks and remissions. Esophageal variceal rupture and bleeding caused by BD have rarely been reported at home or abroad. This paper reports a case of bleeding from esophageal varices caused by BD, aiming to provide an additional dimension for considering the cause of esophageal variceal bleeding in the future. CASE SUMMARY A 38-year-old female patient was admitted to our hospital because of gradually worsening shortness of breath and chest tightness on exertion. Examinations after admission showed that the patient had multiple thromboses. Four days after admission, she suddenly experienced massive hematemesis. Emergency esophagogastroduodenoscopy revealed bleeding from esophageal and gastric varices. The patient had no history of viral hepatitis or alcohol consumption and no history of hereditary or congenital vascular disease. Liver function was essentially normal. On review of the medical history, it was found that the patient had had recurrent oral ulcers since childhood, perineal ulcers during menstruation, an intermittent red nodular rash, and uveitis. The skin needle prick (pathergy) reaction was positive; combined with the evaluations at the referring hospital and our hospital, the main diagnosis was BD. She received methylprednisolone, cyclophosphamide, immunomodulation, acid suppression, gastric protection, anticoagulation, and anti-infection treatments and was discharged from the hospital. During the 1-year follow-up period, the patient did not vomit blood again. CONCLUSION This case highlights bleeding from esophageal varices caused by BD, aiming to provide an additional dimension concerning the cause of esophageal variceal bleeding in the future.
INTRODUCTION
Behcet's disease (BD) is a chronic disease characterized by oral and vulvar ulcers as well as eye and skin damage and involves multiple systems. It presents as an alternating process of repeated attacks and remissions. It is a type of vasculitis and can cause venous and arterial occlusion and aneurysm formation; when combined with gastrointestinal ulcers, bleeding, perforation, and similar lesions, it is classified as the gastrointestinal type, but it is often missed or misdiagnosed because these features lack specificity. Esophageal variceal rupture and hemorrhage caused by BD have rarely been reported at home or abroad. This paper reports a case of bleeding from esophageal varices caused by BD, aiming to provide an additional dimension concerning the cause of esophageal variceal bleeding in the future.
Chief complaints
A 38-year-old female patient presented with a 3-mo history of exertional chest tightness and shortness of breath, which had worsened over the preceding 7 d.
History of present illness
The patient's symptoms began 3 mo previously as chest tightness and shortness of breath on exertion and had been aggravated over the preceding 7 d.
History of past illness
More than 1 year ago, the patient went to another hospital because of a "left neck mass and abdominal discomfort" and underwent related examinations, including esophagogastroduodenoscopy, which revealed "pseudoaneurysm and portal hypertensive gastropathy", but the cause was not identified. A left carotid artery stent was implanted, and aspirin and clopidogrel were taken regularly after the operation. She went to another hospital for "lower limb thrombosis" 7 mo ago and underwent "right inguinal saphenous vein transplantation" (details unknown). Three months ago, because of hemorrhage from large blood vessels in the abdominal cavity, she underwent exploratory laparotomy, and a balloon was inserted (details unknown). She denied smoking, alcohol consumption, hepatitis, and any family history of genetic disease.
Personal and family history
The patient's personal and family histories were unremarkable.
Physical examination
The patient's vital signs were within the normal range; she was conscious and cooperative during the examination, with mild facial edema. Jugular venous distension was present. Breath sounds were clear in both lungs, with no dry or moist rales. The heart rhythm was regular, the cardiac borders were normal, and a murmur was audible at the left sternal border at the level of the third and fourth ribs. An approximately 10 cm surgical scar in the middle of the abdomen and approximately 3 cm surgical scars in both groins were seen. The abdomen was flat, with slight tenderness on palpation, no obvious rebound tenderness, and normal bowel sounds (3 per minute); the liver was not palpable. She had mild edema of both lower limbs.
Laboratory examinations
Blood analysis revealed the following: hemoglobin, 104 g/L; platelets, 57 × 10^9/L; D-dimer, 1.8 μg/mL; platelet antibody test, positive; and C-reactive protein, 92.5 mg/L. Hepatitis B and C virus tests and the antinuclear antibody spectrum showed no abnormalities. The patient's liver function, kidney function, coagulation function, blood sugar, blood lipids, and other tests showed no obvious abnormalities.
Imaging examinations
Electrocardiography revealed sinus tachycardia and incomplete right bundle branch block. Chest computed tomography (CT) revealed a small amount of pneumonia and fibrosis in the lower lobes of the lungs, as well as partial atelectasis of the lower lobes. Resting echocardiography demonstrated moderate to severe tricuspid regurgitation; thickening and contracture of the anterior tricuspid valve leaflet; mild tricuspid stenosis; an enlarged right ventricle and right atrium; a thickened ventricular wall; and abnormal muscle bundles. Color Doppler ultrasound of the bilateral jugular arteries and veins revealed occlusion of the left carotid artery stent, near-mid subtotal occlusion of the left common carotid artery, and thrombosis with partial recanalization of the right internal jugular vein near the heart. Color Doppler ultrasound of the bilateral upper extremity arteries and veins showed thrombosis with partial recanalization of the right subclavian and axillary veins near the heart; the remaining vessels were unremarkable. Arteriovenous color Doppler ultrasound of both lower extremities revealed thrombosis of the left common femoral artery and the bilateral common femoral veins, with thrombosis and partial recanalization of the proximal femoral and proximal deep femoral veins. CT of the whole abdomen revealed splenomegaly and a small amount of fluid in the abdominal cavity; an inferior vena cava filter was present. Abdominal angiography showed embolism of the portal vein, splenic vein, and superior mesenteric vein, formation of collateral circulation, and esophageal and gastric fundus varices. After implantation of the inferior vena cava filter, the lumen was significantly narrowed or occluded. Dual-source CT pulmonary angiography showed no obvious abnormality. Limb angiography demonstrated that the aneurysm at the origin of the left common iliac artery at the lower end of the abdominal aorta was enlarged, the left external iliac artery showed varying degrees of stenosis and dilation, and emboli had formed at the distal end of the left external iliac artery and the origin of the femoral artery. The femoral artery arising from the right external iliac artery and the deep femoral artery were occluded.
Further diagnostic work-up
After 4 d of hospitalization, the patient suddenly vomited approximately 300 mL of blood. She underwent emergency esophagogastroduodenoscopy to stop the bleeding from ruptured esophageal varices. Four severe esophageal varices were seen during the procedure, and a point of rupture was identified (Figure 1). The bleeding stopped after treatment. A large amount of fresh blood and blood clots was seen in the stomach, limiting observation. No varicose veins or bleeding were found in the fundus of the stomach near the cardia (Figure 2). The medical history indicated oral ulcers, a nodular red skin rash, ocular uveitis, and perineal ulcers during menstruation. The skin pathergy reaction was positive, and combined with the examinations at the referring hospital and our hospital, the main diagnosis was BD.
MULTIDISCIPLINARY EXPERT CONSULTATION
Combining the multiple aneurysm-like changes, multiple venous collateral circulation, and cardiac lesions found throughout the body, the imaging physician first considered BD. The nephrologist reviewed the medical history and found that the patient had had oral ulcers, genital ulcers, and eye damage including uveitis, as well as frequent herpes-like eruptions on the lower limbs; combined with a positive pathergy test, BD could be clearly diagnosed. The hematologist agreed with the diagnosis and recommended anticoagulation treatment; the gastroenterologist considered the patient's gastrointestinal bleeding to be secondary and advised active treatment of the primary disease first.
FINAL DIAGNOSIS
The final diagnosis of the presented case was BD.
TREATMENT
After treatment with methylprednisolone, cyclophosphamide, acid suppression, gastric protection, anticoagulation, anti-infection therapy, etc., the patient was discharged from the hospital.
OUTCOME AND FOLLOW-UP
This patient is a young woman with a chronic disease course and multiple thromboses throughout the body. Portal hypertensive gastropathy had been found in the past but was not taken seriously. Hematemesis occurred during hospitalization, and she underwent emergency band ligation hemostasis for bleeding esophageal varices under esophagogastroduodenoscopy. Abdominal CT showed a clear liver contour with no localized nodular hyperplasia; thrombosis of the portal vein, splenic vein, and superior mesenteric vein; formation of collateral circulation; and esophageal and gastric fundus varices. The liver was not perfused uniformly because of the portal vein thrombosis, and there was no obstructive disease of the inferior vena cava (Figure 3). Portal hypertension (PH) was considered to be caused by portal vein thrombosis, which led to rupture and bleeding of esophageal varices. On follow-up of the medical history, there were repeated oral and vulvar ulcers, ocular uveitis, multiple blood clots throughout the body, and a positive pathergy reaction. According to the diagnostic criteria for BD [1], the diagnosis was "BD". The patient was in critical condition, so no liver biopsy was performed. She was treated with methylprednisolone, cyclophosphamide, immunomodulation, acid suppression, gastric protection, anticoagulation, and anti-infection treatment and was discharged from the hospital. A follow-up esophagogastroduodenoscopy was planned one month later, but the patient declined it because of poor cardiac function. After 1 year of follow-up, she had not vomited blood again. Because of the patient's multiple thromboses, general condition, and poor cardiac function, the gastroscopy and abdominal CT results were not reviewed; she visited the doctor many times for edema, abdominal distension, and anorexia. Ultrasound showed that the umbilical vein was reopened, indicating PH and the formation of collateral circulation. The patient is now regularly using infliximab, methylprednisolone, pantoprazole, rivaroxaban, and methoxazole to control the disease.
DISCUSSION
BD is a systemic immune disease characterized by repeated oral and vulvar ulcers as well as eye and skin damage and is a type of vasculitis. In China, patients with this disease are predominantly women between the ages of 16 and 40 [2]. The cause is unknown and may be related to genetic factors and pathogen infection. BD shows strong regional specificity and ethnic differences, with a high incidence in China, Japan, the Middle East, and the Mediterranean region [3], and it is also known as the "silk road disease". It is a rare disease: according to surveys [4], in North American and European countries, 1 out of every 15,000 to 500,000 people suffers from BD. The pathological manifestations are inflammatory cell infiltration around the blood vessels; in severe cases, there is necrosis of the vascular wall. Large, medium, small, and micro vessels (arteries and veins) can be affected, with luminal stenosis and aneurysmal changes. The disease is divided into gastrointestinal, vascular, nerve, and other types according to the damage to internal organs. The vascular type refers to cases with involvement of large and middle arteries and/or veins; the nerve type refers to cases with central or peripheral nerve involvement; and the gastrointestinal type refers to cases with gastrointestinal ulcers, bleeding, perforation, etc. In China, women account for the majority of cases, but uveitis and visceral involvement are 3 to 4 times more prevalent in male patients than in female patients. BD with cardiac involvement is relatively rare, with a reported incidence of 0.5%-8.1% [5], mainly affecting the valves, myocardium, conduction system, coronary arteries, and pericardium, as well as causing mural thrombus, aneurysms, and so on.
The gastrointestinal type presents episodically in many patients. In order of frequency, abdominal pain is the most common symptom, typically in the right lower abdomen and accompanied by local tenderness and rebound pain, followed by nausea, vomiting, abdominal distension, anorexia, diarrhea, and dysphagia. The basic pathology in the digestive tract is multiple ulcers, which can be seen in any part from the esophagus to the descending colon, with an incidence as high as 50%. Severe cases involve complications such as ulcer bleeding, intestinal paralysis, intestinal perforation, peritonitis, fistula formation, esophageal stenosis, and even death. Ulcers caused by intestinal BD can appear throughout the digestive tract, and the typical site of occurrence is the ileocecal region [6,7]. Endoscopically, the ulcers are scattered, present as multiple or single lesions, are mostly lateral, and are often located on the side opposite the mesentery. Ulcer margins are relatively regular, diffuse invasion is rare, and the morphology can be roughly divided into three types: volcanic, map-like, and aphthous [8]. However, owing to various factors, such as the stage of the lesion, the site and depth of the biopsy, the specimen preparation process, and others, many lesions do not show typical pathological manifestations but rather nonspecific chronic inflammatory changes, making them difficult to distinguish from inflammatory bowel diseases (Crohn's disease and ulcerative colitis) and intestinal tuberculosis. Regarding endoscopic manifestations, although the diseases have their own characteristics, such as longitudinal ulcers and fissure-like, cobblestone-like changes in Crohn's disease, BD is often confined to single or multiple ulcers in the ileocecal region, and the ulcers can be crater-like, map-like, or aphthous [9].
Studies have found [10] that the vascular type involving large blood vessels mainly includes venous obstruction, arterial obstruction, and aneurysm formation, with an involvement rate of 5.6% to 6.3%. In China, men with BD are more likely to develop macrovascular disease. Chinese case data suggest that approximately 12.8% of BD cases have large-vessel disease, with a male-to-female ratio of 3.86:1 and an average age of onset of 29.5 ± 11.3 years; 70.6% of cases show venous involvement. In venous disease, deep vein thrombosis of the lower limbs is the most common feature, followed by thrombosis of the inferior vena cava, superior vena cava, and cerebral venous sinus [11]. Vascular lesions mostly occur within 1 year of the initial symptoms [12]. Studies have shown that damage to endothelial function may be the main cause of thrombosis in patients with BD. Endothelial damage leads to decreased vascular smoothness and blood flow stasis, and the production of anti-endothelial cell antibodies further aggravates vascular damage. At the same time, dysfunction of vascular endothelial cells increases the secretion of various tissue factors and adhesion molecules and induces platelet activation and aggregation, which is also a factor in thrombosis in patients with BD [13]. This patient was a woman admitted to the hospital because of chest tightness and shortness of breath; symptomatic correction of heart failure was not effective. Imaging data showed multiple thromboses throughout the body, and the medical history showed repeated oral and genital ulcers since childhood. The pathergy test was positive. According to the ICBD criteria [4], this case involved oral and genital lesions and a positive pathergy test, with a score greater than 3, so the clear diagnosis was BD. Sudden hematemesis occurred during hospitalization, and emergency gastroscopy showed rupture of esophageal varices, while abdominal CT showed PH. The patient had no evidence of hepatitis or cirrhosis, no drinking habit, and no congenital vascular malformation, and external compression as a cause of PH was excluded. The PH was clearly caused by portal vein thrombosis, which in turn was caused by BD. Tavakkoli et al [14] once reported a case of "downhill" esophageal variceal bleeding caused by Behcet's superior vena cava syndrome.
PH is a clinical syndrome comprising the manifestations of disturbed portal venous blood circulation arising from various causes. Any condition that obstructs portal venous blood flow and/or increases portal blood flow can lead to PH. Liver cirrhosis is the main cause, followed by schistosomiasis (in developing countries), portal and splenic vein thrombosis, and Budd-Chiari syndrome and, less commonly, pre-sinusoidal or post-sinusoidal obstructive diseases. Studies have found that the factors underlying venous thrombosis include blood hypercoagulability, hemodynamic changes, and vascular endothelial injury; depending on the cause, these factors can occur separately or simultaneously [15]. Studies have shown that portal vein thrombosis arises from a variety of causes and risk factors, of which cirrhosis is the most common [16]. Other causes include the following: (1) Portal vein thrombosis after splenectomy or splenic embolization. Surgical splenectomy and splenic artery embolization are commonly used to improve hypersplenism, but splenectomy may lead to severe portal vein thrombosis [17]; the diameter of the splenic vein can be an important predictor of postoperative PVT formation [18,19]. (2) Portal vein thrombosis related to non-chronic liver disease and non-malignant tumors; non-neoplastic, non-chronic liver disease causing portal vein thrombosis is mostly related to systemic procoagulant factors and local risk factors [15]. This patient's PH was caused by portal vein thrombosis, which in turn was caused by BD. Liver function was essentially normal, and the thrombosis was related neither to splenectomy or splenic embolization nor to tumor-related portal vein thrombosis. Vasculitis caused portal vein thrombosis, which caused PH and further led to bleeding from esophageal varices. BD causing esophageal variceal rupture and bleeding is rarely reported. Analysis of the diagnosis and treatment process of this patient provides one more possibility for the etiology of PH.
CONCLUSION
BD can involve multiple blood vessels throughout the body and may manifest only as oral or vulvar ulcers in the early stage; thus, it is easily overlooked, and the optimal time for diagnosis and treatment can be missed. Bleeding due to esophageal variceal rupture caused by BD is rare and easy to overlook and misdiagnose in clinical work. Therefore, for unexplained portal hypertensive gastropathy and esophageal varices with aneurysms or multiple vascular lesions, we should be vigilant in determining whether BD is present. Once the diagnosis is made, immunotherapy is the main focus of treatment.
Necessary and Sufficient Conditions of Oscillation in First Order Neutral Delay Differential Equations
In this paper, we consider a class of neutral DDEs

[x(t) − px(t − τ)]′ + qx(t − σ) = 0, t ⩾ t₀, (1)

where t₀ is a positive number and p, q, τ, and σ are positive constants. Generally, a solution of (1) is called oscillatory if it is neither eventually positive nor eventually negative. Otherwise, it is nonoscillatory. It can be seen in the literature that the oscillation theory regarding solutions of (1) has been extensively developed in recent years.
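Assuming the standard form of (1) reconstructed above, substituting x(t) = e^(λt) yields the characteristic equation used throughout; the short computation is reproduced here for reference:

```latex
\[
\frac{d}{dt}\left[e^{\lambda t} - p\,e^{\lambda(t-\tau)}\right] + q\,e^{\lambda(t-\sigma)}
 = e^{\lambda t}\left[\lambda\left(1 - p e^{-\lambda\tau}\right) + q e^{-\lambda\sigma}\right] = 0,
\]
so the characteristic equation of (1) is
\[
\lambda\left(1 - p e^{-\lambda\tau}\right) + q e^{-\lambda\sigma} = 0 .
\]
```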
In [18], Zhang came to the following conclusion.
This result in Theorem I improves the corresponding result in [19].Afterward, many authors have been devoted to studying this problem and have obtained many better results.For details, Gopalsamy and Zhang [20] obtained the improved result shown in Theorem II.
Further, Zhou and Yu [21] proved the following theorem.
Continuing to improve the research work, Xiao and Li [22] obtained the following.
Finally, Lin [23] obtained the result shown in Theorem V.
Theorem V. Assume that p ∈ (0, 1) and qσe > 1 − pe^(qτ/(1−p−qσ)); then all solutions of (1) are oscillatory. However, all the conclusions mentioned above are limited to sufficient conditions in the case 0 < p < 1. The aim of this paper is to establish systematically the necessary and sufficient conditions of oscillation for all solutions of (1) for the cases 0 < p < 1 and p > 1.
Main Results
It is well known [24] that all solutions of (1) are oscillatory if and only if the characteristic equation of (1) has no real roots.
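This criterion is easy to check numerically for concrete parameter values. Below is a minimal sketch (not from the paper) that scans the characteristic function F(λ) = λ(1 − pe^(−λτ)) + qe^(−λσ) of (1) for sign changes on a grid; the grid bounds, resolution, and function names are arbitrary illustrative choices, and the form of F assumes the reconstruction of (1) above.

```python
import math

def char_fn(lam, p, q, tau, sigma):
    """Characteristic function of [x(t) - p x(t - tau)]' + q x(t - sigma) = 0,
    obtained by substituting x(t) = exp(lam * t)."""
    return lam * (1.0 - p * math.exp(-lam * tau)) + q * math.exp(-lam * sigma)

def has_real_root(p, q, tau, sigma, lo=-50.0, hi=50.0, n=200000):
    """Crude sign-change scan; no sign change on the grid suggests
    no real root, i.e., all solutions of (1) oscillate."""
    step = (hi - lo) / n
    prev = char_fn(lo, p, q, tau, sigma)
    for i in range(1, n + 1):
        cur = char_fn(lo + i * step, p, q, tau, sigma)
        if prev == 0.0 or prev * cur < 0.0:
            return True
        prev = cur
    return False

# Example with p = 0.5, tau = sigma = 1: for q = 0.5 we have
# q*sigma*e ~ 1.36 >= 1 - p = 0.5, so no real root is expected
# (oscillation); a tiny q admits a real negative root.
print(has_real_root(0.5, 0.5, 1.0, 1.0))   # expect False -> oscillatory
print(has_real_root(0.5, 0.01, 1.0, 1.0))  # expect True  -> nonoscillatory
```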
Theorem 1. Assume that p ∈ (0, 1) and let f(μ) = (q/μ)e^(μσ) + pe^(μτ) − 1 for μ > 0. Then all solutions of (1) are oscillatory if and only if f(θ) > 0, where θ is the unique zero of f′(μ) in (0, 1/σ).
In addition, f′(1/σ) = pτe^(τ/σ) > 0, while f′(μ) → −∞ as μ → 0⁺. Thus, we get that the function f′(μ) has a unique zero in (0, 1/σ).
From Theorem 1, we obtain immediately the following.
From Theorem 1, all solutions of (1) are oscillatory if and only if one of (H1) or (H2) holds.
Theorem 4. Assume that p ∈ (0, 1); then all solutions of (1) are oscillatory if one of the following conditions holds: (i) q/θ + qσ ⩾ 1 − p; (ii) qσe ⩾ 1 − pe^(qτ/(1−p−qσ)), where θ is the unique zero of f′(μ) in (0, 1/σ).
Proof. If q/θ + qσ ⩾ 1 − p, we have that f(θ) = (q/θ)e^(θσ) + pe^(θτ) − 1 > q/θ + qσ + p − 1 ⩾ 0, since e^(θσ) > 1 + θσ and e^(θτ) > 1. From the proof of Theorem 1, all solutions of (1) are oscillatory.
If qσe ⩾ 1 − pe^(qτ/(1−p−qσ)), we suppose furthermore that q/θ + qσ < 1 − p (otherwise, all solutions of (1) are oscillatory by the above conclusion); that is, θ > q/(1 − p − qσ). Since qσe is the minimum value of the function (q/μ)e^(μσ), attained at μ = 1/σ, we have that (q/θ)e^(θσ) ⩾ qσe ⩾ 1 − pe^(qτ/(1−p−qσ)) > 1 − pe^(θτ), so f(θ) > 0, and the result follows. So far, for p ∈ (0, 1) we have discussed the necessary and sufficient conditions of oscillation for all solutions of (1). Our results refine the results in [23] (see Theorem 4). Next, we will discuss the oscillatory behavior of solutions of (1) in the case p > 1.
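For completeness, the minimization behind this step can be verified directly:

```latex
\[
g(\mu) = \frac{q}{\mu}\, e^{\mu\sigma}, \qquad
g'(\mu) = \frac{q\, e^{\mu\sigma}}{\mu^{2}}\,(\mu\sigma - 1),
\]
so $g'(\mu) < 0$ on $(0, 1/\sigma)$ and $g'(\mu) > 0$ on $(1/\sigma, \infty)$;
hence $g$ attains its minimum at $\mu = 1/\sigma$, with
\[
\min_{\mu > 0} \frac{q}{\mu}\, e^{\mu\sigma}
  = q\sigma\, e^{(1/\sigma)\sigma} = q\sigma e .
\]
```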
Lemma 5. Let p > 1; then all solutions of (1) are oscillatory if and only if equation (3) has no real roots in (−(ln p)/τ, 0).
Proof. It is similar to the proof of Theorem 1; f(θ) is the maximum value of f(μ) for μ ∈ (−∞, 0). This and Lemma 5 imply the result.
Theorem 7. Assume that p > 1 and σ < τ; then all solutions of (1) are oscillatory if and only if f(θ) < 0, where θ is the unique zero of (3) in (−∞, 0).
From Theorem 9, we obtain the following corollary immediately.
so that all solutions of (28) are oscillatory by Theorem 9.
Consistent Relationship between Field-Measured Stomatal Conductance and Theoretical Maximum Stomatal Conductance in C3 Woody Angiosperms in Four Major Biomes
Premise of research. Understanding the relationship between field-measured operating stomatal conductance (gop) and theoretical maximum stomatal conductance (gmax), calculated from stomatal density and geometry, provides an important framework that can be used to infer leaf-level gas exchange of historical, herbarium, and fossil plants. To date, however, investigation of the nature of the relationship between gop and theoretical gmax remains limited to a small number of experiments on relatively few taxa and is virtually undefined for plants in natural ecosystems. Methodology. We used the gop measurements of 74 species and 35 families across four biomes from a published contemporary data set of field-measured leaf-level stomatal conductance in woody angiosperms and calculated the theoretical gmax from the same leaves to investigate the relationship between gop and gmax across multiple species and biomes and determine whether such relationships are widely conserved. Pivotal results. We observed significant relationships between gop and gmax, with consistency in the gop∶gmax ratio across biomes, growth habits (shrubs and trees), and habitats (open canopy and understory subcanopy). An overall mean gop ∶ gmax ratio of 0.26 ± 0.11 (mean ± SD) was observed. The consistently observed gop ∶ gmax ratio in this study strongly agrees with previous hypotheses that an ideal gop ∶ gmax ratio exists. Conclusions. These results build substantially on previous studies by presenting a new reference for a consistent gop ∶ gmax ratio across many levels and offer great potential to enhance paleoclimate proxies and vegetation-climate models alike.
Introduction
Stomatal conductance is the exchange of carbon dioxide for photosynthesis and water vapor via transpiration through microscopic pores called stomata on the aerial parts of plants, principally the leaf surface. Diffusion of water vapor through stomata is 1.6 times that of carbon dioxide; therefore, transpirational water loss from the leaf is a costly but unavoidable trade-off between plants' photosynthetic gain and productivity (Farquhar and Sharkey 1982) and their instantaneous water use efficiency (ratio of rate of transpiration to CO2 uptake; Katul et al. 2009; Manzoni et al. 2011; Buckley and Schymanski 2014; Franks et al. 2015).
Stomata are highly sensitive to fluctuating environmental conditions such as light, temperature, and CO2. The stomatal pore is surrounded by two guard cells that are highly sensitive to environmental signals such as changes in light intensity, temperature and humidity, soil moisture and nutrient status, and internal guard cell and mesophyll signals. Resulting changes in turgor pressure in the guard cells adjust the stomatal opening to regulate gaseous exchange, maximize CO2 uptake, and minimize water loss (Farquhar and Sharkey 1982; Schulze et al. 1994; Hutjes et al. 1998; Hetherington and Woodward 2003; Mott 2009; Franks et al. 2013; Lawson and Blatt 2014; McAusland et al. 2016). Through their short-term critical opening-closing response to rapid environmental change, as well as the longer-term developmental downregulation in response to rising atmospheric CO2, stomata have the potential to greatly influence ecosystem function and the global carbon and hydrologic cycles. Therefore, they play a pivotal role in Earth system and plant-climate feedbacks (Hetherington and Woodward 2003; Gedney et al. 2006; Betts et al. 2007; Berry et al. 2010; Keenan et al. 2014; Schlesinger and Jasechko 2014; Lin et al. 2015; Ukkola et al. 2015; Engineer et al. 2016; Li et al. 2016) and are critical in determining vegetation response to environmental change (Leakey et al. 2009; Medlyn et al. 2011).
Stomatal conductance, referred to here as "operational stomatal conductance" (g op; McElwain et al. 2016b), is a function of the stomatal density (D) and the depth and degree of openness of the stomatal pore (pa max) in response to internal and environmental signals (Berry et al. 2010; Drake et al. 2013). The theoretical maximum stomatal conductance (g max) is calculated from measurements of the stomatal density and geometry according to a diffusion equation (eq. [1] in "Material and Methods"; Parlange and Waggoner 1970). These same stomatal traits ultimately determine g op, yet the nature of the relationship between g op and g max remains largely unquantified beyond a small number of growth chamber and greenhouse studies (McElwain et al. 2016b).
It has been observed that measured g op in field conditions rarely achieves the maximum theoretical g max limits, as defined by leaf anatomical traits (Körner 1995; Lawson and Morison 2004). Furthermore, because it is a purely theoretical measurement, theoretical g max is usually greater than the observed g op by a large degree (Sack and Buckley 2016). This disparity has propelled many areas of botanical research into establishing its basis (McElwain et al. 2016b). For example, studies have explored the extreme variability in stomatal distribution across a leaf surface (Casson and Gray 2008) and how stomatal development and, therefore, stomatal density are heavily influenced by environmental conditions, particularly light (Lake et al. 2001; Lomax et al. 2009) and CO2 (Woodward 1987; McElwain and Chaloner 1995; Woodward and Kelly 1995; Wagner et al. 1996). Alternatively, the mismatch between g op and g max might be due to the short-term behavioral responses of stomata that minimize transpiration and increase water use efficiency by rapidly reducing aperture, particularly when evaporative demands are high during drought conditions (Buckley 2005; Katul et al. 2012; Kollist et al. 2014). In addition, over the longer term, the g max of a leaf can be altered via changes in stomatal size and density in response to protracted drought (Franks et al. 2015) and/or rising atmospheric carbon dioxide concentrations (Woodward 1987; Ainsworth and Rogers 2007; Lammertsma et al. 2011; Gray et al. 2016). This, in turn, imposes constraints on g op (McElwain et al. 2016b). The sheer diversity of species-specific g max and g op responses to abiotic factors and their relationship to one another also prompts us to ask whether a consistent relationship between g max and g op exists. A coordinated trade-off between physiological (g op) and anatomical (g max) control of stomatal conductance has been suggested (Haworth et al. 2013), implying that, if there is coordination, defining a relationship between theoretical g max and physiological g op should be possible. Studies have observed that measured g op is between 20% and 25% of theoretical g max, or in other words, the g op : g max ratio is between 0.2 and 0.25 (Franks et al. 2014; McElwain et al. 2016b). It has been speculated that this is an ideal level of g op, at which stomata are enabled to respond rapidly to environmental flux by opening or closing as conditions dictate (Franks et al. 2012).
Over the past 10 years, experiments to determine a reliable relationship between g op and g max, or the g op : g max ratio, have yielded broadly consistent results (Franks et al. 2014; McElwain et al. 2016b); however, these studies have been taxonomically limited and rarely included both measured g op and calculated theoretical g max parameters from the same leaves. The aim of this study was to advance our current understanding of the nature of the relationship between g op and g max across multiple species and biomes to determine whether such relationships are widely conserved. More simply put, we asked whether theoretical g max, which is calculated from stomatal anatomy according to the diffusion equation (eq. [1]; Parlange and Waggoner 1970), is a good predictor of g op measured in the field. We explored the relationship between g op and g max by measuring g op in a wide range of woody angiosperm species in natural ecosystems and then calculating g max from the same leaves on which the g op measurements were taken to establish the nature of the relationship at biological and ecological levels. Therefore, we tested the relationship across many species, plant growth habits (trees and shrubs), habitats (open canopy and understory subcanopy), and biomes (boreal forest, temperate rain forest, tropical rain forest, and tropical seasonal [moist] forest). If we can establish consistency in the nature of the relationship between g op and g max, this would be valuable for historical herbarium studies and deep-time fossil studies because it would allow estimation of physiological stomatal conductance from observations of anatomical stomatal traits. It would also have an important application for climate and Earth system models, in which g op could be estimated from stomatal traits, in turn opening up the possibility of studying vegetation feedbacks on the hydrologic cycle.
Biome and Species Selection
For this study, we used a published field data set of stomatal conductance measurements of C3 woody angiosperm species from seven biomes called STraits (Murray et al. 2019). We chose the following four out of the seven biomes included in the STraits data set for our current study on the basis that they spanned wide geographic, climatic, and species ranges and are the least well represented in the literature: boreal forest, temperate rain forest, tropical seasonal (moist) forest, and tropical rain forest. We selected 74 species from the total 136 species included in the STraits data set across these biomes (Murray et al. 2019), excluding the Chloranthales, gymnosperms, monocots, and ferns (APG et al. 2016). One species, Sambucus racemosa, occurred in both the boreal forest and the temperate rain forest and was therefore counted as two separate species occurrences, resulting in 75 separate species analyzed (table 1).
Leaf-Level Operational Stomatal Conductance Data
The term "operational stomatal conductance" (g op ) used here refers to stomatal conductance as it performs under natural field conditions, following the definition of McElwain et al. (2016b). The g op data used in this study are taken from the published STraits data set of Murray et al. (2019). In Murray et al. (2019), stomatal conductance measurements were obtained by the author using an SC-1 steady-state leaf porometer (Decagon Devices, Pullman, WA) over the course of three summer growing seasons between 2013 and 2015, when atmospheric CO 2 concentrations ranged from 396.5 to 400.8 ppm. Measurements were made on the abaxial surface of sun leaves located at the canopy edge or, in the case of naturally occurring understory shrub species, on the abaxial surface of leaves exposed to sun flecks. Mean species g op was calculated from an average total of 12 g op measurements per species (i.e., a single g op measurement taken from one leaf of each of three individuals on three or four consecutive days). This yielded a total 854 measurements on 243 individual leaves (table 1). Measurements were taken between 0830 and 1400 hours at each site under ambient environmental conditions to capture natural day-to-day variability in photosynthetically active radiation (PAR), temperature, and vapor pressure deficit (VPD), a modification of the variance protocol described in McElwain et al. (2016b). Detailed methods are available in Murray et al. (2019).
All mean conductance values reported in Murray et al. (2019) were subsequently corrected using a relationship established between stomatal conductance measurements taken by porometry and measurements on the same individuals taken by infrared gas analysis (IRGA).
Measurement of Morphological Traits and Calculation of Theoretical g max
The same 243 leaves on which g op was measured were used for measurement of stomatal morphology (density and size) and for calculation of theoretical g max. A leaf section of 1-cm² area was cut from approximately the same location on the leaf where g op measurements were made, yielding a total of 243 leaf sections. These were fixed abaxial side up on glass slides without mounting medium and gently secured with a cover slip and tape. Six photomicrographs per leaf section were captured using a Leica DFC300 FX digital color camera mounted on a Leica DM2500 microscope with a ×20 objective lens (×200 magnification; Leica Microsystems, Wetzlar, Germany). Visualization of the stomatal anatomy of most species was achieved via autofluorescence of stomatal complexes under epifluorescence using a range of excitation fluorescence filters (green: 500-570 nm; yellow and orange: 570-610 nm). In the very few instances in which epifluorescence did not yield clear images, leaf epidermal impressions were made by applying clear nail varnish to the abaxial leaf surface of each leaf, approximately where g op measurements were taken. The resulting epidermal impression was then peeled off the leaf using clear Sellotape, transferred directly to microscope slides, and photomicrographed under transmitted light. Leaves on which stomata were obscured by dense trichomes, thick cuticle wax, and/or papillae that could not easily be removed and leaves with stomata not clearly visible under microscopy were not included in the study. Micrographs were generated using Auto-Montage Pro Syncroscopy software (Synoptics, Frederick, MD). A 0.09-mm² grid and scale bar were superimposed on each micrograph using AcQuis (ver. 4.0.1.10, Syncroscopy, Cambridge, UK). Stomatal density was estimated using the Cell Counter in ImageJ version 1.49 software (http://imagej.nih.gov/ij) following Poole and Kürschner (1999). Stomatal dimensions (pore length and guard cell width, in μm) were measured on 10 open stomata randomly selected from the six photomicrographs of each species using ImageJ and converted to meters for g max calculation. Calculations of theoretical g max were then made using the following equation (Parlange and Waggoner 1970):

g max = (d w / v) · D · pa max / [pd + (π/2) · √(pa max / π)], (1)

where d w, the diffusivity of water vapor at 25°C (0.0000249 m² s⁻¹), and v, the molar volume of air (0.0224 m³ mol⁻¹), are constants; D is stomatal density (m⁻²); pa max is the maximum stomatal pore area (m²), calculated as an ellipse (Lawson et al. 1998) using stomatal pore length l (m) as the long axis and l/2 as the short axis; and pd is stomatal pore depth (m), assumed to be equivalent to the width of one fully turgid guard cell (Franks and Beerling 2009b). Because the dried leaves for this study were not rehydrated by any means, it is possible that leaf area was reduced in some species because of shrinkage caused by the drying process (Blonder et al. 2012). The degree of leaf shrinkage varies with plant functional type (PFT; Blonder et al. 2012). We tested for shrinkage in the two PFTs in this study (woody angiosperm evergreen and deciduous) by applying the corrective mean shrinkage suggested by Blonder et al. (2012) for these PFTs (15% for evergreen leaves and 27% for deciduous leaves) to the individual leaf stomatal morphological (guard cell width and pore length) and density measurements. We then calculated the new g max (table S1; tables S1-S8 are available online).
It is worth noting that the mean area shrinkage for evergreen types is also the reported mean for all woody species (15%; Blonder et al. 2012). A Kruskal-Wallis test for equal medians determined no significant difference between the g max used in this study and the g max calculated from the applied shrinkage factors (table S1). Therefore, all analysis was carried out using g max calculated from the original uncorrected stomatal morphological and density data.
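As a worked illustration of eq. (1), the sketch below computes theoretical g max from stomatal density and pore geometry using the constants given above; the species values in the example are made up for illustration, and the end-corrected denominator follows the standard Franks and Beerling (2009) form assumed in the reconstruction of eq. (1).

```python
import math

D_W = 2.49e-5   # diffusivity of water vapor at 25 C (m^2 s^-1)
V   = 0.0224    # molar volume of air (m^3 mol^-1)

def g_max(density_mm2, pore_length_um, guard_cell_width_um):
    """Theoretical maximum stomatal conductance (mol m^-2 s^-1).

    density_mm2:         stomatal density (stomata per mm^2)
    pore_length_um:      stomatal pore length l (um)
    guard_cell_width_um: width of one turgid guard cell = pore depth (um)
    """
    D = density_mm2 * 1e6            # convert mm^-2 -> m^-2
    l = pore_length_um * 1e-6        # um -> m
    pd = guard_cell_width_um * 1e-6  # um -> m
    # Maximum pore area as an ellipse with long axis l and short axis l/2
    # (semi-axes l/2 and l/4).
    pa_max = math.pi * (l / 2) * (l / 4)
    # Diffusion equation with the standard end correction in the denominator.
    return (D_W / V) * D * pa_max / (pd + (math.pi / 2) * math.sqrt(pa_max / math.pi))

# Illustrative values only (not from the paper's data set):
print(round(g_max(density_mm2=300, pore_length_um=12, guard_cell_width_um=6), 3))
```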
Statistical Analysis
All statistical analysis was carried out using IRGA-corrected species mean g op (as outlined above). Each species mean g op value in a given biome was weighted against the total number of individual g op measurements for that biome according to the following:

weighted g op = (n species g op / n biome g op) · species g op,

where n species g op is the total number of individual g op measurements per species, n biome g op is the total number of species g op measurements for a given biome, and species g op is the mean g op for a given species. The g op : g max ratios were thus calculated from the weighted mean g op and mean g max values (weighted g op / g max). Normality tests (Shapiro-Wilk W-test and Anderson-Darling A-test) and post hoc tests (Levene's test for homogeneity of variance from means, Tukey's honest significant difference test for normal data, and the Kruskal-Wallis test for equal medians for nonnormal data) were carried out as necessary on all data and data groups. Reduced major axis (RMA) regressions were performed to investigate the relationship between g op and g max and to determine r² and statistical significance (P < 0.05). Boxplots were generated to determine data distribution and differences between groups. All statistical analyses were performed using Past version 3.14 (http://folk.uio.no/ohammer/past/). Figures were generated using R statistical package version 3 (R Core Team 2015).
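A reduced major axis fit is simple to reproduce: the slope is the ratio of the standard deviations of y and x, signed by their correlation, with the intercept forcing the line through the means. A minimal sketch follows (the g op and g max values are illustrative, not the study's data; requires Python 3.10+ for statistics.correlation):

```python
import statistics as st

def rma_fit(x, y):
    """Reduced major axis regression: slope = sign(r) * sd(y)/sd(x),
    intercept chosen so the line passes through (mean x, mean y)."""
    r = st.correlation(x, y)
    slope = (1 if r >= 0 else -1) * st.stdev(y) / st.stdev(x)
    intercept = st.mean(y) - slope * st.mean(x)
    return slope, intercept, r * r

# Illustrative g_max (x) and g_op (y) values in mmol m^-2 s^-1:
gmax = [450, 600, 800, 1000, 1250, 1500]
gop  = [110, 170, 200, 270, 310, 400]
slope, intercept, r2 = rma_fit(gmax, gop)
print(f"g_op = {slope:.2f} * g_max + {intercept:.1f}  (r^2 = {r2:.2f})")
```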
Results
The g op : g max Ratio across Biomes

Overall, across 74 species and four biomes, the g op : g max ratio was 0.26 (see table 2 for a comparison of recent investigations into the g op : g max ratio). The tropical seasonal (moist) forest displayed the smallest mean g op : g max ratio (0.23), while the highest g op : g max ratio was found in the tropical rain forest (0.31; table 1). High variability in species-level g op : g max ratio was observed between species within and across all biomes, from a minimum of 0.08 in Neea buxifolia from the tropical seasonal (moist) forest to a maximum of 0.6 in Sambucus racemosa from the boreal forest (table 1). There was no significant difference in median biome g op : g max ratios (χ² = 4.976, P = 0.17), with mean and median values among biomes in close agreement (fig. 1A; tables 3, S2).
The g op : g max Ratio in Habitat Groups
Species data were categorized according to two habitat groups: open canopy and understory subcanopy. Overall, the biome-wide mean g op : g max ratio in both the open-canopy (n = 26) and the understory-subcanopy (n = 49) habitats was the same, with a calculated ratio of 0.28 (P = 0.319; fig. 1; tables 3, S3).
In the open-canopy habitat, there was no significant difference in overall mean g op : g max ratio between biomes (F = 0.157, P = 0.924; fig. 1; table S4). In the understory-subcanopy habitat, there was a significant difference in mean g op : g max ratio between the tropical rain forest and both the temperate rain forest and the tropical seasonal (moist) forest biomes (P = 0.005 and P = 0.026, respectively; Tukey's honest significant difference test), with the tropical rain forest displaying the highest mean g op : g max ratio in both habitats across all biomes at 0.32 (fig. 1; table S5).
In the boreal forest, tropical seasonal (moist) forest, and tropical rain forest, there was no significant difference between the mean g op : g max ratio of the open-canopy habitat and that of the understory-subcanopy habitat (P > 0.05; fig. 2). Only the temperate rain forest displayed a significant difference between habitat groups (F = 6.692, P = 0.02).
The g op : g max Ratio in Growth Habit Groups

Species data were also categorized according to plant growth habit (tree and shrub) within each biome. The overall mean g op : g max ratio was 0.25 for shrubs (n = 34) and 0.27 for trees (n = 41; table S6). Overall, there was no significant difference in the g op : g max ratio between shrub and tree growth habits (χ² = 0.509, P = 0.476; table S6).
No significant difference was observed in either the mean shrub g op : g max ratio or the mean tree g op : g max ratio between biomes (ANOVA P = 0.2789 and Kruskal-Wallis χ² = 3.768, P = 0.288, respectively; fig. 1C; tables 3, S7). Within biomes, there was no statistically significant difference between mean/median shrub and tree g op : g max ratios (P > 0.05).
Relationship between g op and g max
Linear regressions were performed using RMA to account for errors in both x and y variables. Across the total of 75 C3 woody angiosperm species and four biomes, the best-fit linear relationship between g op and g max was g op = 0.26 · g max − 5.56 (r² = 0.304, P < 0.001; table 3; fig. 1). Within each of the four study biomes, there was a significant positive relationship between g op and g max, with no significant difference between slopes (χ² = 5.375, P = 0.146; table 3; fig. 2).
Stomatal Traits
Stomatal density. There was wide species variation in the range of estimated D across all four biomes, from a minimum average D of ~65 mm⁻² in the boreal forest (S. racemosa) to a maximum average of 928 mm⁻² in the tropical seasonal forest (Eugenia axillaris; table 1). There was no statistically significant difference in mean D between boreal forest and temperate rain forest species (P = 0.172). There was also no significant difference in D between the tropical rain forest and the tropical seasonal (moist) forest (P = 0.72). A significant difference was observed between the boreal forest and both the tropical rain forest (P = 0.0002) and the tropical seasonal (moist) forest (P = 0.0004) and, likewise, between the temperate rain forest and both the tropical rain forest and the tropical seasonal (moist) forest (P = 0.001 and P = 0.002, respectively; table 1).
Stomatal pore area. Overall, stomatal pore length ranged from a mean minimum of 2.9 μm (E. axillaris) in the tropical seasonal (moist) forest to a mean maximum of 18.1 μm (Populus balsamifera) in the boreal forest. Stomatal pore length differed significantly between all biomes except between the tropical rain forest and the tropical seasonal (moist) forest, which shared the same mean and median stomatal pore length values (P = 0.6796). Calculated mean maximum stomatal pore area (pa max) values reflected mean stomatal pore length values and ranged from a mean minimum pa max of 3.3 μm² (E. axillaris) in the tropical seasonal (moist) forest to a mean maximum of ~129 μm² (P. balsamifera) in the boreal forest. There was a significant difference in pa max between most biomes (P = 3.25 × 10⁻⁸), except between the tropical rain forest and the tropical seasonal (moist) forest, where there was no significant difference (P = 0.51).

Fig. 1 Boxplots showing the ratio of operational stomatal conductance to theoretical maximum stomatal conductance (g op : g max) for biomes (A), habitats (B), and plant growth habits (C). Boxes represent the interquartile range (IQR), horizontal lines within the boxes represent medians, red circles represent means, whiskers extend to 1.5 times the IQR, and black circles are outliers. In B, letters above boxplots indicate pairwise comparisons for the understory-subcanopy habitat across biomes (Tukey's honest significant difference test), and letters below boxplots indicate significant differences between the two habitats for the temperate rain forest. All other comparisons show no significant difference across or within biomes.
Relationship between Anatomical Measurements and Calculated g max

A strong, significant relationship between g max and D was found among tropical rain forest taxa (g max = 1.2185 · D + 121.91; r² = 0.684, P < 0.0001), and a moderately strong and significant relationship between g max and D was found in the temperate rain forest (g max = 3.7912 · D − 122.01; r² = 0.518, P = 0.001; fig. 3). No significant relationship between g max and D was observed in either the boreal forest (g max = 3.526 · D + 176.87; r² = 0.009, P = 0.77) or the tropical seasonal (moist) forest (g max = 0.817 · D + 226.13; r² = 0.15, P = 0.085; fig. 3). Overall, when all taxa from all biomes were lumped together, no relationship was evident. There was no difference in slopes between the boreal forest and the temperate rain forest or between the tropical rain forest and the tropical seasonal (moist) forest (P = 0.84 and P = 0.113, respectively; fig. 3).
There was a moderately strong but significant relationship between g max and pa max in the boreal forest (g max = 7.439 · pa max + 46.884; r² = 0.452, P = 0.012); however, no relationship between g max and pa max was established in the other biomes: temperate rain forest (r² = 0.073, P = 0.262), tropical rain forest (r² = 0.022, P = 0.508), and tropical seasonal (moist) forest (r² = 0.0192, P = 0.55; fig. 3). There was no difference in slopes between the boreal forest and the temperate rain forest or between the tropical rain forest and the tropical seasonal (moist) forest (P = 0.56 and P = 0.99, respectively; fig. 3).
Relationship of g max to Environmental Data
Correlation regressions between all species' g max, g op, and g op : g max ratios and the environmental variables temperature, PAR, and VPD showed no significant relationships (fig. S1, available online).
Discussion
g op : g max Ratios and Relationships

We find a consistent relationship between theoretical g max calculated from stomatal anatomy and field-measured g op, with an overall mean g op : g max ratio of 0.26. At the biome level, woody angiosperm species in the field tend to operate at between 23% and 31% of their calculated g max, which is in good agreement with previous, but less taxonomically extensive (~15 species), studies in a mix of glasshouse, chamber, and field experiments (Franks et al. 2014; McElwain et al. 2016b; see table 3 for the most recent studies). This is significant, considering the diversity of species and climates/environments covered in this study and between all studies to date. It confirms the existence of an apparent ideal g op : g max ratio, as was suggested in previous studies (Franks et al. 2014; McElwain et al. 2016b). The wide-ranging interspecific variation in g op : g max ratios we observed (between 0.08 and 0.57) is also consistent with reported maximum g op : g max ratios of between 0.15 and 0.98 across species using a variance protocol (McElwain et al. 2016b). Despite such wide-ranging g op : g max ratios across species within each biome, no statistical difference between overall biome-level g op : g max ratios was observed.

Fig. 2 Scatterplots showing the scaling relationship between species-averaged operational stomatal conductance (g op) and maximum theoretical stomatal conductance (g max) of C3 woody angiosperms for biomes (A), habitats (B), and plant growth habits (C). Lines corresponding to the legend color are the fitted reduced major axis regressions. The dashed line is the 1:1 relationship (refer to table 3 for the regression equations and P values). Only in C is there a significant difference in slope between shrub and tree; all other comparisons in A and B show no significant difference in slopes (P < 0.05).
Habitat Groups
This pattern of consistency in the g op : g max ratio was also noted in two habitat groups: open canopy and understory subcanopy. Considering the different environmental conditions experienced by plants in these two habitats, including the lower PAR and VPD values exhibited in the understory-subcanopy habitat relative to the open-canopy habitat, as well as the lower g op demonstrated by the understory-subcanopy plants (Murray et al. 2019), the consistency in the g op : g max ratio between these two habitats is noteworthy. It is striking that such consistency emerged from this study despite high environment-driven species variability in each habitat, and it further supports the theory that plants operate at an ideal g op : g max ratio.
Plant Growth Habit
The consistency of the g op : g max ratio was also demonstrated between tree and shrub growth habits. Previous studies have investigated in total around 20 different species comprising different growth habits, including herbaceous plants, woody shrubs, and trees (table 3). It is not clear from these studies, however, whether growth habit had any influence on the g op : g max ratio. This study of 33 shrub and 42 tree species determined that growth habit does not appear to have any influence on the overall g op : g max ratio. This once again reinforces our finding of a consistent macrolevel g op : g max ratio.
Stomatal Morphological Traits
In the cool higher-latitude biomes of the boreal forest and the temperate rain forest, stomatal pore size influences g max to the greatest extent (fig. 3B). On the other hand, in the warmer biomes of the tropical rain forest and the tropical seasonal forest, this is not the case, and stomatal density is most influential (fig. 3A). The much larger pore size observed in the boreal forest may reflect greater overall genome size in the boreal biome taxa than in the other biomes, as guard cell size frequently scales with genome size (Beaulieu et al. 2008). Our results may reflect the pressures that climate exerts on leaf stomatal development in each biome. For instance, in the hotter biomes, which require greater evaporative cooling, this is clearly attained via higher D and smaller stomata (fig. 3): smaller stomata have been observed to respond more rapidly to environmental stimuli (Drake et al. 2013). Our results from the tropical rain forest corroborate findings in Eucalyptus globulus, in which higher rates of gas exchange were achieved by a greater density of small stomata. The opposite is true for the most northern latitude biomes, where fewer, larger stomata ensure high g max to exploit the short window of opportunity for carbon gain experienced in the boreal forest.

Fig. 3 Scatterplots of theoretical maximum stomatal conductance (g max) versus stomatal density (D; A) and maximum stomatal pore area (pa max; B) for biomes. Lines corresponding to the legend color are the fitted reduced major axis regressions. In both A and B, there are no significant differences in relationships between the boreal forest and the temperate rain forest (D: P = 0.84; pa max: P = 0.56) or between the tropical rain forest and the tropical seasonal (moist) forest (D: P = 0.11; pa max: P = 0.99).
Species-Level Variability in the g_op:g_max Ratio

In competition and in association with neighboring species, plants can optimize physiological processes, such as stomatal conductance, toward proper growth, development, and reproduction; this results in their occupying a particular niche space (Sterck et al. 2011; McElwain et al. 2016b). This might account for the diversity of species-specific g_op:g_max ratios that we find within each biome investigated here. While a single-species experiment in the "natural" environment may yield a low g_op:g_max ratio, such a monocultural ecosystem may function very differently from the truly natural environment of very mixed vegetation types in unmanaged forests. From our results, such ecosystems yield widely diverging species g_op:g_max ratios, which may also be constantly changing in dynamic response to environmental fluxes. The minimum g_op:g_max ratio we observed in our study was 0.08 (Neea buxifolia) in the tropical seasonal (moist) forest, and the highest value was 0.57 (Sambucus racemosa) in the boreal forest. Despite wide species-level variability, however, at the biome level, the average g_op:g_max ratio is highly consistent across all four biomes investigated. The variety of stomatal density and size combinations among species appears to facilitate each species' g_max requirements in response to localized community composition and microenvironmental fluxes and, perhaps, enables the coexistence of diverse species (McElwain et al. 2016b), as in the tropical rain forest.
The g_op:g_max data presented here are a broad representation of C3 woody angiosperm species common within each biome (Murray et al. 2019). We set out to investigate the nature of the relationship between g_op and g_max in as many biome-representative species as possible within the limits of the study; however, a complete picture of g_op may not have been captured, since it was not possible to measure the diurnal course of g_op for every measured leaf. Nonetheless, despite these limits to our sampling and the wide interspecies variability in the relationship between g_op and g_max, there is consistency in the g_op:g_max ratio across the biomes, habitats, and growth habits presented here, providing an important new reference for studies at the biome, habitat, and growth habit levels of woody angiosperm species of unknown g_op:g_max ratio in the natural environment. A potential future study might incorporate relative abundance data to quantify a community-weighted g_op:g_max ratio to further understand whether there is any departure from the g_op:g_max ratio observed so far.
Conclusion
Until now, there were few reference points for the relationship between g_op and g_max and no studies in natural ecosystems. This study, using the variance protocol (McElwain et al. 2016b; Murray et al. 2019), presents in one data set the g_op:g_max ratios of 74 woody angiosperm species in their natural habitats across four biomes. We have shown compelling evidence for consistency in the ratio between physiological g_op and anatomical g_max among biome-representative woody angiosperms at the levels of biome, habitat, and plant growth habit. This new data set provides a valuable contemporary calibration reference for woody angiosperms in vegetation-climate and paleoclimate models. For paleobotanists striving to understand plant macroevolutionary patterns and paleoecophysiological function from measurable fossil traits (Franks et al. 2014; McElwain et al. 2016a) where no modern equivalents exist, our results now offer a valuable reference for the g_op:g_max ratio at the biome, habitat, and plant growth habit levels for woody eudicots. In such cases, a best estimate of the g_op:g_max ratio is a sound starting point for building paleoclimate proxies and for further understanding plants' role in mediating climate, past and present. In their chapter on the capture of CO2 by leaves and stomata, Williams et al. (2004), while conceding a large degree of uncertainty, suggested that species-level differences, though great, may not ultimately be important given the observed conformity in g_max response at the plant functional type (PFT) level. We argue the same for the relationship between g_op and g_max: although nearly the full breadth of disparity exists among species, at the levels of growth habit, habitat, and biome the relationship is consistent.
Studies on Tryptophan Metabolites in Patients with Major Monopolar Depression
Plasma levels of tryptophan (TRP) metabolites were compared between healthy volunteers and patients with major monopolar depression across ages and genders, using ultrahigh-speed liquid chromatography/mass spectrometry for the analysis. There are significant gender and age differences in the TRP metabolites of healthy volunteers: at the upper stream of the metabolic pathway, metabolite levels are higher in young women and old men, whereas at the lower stream they are higher in young men and old women. These differences disappear in the plasma of patients with major monopolar depression, except for kynurenine (KYN). Daily variation of blood serotonin (5-HT) levels showed that in depressive patients 5-HT levels were low in the morning and increased toward evening, whereas blood 5-HT levels were higher in healthy people than in depressive people in the morning and decreased toward evening. In summary, the significant age and gender differences in plasma tryptophan metabolites seen in healthy volunteers disappear in patients with major monopolar depression, and blood levels of 5-HT were higher in healthy people than in depressive patients.
Introduction
Recently, it has been shown that responses to both placebo and antidepressants have increased [1]. Kirsch has claimed that pharmaceutical companies did not include mildly and moderately depressed patients in efficacy trials after finding that these patients did not benefit beyond placebo [2]. He consistently insists that antidepressants are no more effective than placebos in moderately depressed patients [3]. Drug-placebo differences are considered to be small in efficacy trials, and most of the response to antidepressants seems due to expectancy [3].

Major depressive disorder is one of the most common psychiatric disorders and is burdensome and costly worldwide in adults. Although both pharmacological and non-pharmacological treatments are available, antidepressants are used more frequently than psychological interventions because of inadequate resources.

In a meta-analysis, all antidepressants were shown to be more efficacious than placebo in adults with major depressive disorder. Smaller differences between active drugs were found when placebo-controlled trials were included in the analysis [4].

Serotonin (5-HT) has been indicated to be involved in the etiology of depression [5]. The roles of the various metabolites of the kynurenine (KYN) pathway have been reviewed elsewhere [6], so we do not discuss them in detail.

To examine the relationship between serotonin levels and depression, we analyzed plasma levels of TRP metabolites in patients with depression.

Although the concentration of 5-HT has been considered to be low in depressive patients [7], 5-HT concentrations in the brains of suicide victims were not low [8]. Therefore, it is not known whether 5-HT concentration is decreased in the brains of depressive patients.

We now report age and gender differences of various TRP metabolites in patients with major monopolar depression and in healthy volunteers.
Patients
Outpatients with depression were recruited for this study. Fasting blood samples were taken early in the morning. Severity of depression was assessed with the Clinical Global Impression-Severity scale (CGI-S), the SRS, and the Hamilton Depression Rating Scale (HDRS). The history of prescribed drugs such as antidepressants, anxiolytics, mood stabilizers, and other drugs was recorded.

The sample comprised 55 patients (15 male, 40 female; average age, 45.4 ± 11.9 years), of whom 38 had major depressive disorder (MDD) and 17 had bipolar disorder (BD). Further characteristics of the patients are described below.

Plasma factors were measured after plasma was separated from blood by centrifugation (3000 rpm at 4°C). Ethylenediaminetetraacetic acid (EDTA) was used as the anticoagulant.
The simultaneous measurement of TRP metabolites in plasma

An ultrahigh-speed liquid chromatography/mass spectrometry system was used for the assay. Although the detailed methodology has been described elsewhere [5-9], an important improvement of the assay method is described here.
Reagents and instrumentation
The simultaneous analytical method developed can be adapted to the major metabolites of TRP, including melatonin, in clinical samples.

Metabolite analysis was performed on a liquid chromatograph-tandem mass spectrometer, an LCMS-8060 quadrupole mass spectrometer combined with a Nexera X2 liquid chromatograph system (Shimadzu Corporation, Kyoto, Japan).

The targets are separated by reversed-phase chromatography using a C18 analytical column, L-Column ODS2 (2.1 mm × 150 mm; CERI, Tokyo, Japan), with gradient elution. The mobile phases were 0.1% formic acid solution and acetonitrile, with the gradient running at 5% acetonitrile for 3 min, then 5-95% over 6 min, followed by 5% for 3 min, at a total flow rate of 0.4 mL/min. The column temperature was 40°C. Electrospray ionization (ESI) was used, mostly in positive mode, with multiple reaction monitoring (MRM) detection.

The flow rates of the nebulizer gas and the drying gas were 2 L/min and 10 L/min, respectively. The temperature of the desolvation line (heated capillary tube) was 250°C. The ESI interface was used at 400°C with 10 L/min of heating gas flow. Each MRM transition was optimized using the corresponding standard solution. The optimized results are shown in Table 1.
All stock solutions of 1 mg/mL had been stored at −80°C, and standard samples for the calibration curve were prepared as a mixture solution prior to use, taking into consideration the measurement concentration range of each metabolite.
Analysis of human plasma
An aliquot of 50 μL of human plasma was used for each sample analysis. The procedure, including deproteinization, is shown in Figure 1.

TRP metabolic pathways are shown in Figure 1. The metabolites were measured by ultrahigh-speed liquid chromatography/mass spectrometry.

One-way ANOVA was used to evaluate statistical significance; a, b, c, and d indicate the values for the young and old men and women groups, and Tukey's test was used as the post hoc test. Table 2 shows that there are significant gender and age differences in the plasma levels of TRP metabolites of healthy volunteers. Generally speaking, plasma levels of 5-hydroxyindoleacetic acid (5-HIAA), indole-3-acetic acid (IAA), KYN, and anthranilic acid (AA) are higher in young women and old men than in young men and old women, whereas plasma levels of xanthurenic acid (XA) and 3-hydroxykynurenine (3HK) are higher in young men and old women than in young women and old men.

Table 3 shows that, in contrast to the healthy group, the age and gender differences disappeared in major monopolar depression (MMD), except for KYN.
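As an illustration of the statistical design just described (one-way ANOVA with Tukey's post hoc test across the four age/sex groups), the following Python sketch reproduces the workflow; the file and column names are hypothetical assumptions, not the study's actual data layout.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical layout: one row per subject; 'group' encodes the four
# strata (young men, old men, young women, old women).
df = pd.read_csv("plasma_trp_metabolites.csv")   # columns: group, KYN, ...

samples = [g["KYN"].to_numpy() for _, g in df.groupby("group")]
f_stat, p_val = stats.f_oneway(*samples)         # one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

if p_val < 0.05:                                  # Tukey's HSD post hoc test
    print(pairwise_tukeyhsd(df["KYN"], df["group"]))
```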
Discussion of part 1
The availability of endogenous 5-HT as a neurotransmitter is crucial in many physiological processes. Serotonergic neurons in the central nervous system are involved in regulating behavioral states and physiological processes, including arousal, sleep, appetite, pain, hormone release, and mood. Dysfunction of 5-HT neurons may lead to depression and other mental disorders.

Much scientific research has been done to clarify the roles of 5-HT in the pathophysiology of depression.

Since pathological changes have been investigated in the brains and cerebrospinal fluid of suicides, it has been claimed that 5-HT neurotransmission is implicated in the causes of suicide [13,14]. Low levels of 5-HIAA in cerebrospinal fluid were found in suicide attempters with depression [15]. Although the brainstem of suicide attempters had less 5-HT and 5-HIAA, most postmortem studies report no differences in cortical 5-HT or 5-HIAA of suicides [16].

Furthermore, patients with MDD have been reported to have higher 5-HIAA in jugular venous blood [17], which has been argued to reflect higher brain 5-HT neurotransmission and turnover [18].

Thus, the role of 5-HT in depression remains confusing. We simultaneously analyzed plasma levels of TRP metabolites in healthy people and patients with MMD. As shown in Tables 2 and 3, the significant age and gender differences disappear in patients with MMD.

It is difficult to speculate on the reasons for such changes in MMD; probably, hormonal changes are implicated.

These results suggest that much attention has to be paid to age and gender when analyzing TRP metabolites, especially 5-HT and 5-HIAA.

Statistical differences in TRP metabolites between MMD or BD and healthy people will be reported elsewhere.
The diurnal variation of 5-HT in the blood of patients with depression

As stated above, serotonin (5-HT) plays a role in the state of depression, since selective inhibitors of 5-HT uptake and blockers of 5-HT1A receptors are effective in its treatment [19,20].

There is some evidence indicating that the regulation of circadian rhythms is disturbed in patients with affective disorders [21].

We have shown that plasma levels of 5-HT are very low in patients with depression, whereas the levels of 5-HIAA and KYN do not differ from those of control persons, suggesting that 5-HT is immediately converted to 5-HIAA in patients with depression [9]. Owing to the presence of the 5-HT transporter in platelet membranes, most 5-HT in the blood is believed to be stored in platelets [22].

We have also shown that whole-blood 5-HT concentration changes markedly throughout the daytime, with maximum values in the evening and lowest values in the morning, whereas its metabolite 5-HIAA follows the contrary pattern [23].

We therefore wanted to measure 5-HT levels in the blood of patients with depression and controls.

We examined whole-blood 5-HT levels at five time points in depressive patients of Hamamatsu University Hospital and in control volunteers. There were 18 depressive patients and 30 volunteers.

Patients were in depressive states, as confirmed by a mean score of 18.7 (range 12-24) on the 24-item Hamilton Depression Rating Scale [24]. None of them had been administered any drug, except for small doses of benzodiazepines, for at least 10 days before blood was taken.

Blood levels of 5-HT were measured using HPLC as described by Anderson et al. [25]. Analytical recovery was 85% (SD 4.5%, CV 5.6%). Amount and response were linearly related.

In the depression group, the lowest value was observed at 8:30, and the level progressively increased until 14:30.
Discussion of part 2
Platelet 5-HT content is most likely regulated by the platelet transport activity. Variations of 5-HT uptake in depressed patients have been reported by several groups [26-28].

Seasonal changes of serotonin (5-HT) uptake in blood platelets from depressed patients and normal controls were studied over a 2-year period to determine whether seasonal variations were present [26]. A measure of the number of 5-HT uptake sites in normal controls and depressed patients was significantly higher in fall and winter than in spring and summer. The number of 5-HT uptake sites in the depressed patients was lower than in normal controls throughout the year. Normal controls showed the lowest numbers in April and June; a similar trend was present in the depressed patients, but their lowest values were found in the month of December.

Blood levels of melatonin, 5-HT, cortisol, and prolactin, together with serotonin uptake by platelets, were measured from 08:00 to 08:00 hours of the following day in healthy men aged 27 to 35 years [27]. The active transport of 5-HT by platelets was shown to be significantly correlated with melatonin blood levels. This finding suggests either a direct effect of melatonin on 5-HT active transport or an influence of the suprachiasmatic nucleus on serotonin uptake by platelets.

Depressive disorders are considered to be associated with various neurobiological alterations, such as hyperactivity of the hypothalamic-pituitary-adrenal axis, altered neuroplasticity, and altered circadian rhythms. Unfortunately, the causal connections between depressive disorders and disturbed circadian rhythms have not been completely clarified. Chronobiological therapy is based on these disturbed processes. For the treatment of the circadian symptoms, various scientifically tested chronotherapeutics, such as light therapy and sleep deprivation, are available, with differing effectiveness and evidence. Successful treatment of depression also frequently leads to an improvement in the altered circadian rhythm.

Further studies of the circadian variation of the 5-HT system may help us to understand the control of the serotonergic nervous system and the treatment of depression.
Non-line-of-sight imaging with arbitrary illumination and detection pattern
Non-line-of-sight (NLOS) imaging aims at reconstructing targets obscured from the direct line of sight. Existing NLOS imaging algorithms require dense measurements at regular grid points in a large area of the relay surface, which severely hinders their applicability to variable relay scenarios in practical applications such as robotic vision, autonomous driving, rescue operations and remote sensing. In this work, we propose a Bayesian framework for NLOS imaging without specific requirements on the spatial pattern of illumination and detection points. By introducing virtual confocal signals, we design a confocal complemented signal-object collaborative regularization (CC-SOCR) algorithm for high-quality reconstructions. Our approach is capable of reconstructing both the albedo and surface normal of the hidden objects with fine details under general relay settings. Moreover, with a regular relay surface, coarse rather than dense measurements are enough for our approach, such that the acquisition time can be reduced significantly. As demonstrated in multiple experiments, the proposed framework substantially extends the application range of NLOS imaging.
Introduction
The technique of imaging objects out of the direct line of sight has attracted increasing attention in recent years. A typical non-line-of-sight (NLOS) imaging scenario is looking around a corner with a relay surface, where the target is obscured from the vision of the observer. NLOS imaging aims to recover the albedo and surface normal of the hidden targets from the measured photon information. Potential applications of NLOS imaging include, but are not limited to, robotic vision, autonomous driving, rescue operations, remote sensing, and medical imaging.

To achieve NLOS reconstruction, laser pulses of high temporal resolution are used to illuminate several points on the relay surface, where the first diffuse reflection occurs.

After that, photons enter the NLOS domain and are bounced back to the visible surface again by the unknown targets. The hidden targets can be reconstructed from the time-resolved photon intensity measured at several detection points on the visible surface.

The imaging system is confocal if the illumination point coincides with the detection point for each spatial measurement, and non-confocal otherwise. Besides, we call the measurements regular if the illumination and detection points are uniformly distributed in a rectangular region.
According to how the hidden surface is represented, existing imaging algorithms are divided into three categories: point-cloud-based 28, mesh-based 29, and voxel-based methods 1,8,9,30-35. Among these categories, voxel-based algorithms prove to be the most efficient, with low time complexity 32 and fine reconstruction results 34. For voxel-based methods, the reconstruction domain is discretized with three-dimensional grid points and the albedo is represented as a grid function.
The first voxel-based NLOS reconstruction method is the back-projection algorithm proposed by Velten et al. 1. The measured photon intensity is modeled as a linear operator applied to the albedo, and the targets are reconstructed by applying the adjoint operator to the measured data. Further improvements of the back-projection method include rendering approaches for fast implementations 2,16. Despite these breakthroughs, two major obstacles of existing methods toward practical applications are the need for a large relay surface and dense measurements.
When there are limitations on the shape and size of the relay surface, these algorithms may fail due to the lack of data. Besides, dense measurement results in a long acquisition time, which poses a significant challenge for applications such as autonomous driving, where the observer may move at high speed.
In this work, we propose a Bayesian framework for NLOS reconstruction that is not limited by the spatial pattern of illumination and detection points. By introducing the virtual confocal signal at rectangular grid points, we design joint regularizations for the measured signal, virtual confocal signal and the hidden target. We put forward a confocal complemented signal-object collaborative regularization (CC-SOCR) framework, which reconstructs both the albedo and surface normal of the hidden target.
The proposed method works quite well under the most general setting, allowing regular and irregular measurement patterns in both confocal and non-confocal scenarios.
Besides, our approach provides sparse reconstructions of the targets with clear boundaries and negligible background noise, even in cases with very coarse and noisy measurements. Notably, the proposed method suggests a paradigm shift, liberating NLOS imaging research from its heavy reliance, ever since the technique was first proposed, on the assumption of a large, regularly shaped, contiguous relay surface (a wall or the ground). To the best of our knowledge, this work demonstrates high-quality NLOS reconstruction for the first time in scenarios where the relay surfaces have discrete scattering regions, irregular shapes, or very limited size, enabling hidden-object reconstruction with far more types of realistic relay surfaces, such as window shutters, window frames, and fences, which significantly broadens the scope of NLOS imaging applications. As shown in Fig. 1, the illumination and detection patterns are irregular but manifest in ubiquitous scenes of daily life. Reconstruction results of the bunny with synthetic confocal signals 38 detected at the entire relay surface and in these four scenarios are provided in Supplementary Figures 1-5. Besides, our method can significantly reduce the acquisition time and accelerate the imaging process by using sparse measurements in the conventional scenario of a large relay surface.
Results
The NLOS physical model. The goal of NLOS imaging is to take a collection of measured transient data and find the target that comes closest to fitting these signals. In this work, we adopt the physical model proposed in SOCR 34. Let x_l and x_d be the illumination and detection points on the visible surface; we call (x_l, x_d) an active measurement pair, or simply a pair, in the following. The photon intensity measured at time t is given by

τ(x_l, x_d, t) = ∫_Ω [ρ(x) ⟨n(x), (x_l − x)/|x_l − x|⟩ ⟨n(x), (x_d − x)/|x_d − x|⟩ / (|x_l − x|² |x_d − x|²)] δ(t − (|x_l − x| + |x_d − x|)/c) dx,   (1)

in which Ω is the three-dimensional reconstruction domain, ρ(x) denotes the albedo value of the point x, and n(x) is the unit surface normal at x that points toward the visible surface. The unit vector n(x) can be arbitrarily chosen for points with zero albedo value. By denoting u(x) = ρ(x) n(x), equation (1) can be written equivalently as equation (2). Noting that the intensity is linear in u, the physical model can be written as τ = Au in the discrete form. The albedo ρ and surface normal n can be obtained directly from u. The ideal signal is considered to be generated with the ideal nonlinear physical model, and the simulated signal is generated using equation (1) and the hidden target. The measured signal is inevitably corrupted with noise, which is considered a certain deterioration of the ideal signal. The degradation is related to detection efficiency and background noise, whose distribution is hard to estimate and may vary from one scenario to another. To tackle this problem, we introduce the approximated signal, which serves as a better approximation of the ideal signal than the measured signal.
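To make the forward model concrete, here is a minimal Python sketch of the discretized transient for one measurement pair. It keeps only the time-of-flight and inverse-square terms and omits the surface-normal foreshortening factors, so it is a simplified illustration rather than the paper's exact implementation.

```python
import numpy as np

C = 3e8  # speed of light (m s^-1)

def simulate_transient(albedo, voxels, x_l, x_d, n_bins, dt):
    """Transient histogram for one (illumination, detection) pair.

    albedo: (N,) per-voxel albedo; voxels: (N, 3) voxel centres (m);
    x_l, x_d: (3,) illumination/detection points on the relay surface.
    """
    r_l = np.linalg.norm(voxels - x_l, axis=1)    # wall -> voxel distance
    r_d = np.linalg.norm(voxels - x_d, axis=1)    # voxel -> wall distance
    t_idx = ((r_l + r_d) / C / dt).astype(int)    # time-of-flight bin
    weights = albedo / (r_l**2 * r_d**2)          # inverse-square falloff
    tau = np.zeros(n_bins)
    valid = t_idx < n_bins
    np.add.at(tau, t_idx[valid], weights[valid])  # accumulate per bin
    return tau
```

Stacking such histograms over all pairs yields the linear operator A of the discrete model τ = Au.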
When the number of measurement pairs is small, the solution to the reconstruction problem may not be unique due to the lack of data. To overcome the rank deficiency of the measurement matrix, we introduce a virtual confocal signal at regular focal points.

Suppose that the reconstruction domain is discretized with voxels in the depth, horizontal, and vertical directions. The collection of virtual confocal measurement pairs is taken at the orthogonal projections of the voxels onto a virtual planar surface perpendicular to the depth direction. The corresponding ideal, simulated, and approximated signals for these pairs are defined analogously.

A Bayesian framework. We treat the reconstructed target, the measured signal, and the approximated signals as random vectors and formulate the imaging task as an optimization problem using Bayesian inference. The target and the signals are obtained simultaneously by maximizing the joint posterior probability.
Three assumptions are made to formulate this as a concrete optimization problem.

Fig. 2. The proposed CC-SOCR method. (a) The CC-SOCR framework. For high-quality reconstructions, the measured signal, approximated signal, and virtual confocal signal are treated as random variables and solved simultaneously using Bayesian inference. The first regularization term includes the sparseness of the approximated signal, as well as the sparseness and non-local self-similarity priors of the target. The second term corresponds to an empirical Wiener filter, in which the simulated signal of the target serves as the pilot estimate. The third term contains the sparseness of the virtual confocal signal, as well as the joint sparse representation of the local structures of the simulated signal and the virtual confocal signal. (b) The approximated signals and the reconstructed target for the instance of the statue (confocal, measured data). The measured data are provided in the Stanford dataset 8. We assume the relay surface to be the region consisting of the four letters 'N', 'L', 'O', and 'S'. The measured signal, approximated signal, virtual confocal signal, and the reconstructed albedo are shown at the bottom.
Firstly, the conditional distribution of the measured signal, given the target and the approximated signals, is specified by equation (4), whose regularization term is related to the joint prior distribution of the target and the two signals. With this assumption, the target does not provide additional information for predicting the measured signal once the approximated signal is known.

Secondly, the joint prior distribution of the target and the approximated signal is specified by equation (5), whose regularization term describes the prior distribution of the target and the approximated signal. With this regularization term, we search for the target only in the set of real-world objects. Besides, the approximated signal is less noisy than the measured data and is closer to the ideal signal of a certain real-world target, which helps to enhance the reconstruction quality.

Thirdly, the conditional distribution of the virtual confocal signal, given the target and the approximated signal, is specified by equation (6), in which the subsets of the approximated signals that share the same measurement pairs appear, and whose regularization term is related to the joint prior distribution of the target and the virtual signal.
With these assumptions, we derive a concrete optimization problem using the Bayesian formula.
in which the third equality follows from equation (4) and the last equality holds with equations (4), (5), and (6). By designing appropriate regularization terms, we obtain high-quality reconstructions of the targets even in scenarios with highly incomplete measurements (Table 1). To bring existing methods into comparison, we interpolate the signal with the nearest-neighbor method 8,35, which generates better results than zero padding 32 in extreme cases (see Supplementary Figure 24).
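The nearest-neighbor interpolation used to feed irregular measurements into grid-based baseline methods can be sketched as follows; the array shapes and grid sizes are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

def to_regular_grid(points, hists, nx=64, ny=64):
    """Resample transients at irregular scan points onto a regular grid.

    points: (M, 2) scan positions on the relay wall; hists: (M, T) transients.
    Returns an (ny, nx, T) volume filled by nearest-neighbour interpolation.
    """
    gx, gy = np.meshgrid(
        np.linspace(points[:, 0].min(), points[:, 0].max(), nx),
        np.linspace(points[:, 1].min(), points[:, 1].max(), ny),
    )
    T = hists.shape[1]
    out = np.empty((ny, nx, T))
    for t in range(T):  # fill each time slice from its nearest measured point
        out[..., t] = griddata(points, hists[:, t], (gx, gy), method="nearest")
    return out
```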
Results on synthetic data. Instead of using an entire planar visible surface, we assume the relay to be a square box, which simulates the scenario of the four edges of a window.

The hidden object is a regular quadrangular pyramid. Albedo values that are less than 0.25 are thresholded to zero. The LOG-BP method fails to locate the target correctly and contains misleading artifacts near the boundary; the reconstruction error of our method is 2.86%, which is one order of magnitude smaller than that of the LOG-BP reconstruction (21.75%).
Results on measured data. For confocal experiments, we use the instance of a statue in the Stanford dataset 8 to test the performance of the proposed method.
Discussion
We have proposed a novel framework toward the most general setting of NLOS imaging. In this section, we discuss its relationship with the original SOCR method, the complexity of the algorithm, and possible directions for further improvements.

Other types of virtual signals. In CC-SOCR, virtual confocal signals observed at planar rectangular grid points are used to complement the reconstruction process in the case of incomplete measurements. It is also possible to consider virtual non-confocal signals for stronger regularization. Besides, virtual confocal signals at several planes may be introduced to better exploit spatial correlation. However, the time and memory complexities will also increase.
Virtual confocal signals at coarse grids. In the CC-SOCR method, the time complexity is still dominated by the virtual signal introduced, even when the number of measurement pairs is small. To accelerate the reconstruction process, the virtual signal may be evaluated at coarser grids: if the virtual confocal signal is considered at fewer points, the time complexity reduces accordingly. Reconstructions with virtual confocal signals of different sizes are compared in the Supplementary Materials.
Materials and methods
The joint regularizations. In equation (7), we formulated the CC-SOCR framework as an optimization problem. Here we show how the three regularization terms are designed.
The first term describes the prior distribution of the reconstructed target and the approximated signal of the measurement pairs. For the reconstructed target, we consider the sparsity and non-local self-similarity priors and directly follow the SOCR method 34,39,40. We also use the zero norm to impose sparseness on the approximated signal. The term is set as in equation (8), in which the weighting coefficients are fixed parameters and the block-matching operator collects, for each reference block of the 3D albedo, its most similar blocks; the summation is made over all possible blocks. Two orthogonal matrices capture the local structure and the non-local correlations of each 3D albedo block and produce its matrix of transform coefficients. The zero norm denotes the number of nonzero values of a tensor.
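Zero-norm penalties of this kind are typically handled by hard thresholding of the transform coefficients, which is the proximal operator of the l0 penalty. The sketch below shows that elementary step; it is a generic illustration, not the paper's full solver.

```python
import numpy as np

def hard_threshold(coeffs, lam):
    """Proximal operator of lam * ||x||_0: zero out every coefficient whose
    squared magnitude does not exceed 2 * lam, keep the rest unchanged."""
    out = coeffs.copy()
    out[coeffs**2 <= 2.0 * lam] = 0.0
    return out
```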
For the second term, we also follow the original SOCR method and set it as in equation (9), in which the weight is a fixed parameter and a patch-extracting operator collects local patches indexed over the signal. Noting that the signals may not be measured at regular grid points, the patch-extracting operator applies only along the temporal direction of the signals.
The patches of the measured signal are transformed with the matrix of the discrete cosine transform. Letting the matrix of transform coefficients of each patch be given, the regularization term is given by equation (10), in which two fixed parameters control the weight of the simulated signal and the sparsity of the representation, respectively.

The third term is given by equation (11), in which the collections of transform-domain coefficients appear together with an identity matrix of appropriate order. The solution to the optimization problem is provided in Supplementary Note 2.
Data availability
The Zaragoza dataset is available from the Zaragoza NLOS synthetic dataset repository. Synthetic data for the instance of the pyramid are attached to the code.
Code availability
The code will be made freely available in the future.
Supplementary Note 1 Additional experimental results
For all experiments, we interpolate the signals with the nearest-neighbor method where necessary to bring the F-K 1, LCT 2, D-LCT 3, PF 4, and SOCR 5 methods into comparison. The coordinates of the focal points for all experiments are provided with the code. Supplementary Figures 1-5 compare the reconstruction results of the bunny under different relay settings with the synthetic confocal signal provided in the Zaragoza dataset 6. These results indicate the capability of the proposed CC-SOCR method to provide clear reconstructions of the hidden targets, even in cases with highly irregular relay settings (see Supplementary Figures 4 and 5).
Supplementary Figure 23 shows the reconstruction results of the statue with virtual confocal signals of different sizes introduced. The confocal signal is measured at 200 randomly distributed focal points in a square region of 2 × 2 m². The reconstruction quality decreases as the virtual confocal signal becomes sparser, which indicates the necessity of the dense virtual signal introduced. However, sparser virtual signals result in shorter execution times (see Supplementary Tables 4-7), showing the trade-off between reconstruction quality and computation runtime.
Supplementary Figure 24 compares the F-K, LCT, D-LCT, and SOCR reconstructions of the statue with the confocal signal measured at a heart-shaped region consisting of 258 focal points. The signal is preprocessed with zero padding and nearest-neighbor interpolation techniques. It is shown that existing methods fail in this extreme case. Supplementary Figures 25 and 26 show reconstruction results of the statue with confocal signals measured at the letters 'N', 'L', 'O', and 'S' and at a heart-shaped region. The least-squares reconstruction without regularization is of poor quality. When the sparsity and non-local self-similarity priors of the target are introduced, the quality of the reconstruction improves but is still blurry or contains artifacts. The CC-SOCR method reconstructs the target faithfully.
The CC-SOCR algorithm
The proposed CC-SOCR optimization problem for NLOS reconstruction is written with the following notation: a patch-generating operator extracts local patches, indexed over the signal; a block-matching operator groups, for each reference block, its most similar blocks; and the matrix of the discrete cosine transform, with its associated filters, maps each patch to its transform-domain coefficients.

The three collections of transform-domain coefficients correspond to the three regularization terms, and the subsets of the approximated signals that share the same measurement pairs are treated jointly. The sizes of the local patches of the albedo, the maximum number of neighbors kept in the block-matching process, and the patch sizes of the virtual confocal signal in the horizontal, vertical, and temporal directions are fixed parameters, and the transforms are built from orthogonal matrices. In order to solve this problem with a convergence guarantee, it suffices to generalize the data-driven tight-frame image denoising algorithm 7 to three dimensions and apply it with the corresponding regularization parameter.

For sub-problem (2.1), if the measurement pair of the measured signal does not appear in the virtual confocal signal, the solution is obtained by aggregating the filtered patches back into the signal, using the collection of Wiener coefficients in the frequency domain computed over the set of indices of the spatial patches. The reconstructed target is updated by solving sub-problem (2.3). In this sub-problem, one coupling term is omitted; otherwise, the problem would be non-linear and difficult to solve. The remaining problem contains a regularization term and can be solved efficiently with the split Bregman method 8, in which the collections of transform-domain coefficients appear together with operators that aggregate the patch dataset or block dataset back into the signal and the albedo. Noting that the intermediate reconstruction is a three-dimensional volume that does not contain information about the surface normal, we use the technique introduced in SOCR 5 to construct a directional albedo with the surface normal it provides (see the supplement of SOCR 5 for more detail); here we abuse notation and also use the same symbol for this directional albedo. Minimizing the objective function (S.10) yields an unconstrained least-squares problem, which can be solved with the conjugate gradient method. We remark that this sub-problem is solved approximately due to the omitted term; nonetheless, extensive experimental results in Supplementary Note 1 indicate that high-quality reconstructions are obtained with these approximations. For sub-problem (2.5), if the measurement pair of the virtual confocal signal does not appear in the measured signal, the solution is given by equation (S.12); otherwise, it is given by equation (S.13), in which the simulated signal of the measurement pair and the corresponding entry of the approximated signal appear. Sub-problem (2.6) is of the same type as (1.6) and can be solved using the same method discussed above.
Execution time
Execution times of the CC-SOCR algorithm for the instance of the statue, with 200 randomly distributed confocal measurements and virtual confocal signals of different sizes, are shown in Supplementary Tables 4-7. The code was run on an AMD EPYC 7452 server with 64 cores. Sparser virtual confocal signals result in shorter execution times; however, the reconstruction quality decreases as the virtual confocal signal becomes sparser (see Supplementary Figure 23).
A Contemplating Approach for Hive and MapReduce for Efficient Big Data Implementation
— In the current scenario, data is growing exponentially, and data is accruing at the rate of petabytes. Big data describes the amount of data available over different media and over the wide communication medium of the internet. The term Big Data refers to the explosion in the quantity (and quality) of available and potentially relevant data. The amount of data is now so huge that it can no longer be handled by conventional database systems and data warehouses, because as the amount of data increases, its complexity increases with it. Multiple areas are involved in the production, generation, and implementation of Big Data, such as news media, social networking sites, business applications, the industrial community, and much more. Several parameters concern the handling of Big Data: efficient management, proper storage, availability, scalability, and processing. Thus, to handle this big data, new techniques, tools, and architectures are required. In the present paper, we discuss the different technologies available for the implementation and management of Big Data. This paper contemplates an approach of formal tools and techniques used to solve the major difficulties with Big Data. It evaluates stock-exchange data from different industries with respect to the covariance factor, shows the significance of the data through positive covariance results obtained using the Hive approach, and examines how efficient the Hive approach is in terms of HDFS and Hive queries. It also evaluates the covariance factors after applying the Hive and MapReduce approaches to a stock-exchange dataset of around 3500 records. After processing the data with the Hive approach, we conclude that Hive is better than MapReduce and Bigtable in terms of the storage and processing of Big Data.
I. INTRODUCTION
Big data is comparable to small data, but it is larger in terms of volume, variety, and velocity. Big data may be the next big thing in the IT space. Big data generates value from the storage and processing of very large quantities of digital information that cannot be processed by conventional database systems. The greater part of this information is delivered, stored, indexed, and handled over the internet, leading to a systematic growth in the size of data. This substantial amount of data present on the internet is referred to as "Big Data". Big data is characterized by the data quantity (volume), data speed (velocity), and differing types of data (variety).
Volume: Volume denotes the size of data over the web. Presently it is in petabytes and is expected to rise to zettabytes. Data from smartphones and from sensors embedded into everyday objects will soon lead to billions of new data records.

Velocity: Velocity covers the speed of input generation and data processing. Online gaming systems, for example, support millions of concurrent users, each producing multiple inputs per second [2].

Variety: Variety covers the type of input. Input can be structured (text), unstructured (data generated from social networking sites and sensors), or semi-structured (data from web pages, web logs, e-mail, etc.).

Two more characteristics have also been included: Veracity and Value.

Veracity: It means how closely the data corresponds to truth or facts.

Value: It covers processing the input and how the data can be combined with other data to extract meaningful information.
II. PROPOSED WORK
In the present paper, we propose distinctive tools and techniques which are used to overcome the common issues related to big data. The term Big Data analytics covers the tools, algorithms, and architectures that analyze and transform large and massive volumes of data [10]. Big data analytics is a technology-enabled method for giving an organization a competitive edge over others by analyzing market and customer trends. Analysis of real-time data and of online transactional data provides deeper insight into these trends, enabling timely and accurate decisions. For the computation of very large volumes of data [5], Big Data computing is concerned with the processing, transformation, management, and storage of data. Frameworks such as MapReduce, Hadoop, Grid Computing, and Bigtable [8] have made writing and executing ad hoc big data analytics and computation easy. Just as web search engines have changed information access, other forms of big data computing can and will change activities such as medical and scientific research, insurance enterprises, and so on. This paper focuses on the following technologies:
A. Hadoop
Hadoop is a large-scale batch data processing system. It was developed as a foundation for big data processing tasks such as scientific analysis, business and sales planning, and processing large volumes of sensor data, including data from Internet of Things sensors. Hadoop supports distributed cluster systems and parallel data processing, and serves as a platform for massively scalable applications. Facebook, Apple, Google, IBM, Twitter, and HP are well-known Hadoop users. Hadoop provides access to a file system called HDFS (Hadoop Distributed File System). The basic capabilities of Hadoop include packages such as Apache Flume, Apache HBase, Apache Hive, Apache Pig, Apache Oozie, and many more. Hadoop is beneficial in terms of cost-efficient, reliable, and scalable data processing. The different components of the Hadoop system are explained below [10].

Hive is a technique based on SQL; without it, we would have to use more conventional and sometimes laborious program coding to implement MapReduce programming. We use Hive to analyze the stock and big data set information; we can then apply complex, relational-calculus-based queries using the SQL capabilities of HiveQL, with the related information managed in a specific map-and-reduce mapping. This reduces development time and can administer joins between the datasets (e.g., stock information and industrial data). Hive also has its own servers, so we can submit our Hive queries from anywhere to the Hive server, which executes them. Hive SQL queries are converted into MapReduce jobs by the Hive compiler, sparing software engineers from solving this complex programming themselves and from the problems associated with big data and data organization. To apply this methodology, we use a dataset belonging to a stock exchange. The dataset has the following properties: the data is organized in a particular format; it allows joins to be evaluated to compute stock covariance; it can be organized into compositions of various types of joins; and, in its raw condition, the data size is extremely high. We used a Hive setup on Cloudera. This loads the dataset from the specified location into the Hive table 'STOCK' created above; the dataset is then stored in the Hive-controlled file system namespace on HDFS, so that it can be batch-processed further by MapReduce jobs or Hive queries.
Create the Hive table and calculate the covariance factor. We can compute the covariance for the given stock dataset for the input year using a Hive select query. From the covariance factor, the stock dataset suggests the following conclusions:

1. For stocks QRR and QTM, there is more positive covariance than negative covariance, so there is a high probability that these stocks will move the same way.

2. For stocks QRR and QXM, the covariance is mostly negative, so there is a more pronounced probability of the stock prices taking opposite courses.

3. For stocks QTM and QXM, the covariance is positive for most months, so these tend to move the same way in the majority of circumstances.

This analysis also addresses the two crucial objectives of big data technologies: (a) Storage: storing huge stock data in HDFS is the most deeply connected issue, and this arrangement is considerably more robust, resilient, scalable, and elastic. (b) Processing: since Hive compositions rely on a typical SQL model, we gain the advantage of running SQL queries on the large dataset and can process GBs or TBs of data with basic SQL queries.
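For readers without a Hive cluster, an equivalent of the covariance computation can be expressed in a few lines of Python; the file name, column names, and the monthly aggregation are illustrative assumptions about the CSV layout.

```python
import pandas as pd

# Hypothetical columns of the stock-exchange CSV: date, symbol, close
df = pd.read_csv("stock_exchange.csv", parse_dates=["date"])

# One column of closing prices per stock symbol, indexed by date
closes = df.pivot(index="date", columns="symbol", values="close")
monthly_returns = closes.resample("M").last().pct_change()

# Pairwise covariance of monthly returns for the three stocks discussed
print(monthly_returns[["QRR", "QTM", "QXM"]].cov())
```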
IV. CONCLUSION AND FUTURE SCOPE
We conclude that the MapReduce approach is limited to small-scale datasets and requires a larger amount of storage to hold the map-level and reduced datasets recursively, whereas the Hive approach we used to evaluate covariance among the considered dataset shows that the covariance between the QTM and QXM parameters is positive. Another factor is that the amount of storage used on HDFS is limited under the Hive approach, and processing is programmed with Hive SQL queries, which take the shortest execution time for petabyte-scale datasets. Proper and powerful analysis of large volumes of data can prompt faster advances in various scientific disciplines and enhance the profitability and success of many enterprises. The challenges include not only the sheer volume of data but also its non-uniformity, unclear structure, error handling, privacy, timeliness, security, integration, and visualization. These technical challenges are found across a huge variety of application areas and consequently carry a huge cost. Moreover, these challenges will require transformative solutions and an extensive range of tools, systems, and applications to manage. In order to achieve the promised benefits of big data, these issues must be taken into consideration so that maximum capability can be brought to bear to gain a competitive edge.
To extract the best benefit from Hadoop, in-depth analysis must be applied, and revolutionary tools and techniques must be developed to rigorously comprehend and properly respond to the numerous challenges.
Fig. 1. Characteristics of Big Data.

B. HDFS Architecture

HDFS stands for Hadoop Distributed File System. It is an essential component of Hadoop which is used to store huge datasets. The main task of HDFS is to distribute the data to various clusters of computers (machines); processing of this data is then carried out. The advantage of using HDFS is that it coordinates the work among machines, and if any one of them fails, Hadoop continues to operate by shifting the work from one machine to another without losing data or interrupting work [11].

C. MapReduce

MapReduce is a parallel programming framework that allows operations to be applied over large datasets. The main task of MapReduce is to divide the problem into smaller parts and then run those subparts in a parallel fashion. MapReduce consists of two functions: Map and Reduce. Map: this function generates key/value pairs and performs sorting and filtering of the data. Reduce: this function combines all the intermediate values and gives the output.
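A minimal in-process imitation of the Map and Reduce phases described above, using the classic word-count example (plain Python, not Hadoop's actual Java API):

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    """Map: emit a (key, value) pair for every word in the input line."""
    return [(word, 1) for word in line.split()]

def reduce_phase(pairs):
    """Reduce: group the intermediate values by key and combine them."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: sum(values) for key, values in grouped.items()}

lines = ["big data big", "data processing"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
print(reduce_phase(pairs))   # {'big': 2, 'data': 2, 'processing': 1}
```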
Fig. 6. Stock Exchange Dataset (.csv) file.

Issues related to MapReduce are solved with Hive:
Table: Use the 'create table' Hive command to create the Hive table for our considered CSV-format dataset:

hive> create table STOCK (trademark
High Endothelial Venules: A Vascular Perspective on Tertiary Lymphoid Structures in Cancer
High endothelial venules (HEVs) are specialized postcapillary venules composed of cuboidal blood endothelial cells that express high levels of sulfated sialomucins to bind L-selectin/CD62L on lymphocytes, thereby facilitating their transmigration from the blood into the lymph nodes (LN) and other secondary lymphoid organs (SLO). HEVs have also been identified in human and murine tumors, predominantly in CD3+ T cell-enriched areas with fewer CD20+ B-cell aggregates that are reminiscent of tertiary lymphoid-like structures (TLS). While HEV/TLS areas in human tumors are predominantly associated with increased survival, tumoral HEVs (TU-HEV) in mice have been shown to foster lymphocyte-enriched immune centers and boost an immune response in combination with different immunotherapies. Here, we discuss the current insight into TU-HEV formation, function, and regulation in tumors and elaborate on the functional implications, opportunities, and challenges of TU-HEV formation for cancer immunotherapy.
INTRODUCTION

Tumoral Angiogenesis and Immune Escape
Solid tumors are heterogeneous and complex cellular ecosystems in which cancer cells shape their microenvironment to their advantage by actively remodeling the local immune, vascular, and stromal compartments (1). Thus, tumors have also been considered "wounds that never heal" because they increasingly promote immunosuppression and neovascularization to sustain the rapid growth of cancer cells (2,3). Due to the anomalous proangiogenic signals, these tumors exhibit a continuously growing tumor vasculature with a chaotic composition of venules, postcapillary venules, arterioles, and capillaries. Consequently, angiogenic tumor vessels typically exhibit abnormal structural and functional characteristics of poor vessel maturation, leakiness, and staggered blood flow due to the elevated interstitial pressure (4-6) (Figure 1). With these vascular aberrations, hypoxic, acidic, and necrotic regions appear in tumors and induce an additional wave of proangiogenic signals, exacerbating disease because they support metastasis by enabling tumor cell intravasation into the bloodstream and obstruct adequate delivery of anticancer drugs (4,7). Importantly, as part of the wound repair program, angiogenic factors including vascular endothelial growth factor (VEGF) and angiopoietins also convey immunosuppressive signals. They reduce the expression of the ICAM1 and VCAM1 lymphocyte adhesion molecules in endothelial cells, which limits vascular adhesion of lymphocytes and subsequent infiltration into the tumor (8,9). Further, VEGF can directly inhibit dendritic cell (DC) maturation and activate antigen-specific regulatory T cells (8,9). Tumor-recruited innate immune cells, including macrophages, myeloid-derived suppressor cells (MDSC), and neutrophils, are an additional source of angiogenic and immunosuppressive factors that suppress immunosurveillance and promote vascular and matrix remodeling (Figure 1) (3,10). Thus, tumors employ multiple mechanisms of the tissue repair program to keep their environment in a favorable, immunosuppressive, and angiogenic state.
TLSs in Tumors
The in situ detection of tumor-infiltrating lymphocytes has been commonly used in the clinic because the degree of CD8 T cell infiltration often correlates with patient survival (11). Such histopathological studies revealed substantial lymphocyte aggregates in some tumors of patients who had a predominantly favorable outcome compared with those who did not. These structures display variably organized T- and B-cell aggregates, sometimes even a T cell-rich zone with mature DCs juxtaposing a B-cell follicle with germinal center characteristics. They are commonly located at the tumor interface or in areas adjacent to the tumor and entail blood and lymphatic vessels and other stromal cells that are commonly observed in secondary lymphoid organs (SLOs). Indeed, due to their resemblance to SLOs, these ectopic lymphoid-like structures have been coined tertiary lymphoid structures (TLS) and have been observed in the pathological contexts of chronic inflammatory and autoimmune diseases (12,13), including rheumatoid arthritis (14,15), autoimmune thyroiditis (16), inflammatory bowel disease (17,18), and H. pylori gastritis (19,20). The reader may refer to (21-23) for their detailed description. Under these conditions, TLSs are abnormal structures of an active immune response against self-antigens, promote autoimmune reactions, and subsequently aggravate the disease. Since TLSs in solid tumors are mostly associated with improved tumor response, it is conceivable that they are also sites of activated lymphocytes generating an immune response (22). This raises the question as to how lymphocytes can preferentially infiltrate these locations despite the presence of an overall immunosuppressive vascular environment.
TLSs, Like SLOs, Contain High Endothelial Venules
While histopathological studies have extensively characterized immune infiltrates and defined tumoral TLSs in human cancer over the last 30 years (22), less is known regarding the vascular components of tumoral TLSs. TLS vessels resemble those in lymph nodes and other SLOs. Lymphatic vessels (LV) have been identified around multiple TLSs and are recognized by typical lymphatic markers such as LYVE-1, PROX-1, and podoplanin (24). LVs return interstitial fluid (containing plasma proteins, lipids, etc.) that extravasates from blood capillary filtrates back into the blood circulation. They serve as the main route for dendritic cells, antigens, and inflammatory mediators into the lymph node (LN) and are essential players in peripheral tolerance, immunosurveillance, and the resolution of inflammation (25). Only about a decade ago, Martinet and colleagues made the first observations of unusual blood vessels in human solid cancer samples which resembled the high endothelial venules (HEV) of SLOs (26). HEVs are morphologically and functionally specialized blood vessels that deliver naïve lymphocytes from the bloodstream into the LN, in which lymphocytes become primed and educated by antigen-presenting cells (APC) (e.g., DCs) (Figure 1). Lymphocytes then exit through efferent LVs, which lead into the blood vascular system via the thoracic duct to circulate the cells through the body (27-31).
These observations beg the question as to whether HEVs and LVs in TLSs play comparable roles and are regulated similarly to those in LNs. In this review, we will focus on recent advances in HEV formation, function, and regulation in the tumoral context. From observations in human cancer, we will highlight studies of intratumoral HEVs in several mouse cancer models and describe the morphological and functional HEV alterations in premetastatic and metastatic LNs. Finally, we will discuss the functional implication, opportunities, and challenges of tumoral HEV formation for cancer immunotherapy.
HEVs Facilitate the Transmigration of Lymphocytes
The detailed migration process of lymphocytes across endothelial cells, including HEVs, has been thoroughly studied by intravital microscopy (48,49). This multistep event of lymphocyte tethering, rolling, sticking, and transmigration is tightly regulated by a coordinated interplay of adhesion molecules, integrins, and chemokines (37,45,48,50). Migration of naïve and central memory T cells, as well as naïve B cells, starts with the binding of L-selectin to 6-sulfo sialyl Lewis X on the HEV walls. This tethering interaction slows lymphocyte rolling and enables binding to the chemokines CCL19, CCL21, CXCL12, and CXCL13, which are presented on the luminal surface of HEVs, via the CCR7, CXCR4, and CXCR5 receptors (51-53).
The chemoattractant-chemoattractant receptor axes that predominantly govern the trafficking of lymphocytes into and out of LNs are CCL19/CCR7 and sphingosine-1-phosphate (S1P)/sphingosine-1-phosphate receptor 1 (S1PR1), respectively (30,54). Blood-borne lymphocytes downregulate S1PR1 and use CCR7 signaling to adhere to HEVs for transmigration. During their LN residency, recirculating lymphocytes reacquire S1PR1 and attenuate their sensitivity to chemokines. Eventually, lymphocytes exit the LN by entering the cortical or medullary lymphatics, a process that depends upon S1PR1 signaling. Upon entering the lymph, lymphocytes lose their polarity, downregulate their sensitivity to S1P due to its high concentration there, and upregulate their sensitivity to chemokines (55). However, many details of lymphocyte transmigration across endothelial barriers remain poorly understood.
The integrin lymphocyte function-associated antigen 1 (LFA-1/αLβ2) on lymphocytes interacts with the ICAM1 and ICAM2 adhesion molecules on the HEV surface, which leads to firm arrest and subsequent paracellular or transcellular lymphocyte transmigration into the LN parenchyma (56,57). Another notable characteristic of HEVs is their ability to form HEV pockets in which lymphocytes can be temporarily retained before their egress (56,58). Although their function remains obscure, it is tempting to speculate that these pockets serve as specific lymphocyte communication centers and/or form when an overflow of lymphocytes arrives.
HEV Regulation and Signaling in Lymph Nodes
The development of LNs is a well-organized event that involves crosstalk between hematopoietic lymphoid tissue inducer (LTi) cells and mesenchymal lymphoid-tissue organizer (LTo) cells (65). It is thought that HEVs develop concomitantly with the accumulation of LTi cells that form the lymphoid anlagen; however, the developmental ontogeny of HEVs in lymphoid organs, as well as the stepwise transcriptional program of HEV specification, has not yet been clearly identified (66). The most important signaling pathway directly linked to developmental LN-HEV formation and maintenance is the lymphotoxin (LT)/lymphotoxin-β receptor (LTβR) signaling pathway (67-69). LTβR is a member of the TNF receptor superfamily that binds LTα1β2 heterotrimers or LIGHT ("homologous to lymphotoxin, exhibits inducible expression and competes with HSV glycoprotein D for binding to herpesvirus entry mediator, a receptor expressed on T lymphocytes"), also known as TNFSF14 (tumor necrosis factor superfamily member 14). Although LTβR can activate both the canonical and non-canonical NF-κB pathways, the non-canonical axis appears to be preferentially activated, specifically through the NIK kinase and the RelB/p52 transcriptional complex (70). Deletion of LTβR in ECs impaired the formation of HEVs in LNs and consequently LN homeostasis (69).
More recently, the S1P/S1PR1 axis has also been proposed to regulate HEV integrity in an autocrine manner and to facilitate HEV-DC interactions in LNs (71), thus suggesting the involvement of alternative signaling pathways regulating LN-HEV maintenance.
HIGH ENDOTHELIAL VENULES IN HUMAN CANCER
Martinet and colleagues made the first formal observations of ectopic HEVs in human cancer samples (26). They observed MECA79+ vessels by immunohistochemistry in a subset of primary, treatment-naïve human melanoma and breast, ovarian, colon, and lung tumor sections. Using the additional human HEV-specific marker HECA-452 (72) and the human HEV-specific antibodies G72 and G152 (73), they further confirmed that these vessels phenotypically resembled LN-HEVs and thus termed them tumor HEVs (TU-HEV). Importantly, TU-HEVs were specifically located within lymphocyte-rich areas and frequently contained luminally attached or extravasating CD3+ cells. Indeed, the density of TU-HEVs in breast cancer was a predictor of CD3 T cell and B cell infiltration, suggesting that TU-HEVs, like their homologs in LNs, are major gateways for lymphocyte infiltration (26). Importantly, the density of TU-HEVs positively correlated with disease-free, metastasis-free, and overall survival rates in a retrospective cohort of primary breast cancer patients, suggesting their implication in the formation of immune-active TLS-like structures (74).
Although these studies defined a common TU-HEV phenotype by MECA79 positivity across the different human tumor types, they also described a more heterogeneous phenotype in comparison to that of LN-HEVs. For instance, in lung cancer, MECA79+ blood vessels were also shown to express high levels of MAdCAM-1 (78). Additionally, in human melanoma (90) and oral squamous cell carcinoma (85,88), the typical thick MECA79+ vasculature with cuboidal ECs coexists with thin-walled MECA79+ vessels displaying a flattened EC morphology and dilated lumens. It is conceivable that these observations reflect different degrees and stages of TU-HEV maturation, implying functional differences among intratumoral MECA79+ vessels. Indeed, plump TU-HEVs that are surrounded by substantial lymphocyte aggregates are thought to be more mature than isolated, flat TU-HEVs located at the periphery.
Since these observations are only correlative, however, it is still debated to what extent TU-HEVs are necessary to actively influence cancer progression in TLSs or TLS-like structures. Certainly, there are discrepancies between studies that are not only inherent to the tumor type considered but also dependent on the intratumoral heterogeneity of TU-HEVs and TLSs, respectively. For instance, TU-HEVs can be present in T cell- and DC-rich areas (74,91) as well as in B cell-rich areas (92,93). Moreover, TU-HEVs appear to be more frequent than TLSs in breast cancer (26,94) and melanoma (79,91). Thus, it appears that the presence of TU-HEVs does not always correlate with bona fide intratumoral TLSs under a "strict" definition but rather with a broader spectrum of TLS-like structures (23).
As the correlation of spontaneous TU-HEV and TLS formation with a positive outcome is preferentially observed in specific cancer types, one can envision that these treatment-naïve cancers have acquired an environment permissive for ectopic HEV formation. In line with this idea, "hot" tumors may be more prone to TU-HEV formation while "cold" tumors remain anergic (95).
This further raises the question as to whether cancer therapies, specifically those generating an immune-stimulating reaction, can instigate HEV and TLS formation. So far, only a few reports in breast (75,96) and colorectal (97) tumors have correlated the presence of tumoral TLSs/HEVs with a favorable response to combined radio- and chemotherapy (22). Given the plethora of ongoing clinical trials evaluating the effects of immune checkpoint inhibitors (ICI), it is of great interest to thoroughly evaluate TU-HEV/TLS formation and its correlation with patient response. In support of this, higher TLS density in tumors correlated with an improved response to ICIs and increased survival in melanoma and soft-tissue sarcoma patients (92,93,98). In summary, there is accumulating evidence from these clinical data that the formation of HEV-containing TLSs can be a marker of good prognosis, but whether TU-HEV formation is a prerequisite for instigating TLS formation and an antitumor response in human cancer remains obscure.
Spontaneous TU-HEV Formation
Why do some tumors spontaneously form HEVs while others do not? One clue comes from the observation that spontaneous HEV formation in tumors of mice was only observed when tumor cells expressed strong antigens, i.e., the commonly used OVA-antigen peptide in tumor cell lines or the viral oncoprotein simian virus SV40 large T-antigen to drive endogenous tumor formation in pancreatic islets (99,100). The presence of such antigens suggests that strongly antigenic tumors may have a more robust lymphocyte activity and, thus, be better poised to instigate TU-HEV formation.
So far, observations of spontaneous TU-HEVs in mice are rare and have only been reported in B16-OVA melanomas, LLC-OVA lung carcinomas, and Rip1Tag5 (RT5) pancreatic neuroendocrine premalignant lesions (99-101). In line with the requirement of a tumor antigen to elicit a robust immune response, expression of SV40 Tag in the beta cells of pancreatic islets of RT5 mice does not commence before 10-12 weeks of age, leading to the recognition of Tag as a non-self protein (102). In contrast, pancreatic beta cells of Rip1Tag2 (RT2) mice express Tag already during embryonic development, probably due to differences in the site of integration of the transgene, and these mice thus become tolerant to Tag (103). As a consequence, Tag expression in RT5 mice causes a severe immune response with intense infiltration of CD4 and CD8 T cells, B cells, and macrophages in hyperplastic RT5 islets, whereas islets of RT2 mice display a paucity of lymphocytes and do not become inflamed. This leads to the formation of immature MAdCAM-1+ HEVs in inflamed RT5 hyperplastic islets but not in non-inflamed RT2 hyperplastic islets, suggesting that immune cell infiltrates are required to initiate HEV formation, although these HEVs appear not to be fully developed (100). Similarly, the spontaneously formed TU-HEVs in B16-OVA melanoma and LLC-OVA exhibited much weaker PNAd positivity compared to LN-HEVs, likely reflecting an immature HEV phenotype similar to that observed in RT5 hyperplastic islets (99,100). These data also imply the necessity of reactive immune cells to enable HEV formation in tumors.
Immune Cells Regulate HEV Neogenesis in Tumors
The first evidence that hematopoietic cells can regulate LN-HEVs in adulthood comes from the study of Moussion and Girard (68). Depleting CD11c+ DCs in adult CD11c-DTR mice by administering diphtheria toxin (DTX) caused HEVs to degenerate and revert to a MAdCAM-1+ immature stage reminiscent of neonatal HEVs. Congruently, CD11c+ DCs are crucial for the switch from MAdCAM-1 to MECA79/PNAd expression during neonatal development of peripheral LNs (104). Consequently, due to the reduced ability of HEVs to recruit lymphocytes into the LN, LN size and cellularity were reduced (68).
Observations of DC-LAMP+ mature DCs in close proximity to TU-HEVs in human breast cancer and melanoma tissue led to the initial proposition that DCs may also regulate HEVs in cancer (74,105,106) (Figure 2). Nevertheless, most studies in mouse tumor models point to a more predominant role of lymphocytes. Spontaneous HEVs did not occur in B16-OVA tumors grown in Rag2-/- mice, which lack B and T lymphocytes, but appeared when Rag2-/- mice were reconstituted with CD8 T cells before tumor implantation (99). Similarly, CD3 and CD8 T cell depletion led to a reduction of TU-HEV frequency and lymphocyte infiltrates in the pancreatic Rip1Tag5 and methylcholanthrene-induced fibrosarcoma tumor models (107,108). The role of CD8 T cells as critical inducers of TU-HEV formation is further underscored by the observation that depletion of immunosuppressive CD4 regulatory T (Treg) cells renders tumors permissive to TU-HEV and TU-TLS neogenesis (108-110) (Figure 2). Noteworthy, FoxP3+ Treg cell depletion with DTX using the FoxP3-DTR system also disrupted the physiological LN-HEV network (108). DCs were, however, not required to form HEVs in Treg-depleted fibrosarcomas because HEVs were unaffected by DC depletion (108). Although CD11c is a marker traditionally associated with pan-DCs, CD11c expression often overlaps between macrophages and DCs in non-lymphoid tissues (111). Therefore, the depletion of CD11c+ cells in the aforementioned study may not have been restricted to intratumoral DCs. So far, it remains unknown whether Tregs suppress HEV neogenesis directly, by interacting with tumor endothelial cells, or indirectly, by inhibiting CD4 and CD8 lymphocytes and creating an immunosuppressive environment.
Although lymphocytes appear to be the main regulators of TU-HEV neogenesis, innate immune cells have also been proposed as potential candidates (107,112). In particular, CD68+ macrophages have been shown to facilitate TU-HEV formation in the Rip1Tag5 tumor model by producing the TNF receptor ligands TNFα and LTα (107). Moreover, in a Kras(G12D)-driven mouse model of lung cancer, depletion of Gr1+ neutrophils increased the intensity of MECA79 staining in CD31+ ECs, indicating that Gr1+ neutrophils are negative regulators of TU-HEVs (112) (Figure 2).
What, then, are the signaling pathways in ECs that instigate HEV formation in tumors? So far, it appears that the signaling cues and mechanisms involved in LN-HEV formation are also involved in tumoral HEV neogenesis. Several studies point to the lymphotoxin (LIGHT, LTα1β2)/LTβR pathway as the prevailing signaling cue in inducing TU-HEVs. Treatment with an LTβR agonist or with the LTβR ligand LIGHT, which had been targeted to the tumor vasculature by fusing it to a vascular zip code peptide, induced MECA79+ HEVs in various mouse tumor models, including those of breast cancer, neuroendocrine pancreatic tumors, and glioblastomas (107,113-115).
Important to note is that anti-angiogenic immunotherapy in the form of anti-VEGF plus anti-PDL1 induced the non-canonical LTβR pathway in ECs of breast and pancreatic endocrine tumors, which enabled HEV formation, enhanced lymphocyte infiltration, and prolonged survival of tumor-bearing mice (113). The addition of agonistic LTβR antibodies to anti-VEGF plus anti-PDL1 therapy, thus fully activating the LTβR signaling cues, further increased HEV numbers and maturation in breast and pancreatic cancer and sensitized glioblastoma to the therapy. Combination treatment with LTβR antagonists, however, reversed these effects (113,114) (Figure 2).
Further, TNFR1 stimulation via TNFα or LTα3 appears to account for spontaneous TU-HEV formation independently of LTβR. While LTβR-Ig blockade did not alter spontaneous HEVs in B16-OVA melanomas, HEVs were absent when these tumors were grown in TNFR1/2-/- mice or in Rag2-/- mice replenished with LTα-/- CD8 T cells (99). In a carcinogen-induced fibrosarcoma model, Treg depletion increased the numbers, proliferation, and activation of TNFα-producing intratumoral CD8+ T cells, which then induced the formation of intratumoral HEVs in a TNFR-dependent manner. Blockade of TNFR with TNFRII-Ig, anti-TNF antibodies, or anti-LTα treatment reduced TU-HEV areas specifically in Treg-depleted fibrosarcomas, while LTβR-Ig had no effect (108). Targeting an LTα fusion protein to the tumor site has been shown to be another strategy to successfully induce MECA79+ HEVs and lymphoid aggregates in the tumor microenvironment. In that study, electron microscopy confirmed HEV morphology in around 30% of the blood vessels. Moreover, the therapy was efficient in eradicating subcutaneous B78-D14 melanomas and their established pulmonary metastases (116). These observations are in line with a study of chronic inflammation in which transgenic expression of LTα under the control of a rat insulin promoter generated structures resembling lymph nodes in terms of cellular composition and HEV presence (117).
Another potential signaling molecule involved in HEV formation is IFNγ produced by NK cells and T cells, because it stimulates the expression of the CXCR3 ligands CXCL9 and CXCL10, the CCR7 ligand CCL21, and ICAM-1 in ECs, which together induce T cell recruitment and infiltration (118). Although IFNγ is not sufficient to directly induce HEVs (99), it may have supporting functions in instigating TU-HEVs by increasing lymphocyte influx. This may have important implications because the signaling pathways described above also induce vessel normalization. During this process, excessive immature tumor vessels become pruned, lymphocyte adhesion molecules increase, and pericytes align more closely to and stabilize the vasculature, leading to enhanced blood flow and T-cell infiltration. Vessel-targeted LIGHT normalized blood vessels in murine primary tumors and metastases (107,114,115,119), and antiangiogenic therapy, alone or in combination with checkpoint blockade, induced vessel normalization that was further boosted by activation of LTβR signaling using an agonistic LTβR antibody (113). In addition, a recent study has shown that genetic deletion of Myct1, a direct target gene of ETV2, was sufficient to normalize tumor vessels and induce TU-HEV formation in subcutaneous sarcoma, concomitant with antitumoral immunity. Myct1 deletion combined with immunotherapy increased long-term survival in an anti-PD1-refractory breast cancer model (120). Thus, although it remains obscure whether vessel normalization is a prerequisite for HEV formation, it is tempting to speculate that vessel normalization in tumors triggers enhanced lymphocyte infiltration, which in certain areas reaches a signaling threshold that could lead to HEV neogenesis.
What these studies also reveal is that the complex process of TU-HEV development likely involves multiple pathways and signals and requires further investigation. It is plausible that a process similar to the proposed two-step differentiation model of HEV formation in chronic inflammation takes place. According to this model, TNFR1 is required in the initial stages of chronic inflammation and induces flat MECA79+ blood vessels, whereas the LTβR pathway drives the additional maturation and acquisition of a fully mature HEV phenotype (121,122).
Do Tumoral HEVs Generate Specific Immune-Reactive Centers?
Naïve T cells are thought to become primed and activated by tumor antigen-presenting DCs, expand and differentiate in the tumor-draining lymph node, also referred to as sentinel LN, from which they home to the tumor site (123).
Interestingly, analyses of T cell clonality and homing indicate that TU-HEVs can facilitate the infiltration of naïve T cells into the tumor via the L-selectin/CD62L axis (99,116). T cell activation, therefore, not only occurs in the sentinel LN but may also take place at the tumor site (22,116,124). The recruitment of naïve T cells into the tumor, bypassing activation in the sentinel LN, may help to speed up and favor the generation of an in situ antitumoral response, but it also requires antigen presentation by DCs and other APCs for T cell activation (125). Congruently, TLSs have been shown to facilitate interactions between T cells and tumor-antigen-presenting CD11c+ DCs in a genetically engineered mouse model of lung adenocarcinoma. Staining of γ-tubulin (a marker of the microtubule-organizing center [MTOC]) depicted immunological synapses between DCs and CD8 T cells in the tumors, which in turn upregulated the early activation marker CD69 and became proliferative (109). The concept that naïve T cells may be educated within the tumor has also been observed in human tumors. Mature DC-LAMP+ DCs closely associated with CD3 T cells have been identified in juxtaposition to TU-HEVs in human breast cancer (74). Importantly, dense aggregates of MHC-II+ APCs and CD8 T cells have been identified in human renal cell carcinomas (RCC). These niches contain TCF1+ PD1+ stem-like CD8 T cells that undergo slow self-renewal and give rise to terminally differentiated CD8 T cells. The latter provide the proliferative burst and thereby foster the antitumoral immune response seen after anti-PD1 immunotherapy (126,127). Interestingly, these T cell-enriched nests appear to be active immune centers that closely resemble the extrafollicular regions of the lymph node and are quite distinct from the typical B cell-enriched TLSs identified in RCCs, which did not exhibit closely interacting DCs and T cells (126). Whether TU-HEVs are also an integral part of these APC niches remains to be investigated.
Besides therapeutically exploiting TU-HEVs as lymphocyte gateways, they also offer a "route" to deliver chemotherapeutic agents. One of the key features of pancreatic ductal adenocarcinoma (PDAC) is its dense and poorly vascularized microenvironment, which limits the penetration of drugs to the tumor site. TU-HEVs have been identified in the stroma of human PDAC implanted in a humanized mouse model (84). Targeting TU-HEVs with MECA79-Taxol nanoparticles improved the efficacy of Paclitaxel delivery to the tumor, resulting in tumor growth inhibition (84). Similarly, in preclinical models of breast as well as pancreatic tumors, an antibody (MHA112)-based strategy has been used to deliver a chemotherapeutic agent directly to tumors by targeting TU-HEVs (128). Given these results, combining HEV-inducing strategies with HEV-specific delivery of chemotherapeutic agents may represent a synergistic approach for future cancer therapy.
HEV ALTERATIONS IN SENTINEL LNS
LNs are critical for immune surveillance, providing a highly organized hub with optimal conditions for naïve lymphocytes to interact with APCs. In response to stimuli such as infection and inflammation, the draining LNs undergo considerable expansion, known as lymphadenopathy, to accommodate the increased need for lymphocyte priming. This process is characterized by increased blood flow and lymphocyte trafficking while lymphocyte exit via the lymphatics is temporarily blocked (129-131). These changes increase the probability of antigen presentation and ensure the initiation of appropriate antigen-specific immunity. LN expansion is orchestrated by transient remodeling of the LN vasculature. Upon inflammation, HEVs quickly expand by undergoing clonal proliferation of a putative progenitor cell and regress upon cessation of inflammation to return to their homeostatic stage (132). LN-HEV plasticity and remodeling upon inflammation are controlled by extensive reprogramming and have been comprehensively investigated at the transcriptional level (59).
Sentinel LNs are considered the major site at which antitumoral immunity is initiated, but they also represent a privileged site for cancer cell dissemination (133) (Figure 3).
Similar to inflamed LNs, sentinel LNs also undergo vascular remodeling (88,134-137). Sentinel LN-HEVs often show dramatic morphological changes, shifting from thick-walled blood vessels with a small lumen to a thin-walled vasculature with an enlarged lumen and abundant red blood cells (RBC). Moreover, HEVs of sentinel LNs can display a loss of PNAd/MECA79 expression in association with dysregulation of CCL21 in perivascular FRCs (134-136). Given the importance of PNAd and CCL21 in the recruitment of naïve T cells and the initiation of the adaptive response, the dysregulation of these components in sentinel LNs may indicate impaired LN functionality.
Of note, experiments in nude mice have shown that these dramatic changes occur only in tumor-reactive LNs and not in endotoxin-induced lymphadenopathy, indicating that the mechanism of vascular reorganization in sentinel LNs may differ from that of inflammation-reactive LNs. Importantly, these studies have also shown that T cells are not the major players in the vascular remodeling of sentinel LNs (135).
HEV abnormalities have been observed in sentinel LNs of breast cancer, melanoma, and squamous cell cancer patients (88,134-137). As these modifications occur before metastatic cancer cells are detectable in the sentinel LN (135,136), it is conceivable that tumor-emanating factors induce LN-HEV alterations to establish a pre-metastatic niche permissive for tumor cells. One could also speculate that enlarged HEV lumens engorged with RBCs could enhance oxygen and nutrient delivery for arriving cancer cells.
The majority of cancers invade the sentinel LN via lymphatic vessels before spreading to distant organs (138). Until recently, it was assumed that metastatic cancer cells would also leave the LN through the efferent lymphatic vessels, the LNs of higher echelons, and the thoracic duct (139) (Figure 3). However, two recent seminal intravital microscopy studies in mice have revealed that cancer cell dissemination can also occur through LN-HEVs. In the first study, murine 4T1 breast cancer cells infused intra-lymphatically into the subcapsular sinus of peripheral LNs migrated towards the LN center, localized around HEVs, transmigrated through them, and subsequently disseminated into the lungs. Importantly, lymphatic ligation did not compromise the capability of the cancer cells to colonize distal organs (140). Similar results were obtained in the second study, in which, using time-lapse multiphoton intravital microscopy, photo-converted metastatic cancer cells were first seen in the subcapsular sinus and later invaded the cortex of the LN, where they transmigrated into HEV+ vessels. Metastatic cancer cells were then eventually detected in the systemic blood circulation and in the lungs (141).
Overall, these experimental studies revealed that LN-HEVs serve as a gateway not only for lymphocyte trafficking into the LN but can also enable tumor cell intravasation into the bloodstream. Concomitantly, HEVs become flattened, dilated blood vessels that have lost their morphological and likely functional properties, alterations that may well be induced by the tumor. To this end, the implication of tumor-emanating factors in HEV remodeling in the premetastatic LN niche remains unknown (Figure 3). In addition, whether tumor cell dissemination in human LNs also occurs through HEVs remains to be clarified, but substantial LN-HEV remodeling preceding LN metastasis has also been shown in human breast cancer patients (136). These premetastatic LN alterations also provide an opportunity for identifying biomarkers of vascular changes in sentinel LNs that could be used to predict disease progression in human cancer (136).
CONCLUDING REMARKS AND FUTURE DIRECTIONS
Since achieving sufficient infiltration of effector T cells into malignant lesions is a major hurdle in anti-cancer immunotherapy (11,142), therapeutic induction of HEVs represents a compelling approach to boost the effective transmigration of lymphocytes into the tumor. This may increase the benefits of immune checkpoint blockade and improve cell-based immunotherapies using chimeric antigen receptor (CAR) T cells in solid tumors. An additional and specific advantage of therapeutic HEV induction may be the creation of immune-reactive niches that spur T cell activation and differentiation and replace exhausted and dysfunctional effector T cells.
Although these are tantalizing concepts, they also raise several questions about the tumor-specific ontogeny, regulation, and function of HEVs. Studies in mouse tumor models have provided the first insight into the cellular and molecular regulators of HEV formation and maintenance, partly resembling those of LN-HEVs and partly depicting disparities. The varying degrees of HEV morphology in tumors may also affect HEV functionality, as shown in sentinel LNs, raising concerns about the implication of HEVs in recruiting tolerance-promoting lymphocytes in tumors. Indeed, TLSs are correlated with a worse prognosis in some tumor types, including hepatocellular carcinomas, RCC lung metastases, and head and neck cancer, although the reasons are unknown (22,143,144).
An accumulation of Tregs has been observed in TLSs of a lung cancer mouse model (109). However, Treg depletion enhanced HEVs and improved the immune response in these tumors (109), as also observed in fibrosarcoma (108). Recent single-cell transcriptomic analyses of homeostatic and inflamed LNs (59,145) have provided a specific transcriptional signature of LN-HEVs that has shed some light on LN-HEV-specific signals (146). Comparing transcriptomics between LN-HEVs and TU-HEVs will be important to inform about general and tumor-specific HEV characteristics and functions. To this end, HEV development in LNs and in tumors remains obscure. When LNs become inflamed and enlarged, HEVs quickly expand in part by progenitor cell propagation, but by what means HEVs arise from tumor endothelial cells and expand is unknown. Such knowledge, however, will be crucial to therapeutically switch HEV formation on and off in a controlled manner in malignant lesions to avoid potential autoimmune reactions.
FIGURE 3 | Remodeling of LN-HEVs during cancer metastasis. (A) Metastasis is a stepwise process leading to the dissemination of cancer cells from the primary tumor towards preferential metastatic sites. Most commonly, metastatic cancer cells use the lymphatic system to exit the primary tumor, reach a proximal sentinel LN, circulate into adjacent LNs, and eventually drain from the thoracic duct into the systemic venous system, thus spreading towards metastatic sites (e.g., lungs). Metastatic tumor cells in the LN parenchyma can also directly intravasate into the bloodstream via LN-HEVs and disseminate towards metastatic sites. (B) This alternative route involves an important remodeling of the sentinel LN already at the pre-metastatic stage, in preparation for the arrival of cancer cells. Overall, the sentinel LN expands, as evidenced by (1) expanded T and B cell compartments in the paracortex and cortex, respectively, and (2) extensive lymphangiogenesis. Importantly, HEVs are remodeled: (1) HECs switch to a PNAd-low, flat phenotype with a thin-walled basal lamina and enlarged lumens filled with numerous RBCs, and (2) CCL21 expression is dysregulated in FRCs, suggesting impaired lymphocyte-recruiting functions of sentinel LN-HEVs. Whether tumor cells from the primary site secrete specific factors preparing this pre-metastatic niche prior to their circulation into the sentinel LN remains to be elucidated. Altogether, the expanded sentinel LN and remodeled HEVs allow the direct spreading of cancer cells from afferent lymphatics into the venous bloodstream during the metastatic stage. LN, lymph node; HEV, high endothelial venule; HEC, high endothelial cell; RBC, red blood cell; FRC, fibroblastic reticular cell.
Finally, in situ HEV model systems can help to dissect the cellular and molecular circuits controlling TU-HEV neogenesis. To date, however, HEVs cannot be cultured and maintained ex vivo, which makes mechanistic analyses difficult. Indeed, several attempts to culture purified HEV-ECs have failed due to a rapid loss of their unique features once plated as monolayers, suggesting the necessity of additional cell types, factors, and specific growth conditions (147-150). One attractive model system may be bona fide vascular organoids, which have been successfully generated from human ES cells and fully recapitulate the heterogeneity and functionality of vessels in vitro and in vivo upon transplantation (151-153). Other systems involve microfluidics (154) or EC reprogramming (155), which could serve as more relevant platforms to induce and maintain HEVs ex vivo.
High endothelial venules display a unique specialization of blood endothelial cells and due to their explicit interaction with lymphocytes, only arise in specific lymphoid organs during development. The fact that they can also ectopically develop in non-lymphoid organs during chronic (tumor) inflammation in the adult is again linked to their intimate relationship with lymphocytes, which may go far beyond mere lymphocyte transportation. Looking into the future, further investigations of TU-HEV blood vessels are timely to better comprehend their nature and functionality because enabling their therapeutic induction in tumors offers promising avenues, not only for immunotherapies, but also for other types of cancer treatment.
AUTHOR CONTRIBUTIONS
All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by grants from the Flemish government FWO (G0A0818N to GV and GB, G072021N to SG and GB) and the National Institute of Health NIH/NCI (R01CA201537 to GB).
The relationship between psychosocial distress and oral health status in patients with facial burns and mediation by oral health behaviour
Background: There is limited discussion on the influence of psychosocial factors on the oral health of patients with a facial burn injury. This report investigated the relationship between oral health and psychosocial distress in patients with facial burns and the role of oral health behaviour in mediating the relationship.
Methods: The data were part of a cross-sectional study that had systematically and randomly selected patients with > 10% total burn surface area from a burn centre in Pakistan. The oral health status (DMFT, CPI, OHI-S) and severity of facial disfigurement were assessed. Validated instruments in the Urdu language were self-administered, and information relating to oral health behaviour (brushing and dental visits), oral health-related quality of life (OHIP-14), satisfaction with appearance, self-esteem, anxiety and depression, resilience, and social support was collected. The statistical analyses included simple linear regression, Pearson correlation, t-test, and ANOVA. Mediation analysis was carried out to examine the indirect effect by oral health behaviour.
Results: From a total of 271 participants, the majority had moderate to severe facial disfigurement (89%), low self-esteem (74.5%), and moderate to high levels of social support (95%). The level of satisfaction with appearance was low, whereas anxiety and depression were high. Disfigurement and satisfaction with appearance were associated with lower self-esteem and social support (p < 0.05). Greater severity of disfigurement, higher levels of anxiety and dissatisfaction with appearance, and lower levels of self-esteem and social support were associated with greater DMFT and OHIP-14 scores, worse periodontal and oral hygiene conditions, and less frequent tooth brushing and dental visits (p < 0.05). The main barriers to oral healthcare utilization were psychological and social issues (p < 0.05). The indirect effect by oral health behaviour was not significant for anxiety but was significant for disfigurement, satisfaction with appearance, self-esteem, and social support.
Conclusion: There is an association between psychosocial factors and the oral health of patients with facial burns through a direct effect and mediation by oral health behaviour.
Introduction
Burn injury is a traumatic experience that leaves a victim with acute and chronic physical and psychological conditions [1,2]. The long-term physical complications include deformities, immobility and functional impairments of the affected area, and pain [1,3]. Post-traumatic psychological complications such as distress, depression, and anxiety [4,5] also affect health, function, and quality of life in patients with burn injuries [6].
In burn injuries involving the facial area, long-term complications may include effects on oral health. When a burn injury involves the lips and mouth, scar contracture may lead to distortion of the lips, microstomia, and narrowing of the mouth opening [7]. There may also be discomfort and pain as the scar stretches during oral functions, as well as reduced sensation and muscle control around the affected area. The combination of these factors can greatly impair daily activities such as speaking, eating, swallowing, and accessing the oral cavity. The latter impairment can compromise oral hygiene care as teeth cleaning becomes less comfortable and less efficient, thus increasing the risk of plaque accumulation and dental diseases. Further complications may include mouth sores due to drooling, teeth grinding, malocclusion, and temporomandibular joint disorder due to muscle incoordination [7-9].
Facial disfigurement also affects social interactions due to the unsightly appearance, difficulty in reading facial expressions, and unclear speech. This can cause psychological distress such as low self-esteem, anxiety, and depression [10]. Evidence has also linked these conditions to less frequent tooth brushing and fewer dental visits [11]. Poor oral health outcomes in individuals with facial burns have been linked to dental anxiety [12]. However, there is little discussion on the influence of psychosocial factors on oral health in these patients. Thus, the objective of the current report was to examine the relationship between psychosocial distress and oral health measures in patients with facial burns. In addition, the study assessed whether oral health behaviours mediated the above-mentioned relationships. There is a need to understand the conditions and mechanisms that affect the oral health of burn victims in order to develop intervention programs to rehabilitate and reintegrate them into society.
Materials and methods
The current report is part of a cross-sectional study that investigated the oral health status of patients with facial burns at the Burn Care Center of the Pakistan Institute of Medical Sciences, Islamabad, Pakistan. Apart from the psychosocial measures, which have not been reported before, the parameters used in the present report have been described earlier in [12]. The study protocol was reviewed and approved by the ethics committee of the institution (Reference no. F.1-1/2015/ERB/SZABMU). Systematic random sampling was used by selecting every second patient who attended the centre for follow-up. Patients with head and neck burns involving more than 10% of total body surface area who were able to eat by mouth were included. An extraoral examination was carried out to assess the severity of disfigurement using a single-item observer-rated disfigurement scale [13]. The scale ranges from 1 to 9 points, and participants were categorized as having minimal (1-3 points), moderate (4-6 points), or severe (7-9 points) disfigurement according to their score on this scale. An intra-oral examination was carried out by one qualified dentist to assess oral health status according to the World Health Organisation oral health survey methods, including the DMFT, Community Periodontal Index (CPI), and Oral Hygiene Index-Simplified (OHI-S) [14,15].
The participants completed self-administered, reliable, and validated instruments in the Urdu language to assess their oral health behaviours and psychosocial measures. The oral health behaviour measures included the frequency of daily tooth brushing (once, twice, or more) and dental check-up in the past year (yes, no) [16]. The barriers to utilization of oral health care services were assessed using an open-ended question: "Is there anything, such as cost, anxiety, location, illness, or other problems, that has kept you from going to the dentist?". Based on participants' responses, their main reason for not visiting a dentist was categorised as dental anxiety, social, distance, cost, or self-perceived; if participants listed multiple reasons, their first reason was classed as the main reason [12,17]. The Oral Health Impact Profile (OHIP-14) assesses oral health-related quality of life using 14 items measured on a 5-point Likert scale from "never" (0) to "very often" (4). The total score ranges from 0 to 56, and a lower score indicates a better oral health-related quality of life [18,19]. The Satisfaction With Appearance Scale (SWAP) is a 14-item instrument measuring self-perceived satisfaction with appearance and the sociobehavioural impact of burn scars. Participants rate each item on a 7-point Likert scale from 1 (strongly disagree) to 7 (strongly agree); a higher total score (range: 0-84) indicates greater dissatisfaction with the facial image [5,20]. The Hospital Anxiety and Depression Scale (HADS) assesses anxiety (7 items) and depression (7 items) on a scale of 0 (less frequently or equivalent) to 3 (more frequently or equivalent). The total score for each psychological condition ranges from 0 to 21, where a higher score indicates a worse condition; for each condition, a score greater than 7 suggests the presence of psychological morbidity [21,22]. The Rosenberg Self-Esteem Scale (RSES) is a 10-item instrument assessing self-worth and self-acceptance on a four-point scale (0-3) ranging from strongly agree to strongly disagree. The total score ranges from 0 to 30, and higher scores indicate a higher level of self-esteem; patients with a total score < 15 are considered as having low self-esteem [23,24]. The Multidimensional Scale of Perceived Social Support (MSPSS) is a 12-item instrument assessing perceived social support on a 7-point Likert scale ranging from very strongly disagree (1) to very strongly agree (7). The total score is divided by 12 to give a mean score ranging from 1 to 7, where a greater score corresponds to better perceived social support, re-categorised as low (1-2.9), moderate (3-5), and high (5.1-7) levels of support [25,26]. The Brief Resilience Scale (BRS) assesses resilience using a 6-item instrument with 5-point Likert scale responses ranging from "strongly disagree" (1) to "strongly agree" (5). The total score of all items is divided by 6 to give a mean score that indicates low (1) to high (5) resilience [27,28]. All data collection was carried out by FAC, who also underwent training from the burn specialists at the centre to measure disfigurement in a clinical setting.
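To make these scoring rules concrete, the sketch below implements the stated cut-offs in Python. The function names and example inputs are hypothetical, and reverse-scoring of negatively worded items (e.g., in the RSES) is assumed to have been applied to the inputs beforehand.

```python
# Minimal scoring sketch for the cut-offs described above; illustrative only.
# Thresholds follow the instrument descriptions cited in the text.

def hads_score(item_scores):              # HADS subscale: 7 items, each 0-3
    total = sum(item_scores)              # range 0-21
    return total, total > 7               # > 7 suggests psychological morbidity

def rses_score(item_scores):              # RSES: 10 items, each 0-3
    total = sum(item_scores)              # range 0-30
    return total, total < 15              # < 15 classed as low self-esteem

def mspss_category(item_scores):          # MSPSS: 12 items, each 1-7
    mean = sum(item_scores) / 12.0        # mean score 1-7
    if mean < 3.0:
        return mean, "low"
    return mean, "moderate" if mean <= 5.0 else "high"

def brs_score(item_scores):               # BRS: 6 items, each 1-5
    return sum(item_scores) / 6.0         # mean 1 (low) to 5 (high) resilience

if __name__ == "__main__":
    print(hads_score([2, 1, 3, 2, 1, 2, 2]))    # (13, True)
    print(rses_score([1] * 10))                 # (10, True)
    print(mspss_category([4] * 12))             # (4.0, 'moderate')
    print(brs_score([3, 4, 3, 3, 4, 3]))        # 3.33...
```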
Statistical analyses
Summary statistics were obtained for all the variables. Bivariate association analyses between the psychosocial factors and the oral health outcomes and behaviours were conducted using Pearson correlation, t-test, ANOVA with post hoc tests, and simple linear regression. Mediation analysis was carried out to examine the hypothesis that oral health behaviours mediate the influence of psychological factors on oral health. Instead of using their original form, the mediator and oral health outcome variables were aggregated using principal component analysis (PCA) without rotation to simplify the analysis and interpretation. The clinical and oral health-related quality of life measures were combined and re-defined as the oral health outcome (Eigenvalue 3.30, 82.5% of variance explained), in which a larger value indicates a worse oral health condition. Tooth brushing and dental visits were combined to form health behaviour (Eigenvalue 1.57, 78.5% of variance explained), in which a larger value indicates better health behaviour. For both aggregated parameters, the mean = 0 and SD = 1. Analyses were carried out to examine the indirect effect of each psychosocial factor (stressor) on the oral health outcome through health behaviour (psychosocial stressor → health behaviour → oral health outcome) based on Model 4 of the PROCESS macro v3.4.1 [29]. Assumption checks and diagnostics were performed for all analyses. For the normality assumption, graphical methods showed that the variables were approximately normally distributed, with skewness and kurtosis less than ± 2. The significance level was set at 5%, and analyses were conducted using IBM SPSS v26.0.
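To illustrate the mediation logic (psychosocial stressor → health behaviour → oral health outcome), the sketch below reproduces the core of a single-mediator model such as PROCESS Model 4, using two ordinary least-squares regressions and a percentile bootstrap for the indirect effect a × b. This is a minimal sketch with synthetic placeholder data and NumPy only; it is not the study dataset or the actual SPSS macro.

```python
# Single-mediator model (PROCESS Model 4 logic):
#   mediator = i1 + a * stressor
#   outcome  = i2 + c' * stressor + b * mediator
# Indirect effect = a * b, tested with a percentile bootstrap.
import numpy as np

def ols_coefs(X, y):
    """Least-squares coefficients for y ~ intercept + X (X is 2D)."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mediation(stressor, mediator, outcome, n_boot=5000, seed=0):
    a = ols_coefs(stressor[:, None], mediator)[1]                  # path a
    _, c_prime, b = ols_coefs(np.column_stack([stressor, mediator]), outcome)
    rng = np.random.default_rng(seed)
    n, boots = len(stressor), []
    for _ in range(n_boot):                                        # resample a*b
        idx = rng.integers(0, n, n)
        a_b = ols_coefs(stressor[idx][:, None], mediator[idx])[1]
        b_b = ols_coefs(np.column_stack([stressor[idx], mediator[idx]]),
                        outcome[idx])[2]
        boots.append(a_b * b_b)
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return {"indirect": a * b, "direct": c_prime, "boot_ci95": (lo, hi)}

# Synthetic demonstration data (placeholders, not the study data):
rng = np.random.default_rng(1)
stress = rng.normal(size=300)
behaviour = -0.5 * stress + rng.normal(size=300)   # stressor -> behaviour
outcome = 0.4 * stress - 0.6 * behaviour + rng.normal(size=300)
print(mediation(stress, behaviour, outcome))
```

A bootstrap confidence interval that excludes zero for the indirect effect a × b is the usual criterion for claiming mediation in this framework.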
Results
A total of 300 patients were invited to participate in the study; 20 declined and 9 incomplete responses were omitted, leaving N = 271 (90.3%) available for analysis. The sample characteristics and oral health status were described in an earlier report [12]. In summary, the majority of the sample were female (68.6%), under 35 years old (78.9%), unemployed (49.1%), and from the low-income group (65.7%), and had 6-12 years of schooling (64.9%). The mean DMFT and overall OHIP-14 scores were 11.0 (SD = 2.4) and 37.7 (SD = 8.5), respectively. Most participants had periodontal pockets ≥ 4 mm in at least one site (59%), had poor oral hygiene (66.1%), practised tooth brushing once a day (78%), and did not visit a dentist in the past year for a regular check-up (89%). Participants most commonly cited anxiety-related issues as the most important barrier to utilising oral health care services (46% of participants), followed by the cost of treatment (25%) and social issues (15.5%).
The summary statistics of the psychosocial measures are presented in Table 1. The majority of the participants had moderate to severe facial disfigurement (89%), low self-esteem (74.5%), and moderate to high levels of social support (95%). The high mean scores suggested high levels of anxiety, depression, and dissatisfaction with appearance among the participants. Correlation analysis showed that the severity of disfigurement was positively and strongly correlated with dissatisfaction with appearance and mildly with depression. The severity of disfigurement and satisfaction with appearance were both negatively and moderately correlated with self-esteem and social support (p < 0.05). Positive and moderate correlations were also found between anxiety and depression and self-esteem and social support (p < 0.05). The analysis showed significant associations between psychosocial factors and oral health. Increased severity of disfigurement, SWAP, and anxiety scores were associated with greater DMFT and OHIP-14 scores (p < 0.05) and positively correlated with the CPI and OHI-S indices (p < 0.05) (Table 2). Better self-esteem and social support (greater scores) were associated with lower DMFT and OHIP-14 scores (p < 0.001) and negatively and moderately correlated with the CPI and OHI-S indices (p < 0.001). The mean scores for disfigurement and SWAP were greater in those who brushed their teeth and visited the dentist less frequently (p < 0.001) and, correspondingly, the mean scores for self-esteem and social support were lower (p < 0.001) (Table 3). In participants with dental anxiety, the mean scores of disfigurement, SWAP, depression, self-esteem, and social support were significantly different from at least one other barrier to oral health care use (Table 3). Similarly, in those with social barriers, the mean scores of depression and self-esteem differed from those with distance and self-perceived barriers.
Resilience and depression were excluded from the mediation analysis because they were not related to oral health outcomes. The mediation analysis showed significant indirect effects of disfigurement, SWAP, self-esteem, and social support on oral health outcome, where the mediation by health behaviour explained 18%, 23%, 41%, and 34% of the relationship between the psychosocial factors and oral health outcome respectively (see Table 4). The indirect effect of anxiety through health behaviour was not significant.
Discussion
This study examined the relationship between psychosocial factors and oral health in patients with facial burns and whether oral health behaviours mediate the relationship. The results showed that poor oral health conditions and oral health-related quality of life are associated with greater severity of disfigurement, dissatisfaction with appearance, and anxiety, and with lower self-esteem and social support. They also showed an association between poor psychosocial status and oral health behaviour. These findings are consistent with previous reports in that an adverse psychological status is associated with a greater risk of developing oral diseases [30,31]. Nevertheless, the mechanism that explains this relationship is not as clear and direct as that for the influence of oral health behaviour, where poor personal and professional oral health care increases plaque accumulation and the risk of common oral diseases [32,33]. Except for depression, which is claimed to reduce the immune response in the development of periodontal disease [30], there is little discussion in the literature to explain a direct involvement of anxiety, dissatisfaction with appearance, self-esteem, and social support in the development of caries and periodontal diseases. It is more rational to assume that psychosocial factors influence oral health through oral health behaviour practices. Following that hypothesis, this study examined the data for evidence of mediation in the relationship between psychosocial factors and oral health outcomes. The results showed that the indirect effect of anxiety was not significant, but there were significant indirect effects of disfigurement, dissatisfaction with appearance, self-esteem, and social support on oral health status, mediated by oral health behaviour. Only one other study has investigated the role of oral health behaviour as a mediator, but the effect of maternal education level at birth on gingival bleeding was not significant [34]. Socio-demographic factors such as age, gender, and income had been considered and examined in the mediation analysis, but they did not meet the assumptions of the analysis and were thus excluded from the current report.
Besides the statistical evidence, there are also other rationales for the mediation model. As previously mentioned, physical changes such as those caused by deformity to the facial region can physically influence oral health practice. Many participants in the current study were observed to have severe facial and lip deformities, and experienced pain and discomfort during mouth opening. This may influence the attitude towards, and practice of, personal oral hygiene care. However, issues such as the extent to which disfigurement affects brushing efficiency, oral hygiene care practice, and/or adequacy of oral health literacy and skills of the participants are not clear from the present study and require further investigation.
The deformity may also influence the social behaviour of the participants, which indirectly affects dental treatment-seeking behaviour. Patients with facial burns are afraid to look at people, cover their faces because they feel apprehensive when they are stared at, avoid social environments, dislike going outdoors, and prefer to stay indoors for fear of societal stigma [35]. However, accessing oral health services requires them to travel and mix in a crowded environment, particularly when using public transportation, and puts them in undesirable and uncomfortable situations [36-38]. To avoid these, some participants may delay or cancel an appointment [39,40] and lose out on professional help. The current study supports this, as some participants listed stigma and embarrassment as a (social) barrier to oral health care utilization. There was a low level of self-esteem and a moderate level of support among the participants [41]; these factors were inversely associated with oral health outcomes, consistent with the earlier literature [42,43]. Adjusting to life adversities requires more than just innate capabilities such as coping skills, resilience, personality, and individual willpower; strong support from the people close to them is also greatly beneficial [44,45]. A typical Pakistani family is generally religious, family-oriented, and always willing to assist each other, particularly in matters involving health, finance, and moral issues [46]. The level of support also depends on the size of the social network, the closeness of family kinship, and the available resources. Because most participants are from less affluent backgrounds, find it difficult to return to work, and rely on others for financial and logistical support [47-49], they may also find dental treatment to be costly. Having better support and self-esteem is an advantage, as these can buffer the adverse effects of a burn injury.
The level of anxiety among the participants was moderate, and its effect on oral health was not mediated by oral health behaviour. The anxiety, measured using the HADS, is likely to reflect dental anxiety, which is described as shyness, nervousness, and fear of dental treatment, and was confirmed by the participants' responses to the question on the barriers to utilization of oral health care [12]. Previously, a systematic review found no evidence for a relationship between anxiety and dental caries or periodontal disease; a meta-analysis did, however, report an association between dental anxiety and caries severity, but not with periodontal disease or tooth loss [31].
The high depression levels among the participants are consistent with previous reports, but no association with oral health was found in the present study despite considerable evidence, including from systematic reviews [10,50-52]. This may be caused by the small variation in scores between the participants and/or the difficulty for participants to discriminate between the comorbid conditions of anxiety and depression [31,53]. Previous evidence has linked depression to health-risk behaviours such as increased consumption of carbohydrate-rich meals and snacks and less frequent dental attendance, brushing, and flossing [54-56]. Furthermore, taking antidepressant medication reduces saliva secretion and increases cortisol levels [56-58], which increases the risk of common oral diseases. Another factor, resilience, is reported to be protective against stress and to assist in post-burn life adjustment in burn patients [59,60]. However, it was not associated with oral health outcomes in the present study, possibly because the resilience instrument is not a reliable measure in patients with facial burns.
Interpretation of the findings should also consider the circumstances surrounding the participants, including those not captured in the data. A reflection on the data collection process revealed that most of the participants seemed uncomfortable, hesitant, and shy. They were anxious, emotional, and depressed when asked about the history and implications of the injury. Female participants often hesitated and had to be persuaded to respond to the SWAP and MSPSS instruments and to questions about the cause of their burn injuries; this was most noticeable in victims of chemical or acid burns. Some were reluctant to respond to the questionnaires in the presence of the individuals who accompanied them to the burn centre and cooperated only after they were separated, which raises questions about the reasons behind this. Many of them were also financially dependent on their non-affluent families. These observations suggest that there are deeper, complex psychosocial issues in this disadvantaged population that require further attention and investigation. These barriers to care must be lifted successfully [31] before oral health interventions can be pursued. Programs to improve social interaction skills could be implemented. Cognitive-behavioural therapy can help facially disfigured patients to overcome social isolation, stress, and anxiety problems [61,62]. Programs that train and educate burn victims to effectively anticipate and control their emotions and respond positively to the reactions of others, build self-confidence and esteem, and equip patients with methods and strategies to manage adverse situations are recommended before a patient is discharged from hospital [63]. The availability of such programs is, however, very limited in less developed countries. Further studies should explore the compatibility of such programs with local cultural and social issues before they are emulated in Pakistan. It is also recommended that burn specialists be made aware of the specific long- and short-term oral health issues in patients with facial burns and refer them to oral health professionals. Some limitations of this study have been discussed previously, including the limits of inference from the cross-sectional study design, the lack of reliability assessment because patients were unwilling to return for clinical reassessment, and recall bias [12]. Limitations relating to the reliability of the instruments are also addressed above. Interpretation is also limited because patients with facial burns who receive follow-up at an institution are not representative of the general population. Hence, the results should be interpreted with caution. The key strength of this study is its originality; this is the first study to investigate the impact of psychosocial distress on the oral health of patients with facial burns, and it highlights a niche and underserved population. It also provides potential evidence for the role of oral health behaviour as a mediator of the effect of psychosocial distress.
Conclusions
This study shows an association between psychosocial distress and poor oral health status and oral health-related quality of life in facial burn patients; furthermore, it shows that this relationship is mediated by oral health behaviours. Patients with facial burns should be supported in resolving their psychosocial issues to help them overcome the barriers to seeking professional dental help.
Not by Behaviour Alone: In Defence of Self-Reports and 'Finger Movements'
We agree that it is important to study behaviour in psychology but warn against putting behaviour on a scientific pedestal. We argue that this would be problematic for at least three reasons. First, behaviour should not be seen as disconnected from thoughts and feelings; moreover, quarantining different domains of responses does not help to explain human psychology comprehensively. Second, because behaviour hardly ever speaks for itself, it is essential to gather other responses from participants (including self-reports and "finger movement responses") to understand what their behaviour really means. Finally, and most importantly, we observe that the main response to the crisis in social psychology has consisted of calls to change our empirical practices. Here this call takes the form of arguing for studying one particular dependent variable: behaviour. Even though we agree that there is value in measuring behaviour, promoting such practices is not going to be a silver bullet that overcomes the key challenges that social psychology as a discipline is currently facing. To do that, a more fruitful avenue would be to focus on the theory that needs to underpin and inform that empirical work. Indeed, without a proper theoretical framework to guide the study of behaviour, developing a "science of behaviour" is in our view rather futile.
Dariusz Doliński (2018, this issue) addresses the important issue of social psychology's failure to study the behaviour of humans. He is right to observe that social psychologists claim to study real behaviour but rarely do. We not only fully agree with this point, but also appreciate the author's analysis and arguments about why it is important for researchers to try harder and do better when it comes to studying significant and meaningful forms of human behaviour.
In this commentary, however, we do not want to go over this ground again. Instead, we would like to use the available space to draw attention to a few other considerations that-in our view at least-need to be part of a broad debate about the nature of the dependent variables that social psychologists should investigate. In particular, in an attempt to extend this discussion, we would like to elaborate three points. First, it is important to be clear about what we mean when arguing for the importance of studying behaviour. Here debate often seems to centre on a firm distinction between behaviour and everything else (what we might call "non-behaviour"). In our view, we need to be careful because there are a range of contexts in which this distinction is more apparent than real and therefore proves unhelpful.
Second, while there are good reasons to be concerned that studies relying on self-reports and surveys far outnumber studies that examine actual behaviour, there are also potential problems associated with an aspiration to study behaviour exclusively. We should be mindful of these problems not least because studies that only examine behaviour do not necessarily produce more important, valid and true findings than studies that exclude behavioural variables.
Underpinning these two points is a final, larger, point. This is that while debate about the need to measure behaviour is important, there is another debate that we also need to engage in-a debate that should ideally precede this one. This relates to the question of what it is that social psychologists are trying to achieve in their research and what kind of independent as well as dependent variables they need to study in order to do this. Indeed, while we fully agree that it is time "to sound the alarm" when it comes to our failure to study human behaviour, we would also sound an alarm to draw attention to the neglect of theory when it comes to the necessary task of giving direction to various calls for change.
In what follows, we will unpack these points in greater detail.
The Distinction Between Behaviour and Non-Behaviour Is Often Unclear and Unhelpful
When it comes to defining social psychology many researchers in the field still endorse the definition put forward by Gordon Allport in 1954 (Allport, 1954). Here Allport famously defined social psychology as the "scientific study of how people's thoughts, feelings, and behaviours are influenced by the actual, imagined, or implied presence of others" (p. 1). This definition is interesting for a number of reasons. First, the very fact that three classes of dependent variables relating to thoughts, feelings, and behaviours are identified in the same breath suggests that Allport did not see them as disconnected and independent, but as complementary. Indeed, each appears to provide a part of the puzzle needed to understand how humans are influenced by others.
So while it is true that the three dependent variables are not always correlated (i.e., people may indeed feel and think one thing but do the opposite), this does not mean that we should just ignore feelings and thoughts because behaviour is the only thing that matters in the end. To ask the question of how what happens internally is distinct from what happens externally is also not particularly productive, for two reasons. The first is that human psychology cannot be neatly dissected into different components that are independently controlled by different motivations and drivers. The second is that even if such demarcation were clear, the bigger question is how these elements are woven together within human experience. And here, just as it is a mistake to psychologize and think that psychology is all that matters, so too it is a mistake to behaviouralize and think that behaviour is all that matters.
Behaviour on Its Own Is Often Uninformative
It is also the case that when Allport defined social psychology as the scientific study of thoughts, feelings, and behaviours, he did not seem to see any of the three classes of variables as more important than the other two. Indeed, even though everyone seems to be in awe when we can show that our manipulations affect actual behaviour, behavioural evidence does not necessarily provide the ultimate proof of the validity of our theory. This view stands in contrast to the way in which behaviour is often characterised in the debate about behaviour versus self-reports and finger movements, in which the reasoning often implies that behaviour is the ultimate dependent variable-the final and definite response that necessarily trumps both thoughts and feelings (see Baumeister, Vohs, & Funder, 2007; Kruglanski, 2017). Appealing as this position might be, it is important not to fall into the trap of arguing that emotions, beliefs and attitudes are unimportant and meaningless adjuncts to the study of 'real' behaviour. To do so risks recreating the psychological void and theoretical dead-end of radical behaviourism.
As behaviourism has shown us, one reason why we neglect cognition and emotion at our peril is that behaviour itself is never unambiguous, and so it is never clear from a person's behaviour exactly what it is that they are actually doing. Is writing this article an act of aggression, or of scholarship, or of self-expression? The significance of this point becomes very clear when we reflect on the lessons that have been handed down from classic studies in social psychology (e.g., see Jetten & Hornsey, 2017; Reicher & Haslam, 2017).
Turning first to the work of Stanley Milgram, as is well known, several variants of his 'Obedience to Authority' paradigm provided compelling evidence that a high proportion of participants would be willing to deliver ostensibly lethal shocks to a Learner who was performing poorly on a memory task. In this, we would all agree that the experiments captured some very compelling behaviour (i.e., participants were either willing to inflict harm on the Learner or they were not). Yet once one starts to engage closely with the studies, it turns out that the nature of this behaviour and its implications for our understanding of Milgram's research-and its broader relevance to society-are no longer that obvious (or even that compelling). This is seen in recent debate about precisely how the behaviour of Milgram's participants should be interpreted (e.g., see Brannigan, Nicholson, & Cherry, 2015; Haslam, Miller, Reicher, & Bettencourt, 2014). Should their willingness to administer shocks be seen as obedience to authority (as Milgram, 1974, argued); or as willingness to cooperate (Lutsky, 1995); or as engaged followership (Haslam & Reicher, 2017); or as a manifestation of trust (reflecting belief that the shocks were not real; Hollander & Turowetz, 2018)? Likewise, is unwillingness to continue administering shocks a sign of disobedience and willingness to challenge authority; or the result of identification with the victim; or a reflection of failure to trust the experimenter? The fact is, by looking only at behaviour, we can never answer such questions. More generally, this example makes the point that behaviour whose meaning might seem at first blush to be self-evident is rarely-in fact never-self-evident at all. We see this in the case of Milgram where, given the ambiguity about what participants were actually doing when they administered shocks, researchers have necessarily focused increasingly on how participants felt and thought about their experience (e.g., Haslam, Reicher, Millard, & McDonald, 2015; Hollander & Turowetz, 2018) and used an array of methods to try to tap into these emotions and cognitions (e.g., see Burger, Girgis, & Manning, 2011; Haslam, Reicher, & Birney, 2014; Slater et al., 2006). Importantly too, while this research has augmented our understanding of what is going on in the paradigm, most of these recent studies have explored behaviour which (for ethical reasons) is less obnoxious than that of Milgram's original research. But while this may make it less impressive and newsworthy, it is no less useful scientifically.
Furthermore, the fact that the meaning of behaviour in the Milgram paradigm is ambiguous has meant that efforts to build theory around the studies have been notoriously difficult. In particular, because it was not clear what participants were actually doing when they complied with his experimenter's requests, it is clear from Milgram's experimental notebooks that he struggled to find a good explanation for his findings (see Haslam & Reicher, 2017). This is also apparent from various writings in which he provides a wide range of theoretical explanations of his results-explanations that are at times inconsistent with one another. For example, in 1963, Milgram leant towards a dispositional explanation of participants' behaviour: "Obedience is the psychological mechanism that links individual action to political purpose. It is the dispositional cement that binds people to systems of authority" (Milgram, 1963, p. 371). Later, though, he favoured a situational explanation: "The disposition a person brings to the experiment is probably less important a cause of his/her behaviour than most readers assume. For the social psychology of this century reveals a major lesson: Often, it is not so much the kind of person one is as the kind of situation in which they find him or herself that determines how (s)he will act" (Milgram, 1974, p. 205). To this we might add that Milgram's own struggle itself reveals another major lesson: that the study of impactful behaviour does not on its own ensure theoretical progress. Indeed, in this regard, Blass (2004) makes the point that Milgram was actually handicapped as a theorist by the very power of the behaviour he had unleashed (see also Ross, 1988).
Much the same point also emerges from Asch's famous line judgement studies. Again, as is well known, Asch provided compelling behavioural evidence that a significant proportion of his participants would be happy to conform to a majority that made judgements of line length that were clearly wrong. But was this really conformity? Or was it instead an attempt to avoid embarrassment or an act of politeness (see Jetten & Hornsey, 2017)? Thankfully, to help answer such questions, Asch developed elaborate debriefing procedures to try to understand why his participants behaved as they did. Even though much of the richness of these self-reports-and of Asch's findings more generally-is ignored in most textbook accounts of the phenomena (Griggs, 2015; Swann & Jetten, 2017), it is clear that the development of Asch's theoretical analysis was primarily informed by attention to participants' self-reports during this debriefing (Asch, 1955, 1956). Moreover, the fact that he was more successful in this theoretical endeavour than Milgram can be attributed in no small part to this fact. Indeed, where Asch attended closely to the self-reports that he garnered, it was left for later researchers to do this in the case of Milgram's work (Haslam & Reicher, 2017; Hollander & Turowetz, 2017).
In the context of various points made by Doliński (2018, this issue), it is thus rather ironic that when it comes to understanding the psychological underpinnings of the findings of Milgram and Asch's research, this task is rendered much more difficult (and is perhaps impossible) without recourse to self-reports. The key point here, then, is that when it comes to understanding psychology, behaviour never speaks for itself. In particular, this is because behaviour is typically silent about the motivations, beliefs and feelings that drive human behaviour. And because they help to break this silence, finger movements and self-reports are an indispensable weapon in psychologists' arsenal (see also Swann & Jetten, 2017). Surely, the pen (or finger tap) is not always mightier than the sword, but it would be foolish to imagine that it never could be.
Behaviour Alone Cannot Inform Us About Psychological Process
Let us return one final time to Allport's definition of social psychology as the "scientific study of how people's thoughts, feelings, and behaviours are influenced by the actual, imagined, or implied presence of others", because this definition is relevant for one further reason. This relates to the fact that the focus of this definition is not so much on the different types of dependent measures that are included to gain insight into processes that are of interest to social psychologists (i.e., thoughts, feelings, and behaviours), but rather on determining how these outcomes are "influenced by the actual, imagined, or implied presence of others". That is, social psychology is a science that is first and foremost interested in theorising about social influence (Turner, 1991).
To achieve that goal, we therefore need to study observable outcomes of such influence. These outcomes include behaviour, but also quite reasonably encompass cognitions and emotions. Whatever outcome we focus on, though, we see that studying this is not a goal in and of itself, but stands in the service of helping us to develop better theories of this influence process. In the Milgram paradigm, for example, this means that we are not primarily interested in the fact that people are prepared to punish others, but in the fact that in doing so (or not doing so) they show how far they are prepared to go to enact the instructions of another person (the experimenter). What we are actually interested in here, then, is not the shocking per se but rather the obedience (or, depending on how one theorises this, the co-operation, the followership, the trust). In other words, we-like Milgram-are less interested in the behaviour per se than in the process that underlies it.
This suggests that the study of behaviour (or thoughts and feelings for that matter) can never be the sole focus of our research endeavours. Here too our focus should be on developing sound theorising about the process rather than privileging the behaviour itself (fascinating as this may be). To do otherwise is to put the cart before the horse.
This, then, is perhaps the most important point to get across in this commentary, and the one that has most relevance to the practice of contemporary social psychology. Here we applaud recent attempts to grapple with the 'replication crisis' and improve the way we conduct our research. In discussion of how we might do this, though, we see that energy has been focused much more on considering ways to improve our empirical work than on improving the theory that needs to underpin and inform that empirical work. This is unfortunate because for psychology to become the science of action we need first and foremost to have theories in place that help us to make sense of important aspects of human action and behaviour.
Again, the Milgram case is instructive here. For while, by most standards, Milgram did 'everything right' as an experimenter (his methods were reproducible, his findings were replicable, the behaviour he studied was impactful, his data were made publicly available), his limitations as a theorist would mean that if the whole of psychology looked like this it would be an impoverished discipline.
Concluding Comment
In making the above points our goal is not to question the validity of the eminently reasonable points that Doliński makes. Rather, it is to enrich his analysis by drawing attention to a broader debate that we need to be having when we reflect on the value of behavioural measures. In particular, we need to be aware that studying impactful behaviour can be a very useful way of shedding light on important psychological processes of social influence. But the fact that it is impactful does not necessarily make it useful in this regard. Moreover, it is equally legitimate, and can be just as useful, to study social influence in the realm of thoughts and feelings (accessed via self-reports or finger movements). Indeed, rather than quarantining the domains of cognition, emotion and action, there is much to be said for examining the complex interplay between these inter-related classes of outcomes. More importantly though, whatever it is that we study, we should avoid fetishizing the dependent variable above all else. For, if we do, we will end up with a science of impressive-looking bricks when what we really need is a solid house.
Funding
The authors have no funding to report.
Redefining the Axion Window
A major goal of axion searches is to reach inside the parameter space region of realistic axion models. Currently, the boundaries of this region depend on somewhat arbitrary criteria, and it would be desirable to specify them in terms of precise phenomenological requirements. We consider hadronic axion models and classify the representations $R_Q$ of the new heavy quarks $Q$. By requiring that $i)$ the $Q$ are sufficiently short-lived to avoid issues with long-lived strongly interacting relics, and $ii)$ no Landau poles are induced below the Planck scale, fifteen cases are selected, which define a phenomenologically preferred axion window bounded by a maximum (minimum) value of the axion-photon coupling about twice (four times) larger than commonly assumed. Allowing for more than one $R_Q$, larger couplings, as well as complete axion-photon decoupling, become possible.
Introduction. In spite of its indisputable success, the standard model (SM) is not completely satisfactory: it does not explain unquestionable experimental facts like dark matter (DM), neutrino masses, and the cosmological baryon asymmetry, and it contains fundamental parameters with highly unnatural values, like the Higgs potential term $\mu^2$, the first generation Yukawa couplings $h_{e,u,d}$, and the strong CP violating angle $|\theta| < 10^{-10}$. This last quantity is somewhat special: its value is stable with respect to higher order corrections [1] (unlike $\mu^2$) and (unlike $h_{e,u,d}$ [2]) it evades explanations based on environmental selection [3]. Thus, seeking explanations for the smallness of $\theta$ independently of other "small values" problems is theoretically motivated. Basically, only three types of solutions exist. The simplest possibility, a massless up quark, is now ruled out [4,5]. The so-called Nelson-Barr type of models [6,7] either require a high degree of fine tuning, often comparable to setting $|\theta| \lesssim 10^{-10}$ by hand, or rather elaborate theoretical structures [8]. The Peccei-Quinn (PQ) solution [9-12], although it is not completely free from issues [13-15], arguably stands on better theoretical grounds.
Setting aside theoretical considerations, the question whether the PQ solution is the correct one could be settled experimentally by detecting the axion. In order to focus axion searches, it is then very important to identify as well as possible the region of parameter space where realistic axion models live. The vast majority of search techniques are sensitive to the axion-photon coupling $g_{a\gamma\gamma}$, which is inversely proportional to the axion decay constant $f_a$. Since the axion mass $m_a$ has the same dependence, theoretical predictions and experimental exclusion limits can be conveniently presented in the $m_a$-$g_{a\gamma\gamma}$ plane. The commonly adopted axion band corresponds roughly to $g_{a\gamma\gamma} \sim m_a \,\alpha/(2\pi f_\pi m_\pi) \sim 10^{-10}\,(m_a/\mathrm{eV})\,\mathrm{GeV}^{-1}$, with a somewhat arbitrary width chosen to include representative models [16-18]. In this Letter we put forth a definition of a phenomenologically preferred axion window as the region encompassing hadronic axion models which i) do not contain cosmologically dangerous relics; ii) do not induce Landau poles (LP) below some scale $\Lambda_{LP}$ close to the Planck mass $m_P = 1.2\cdot 10^{19}$ GeV. While all the cases we consider belong to the KSVZ type of models [19,20], the resulting window encompasses also the DFSZ axion [21,22] and many of its variants [17].
Hadronic axion models. The basic ingredient of any renormalizable axion model is a global $U(1)_{PQ}$ symmetry. The associated Noether current $J^{PQ}_\mu$ must have a color anomaly and, although not required for solving the strong CP problem, in general it also has an electromagnetic anomaly,
$$\partial^\mu J^{PQ}_\mu = \frac{\alpha_s}{4\pi}\, N\, G^a_{\mu\nu}\tilde{G}^{a\mu\nu} + \frac{\alpha}{4\pi}\, E\, F_{\mu\nu}\tilde{F}^{\mu\nu},$$
where $G^a_{\mu\nu}$ ($F_{\mu\nu}$) is the color (electromagnetic) field strength tensor, $\tilde{G}^{a\mu\nu}$ ($\tilde{F}^{\mu\nu}$) $= \frac{1}{2}\epsilon^{\mu\nu\rho\sigma} G^a_{\rho\sigma}$ ($F_{\rho\sigma}$) its dual, and $N$ and $E$ the respective anomaly coefficients. In a generic axion model of KSVZ type [19,20] the anomaly is induced by pairs of heavy fermions $Q_L$, $Q_R$ which must transform non-trivially under $SU(3)_C$ and chirally under $U(1)_{PQ}$. Their mass arises from a Yukawa interaction with a SM singlet scalar $\Phi$ which develops a PQ-breaking vacuum expectation value. Thus their PQ charges $X_{L,R}$, normalized to $X(\Phi) = 1$, must satisfy $|X_L - X_R| = 1$. We denote by $R_Q$ the (vectorlike) representations of the SM gauge group; the anomaly coefficients $N$ and $E$ of Eqs. (2)-(3) are then obtained by summing over irreducible color representations (for generality we allow for the simultaneous presence of more $R_Q$). The color index is defined by $\mathrm{Tr}\, T^a_Q T^b_Q = T(C_Q)\,\delta^{ab}$, with $T_Q$ the generators in $C_Q$, and $Q_Q$ is the $U(1)_{em}$ charge. The scalar field $\Phi$ can be parametrized as $\Phi = \frac{1}{\sqrt{2}}\,\big[\rho(x) + V_a\big]\, e^{i\,a(x)/V_a}$. The mass of $\rho(x)$ is of order $V_a \gg (\sqrt{2}\,G_F)^{-1/2} = 247$ GeV, while a tiny mass for the axion $a(x)$ arises from non-perturbative QCD effects which explicitly break $U(1)_{PQ}$. The SM quarks $q = q_L, d_R, u_R$ do not contribute to the QCD anomaly, and thus their PQ charges can be set to zero. The renormalizable Lagrangian for a generic hadronic axion model can be written as $\mathcal{L} = \mathcal{L}_{SM} + \mathcal{L}_{PQ} + \mathcal{L}_{Qq}$, where $\mathcal{L}_{SM}$ is the SM Lagrangian and $\mathcal{L}_{PQ}$ contains the kinetic terms of $\Phi$ and of $Q = Q_L + Q_R$, the PQ Yukawa interaction $y_Q\,\overline{Q}_L Q_R \Phi + \mathrm{h.c.}$, and the new scalar terms, namely the $U(1)_{PQ}$-invariant potential for $\Phi$ including its coupling to the Higgs doublet. Finally, $\mathcal{L}_{Qq}$ contains possible renormalizable terms coupling $Q_{L,R}$ to SM quarks which can allow for $Q$ decays [23]. Note, however, that SM gauge invariance allows for $\mathcal{L}_{Qq} \neq 0$ only for a few specific $R_Q$.
PQ quality and heavy Q stability. The issue whether the $Q$ are exactly stable, metastable, or decay with safely short lifetimes is of central importance in our study, so let us discuss it in some detail. The gauge invariant kinetic terms in $\mathcal{L}_{PQ}$ feature a $U(1)^3$ global symmetry corresponding to independent rephasings of the $Q_{L,R}$ and $\Phi$ fields. The PQ Yukawa term ($y_Q \neq 0$) breaks $U(1)^3$ to $U(1)^2$. One factor is the anomalous $U(1)_{PQ}$; the other one is a non-anomalous $U(1)_Q$, that is, the Q-baryon number of the new quarks [19], under which $Q_{L,R} \to e^{i\beta} Q_{L,R}$ and $\Phi \to \Phi$. If $U(1)_Q$ were an exact symmetry, the new quarks would be absolutely stable. For the few $R_Q$ for which $\mathcal{L}_{Qq} \neq 0$ is allowed, $U(1)_Q \times U(1)_B$ is further broken to $U(1)_B$, a generalized baryon number extended to the $Q$, which can then decay with unsuppressed rates. However, whether $\mathcal{L}_{Qq}$ is allowed at the renormalizable level does not depend solely on $R_Q$, but also on the specific PQ charges. For example, independently of $R_Q$, the common assignment $X_L = -X_R = \frac{1}{2}$ would forbid PQ invariant decay operators at all orders. $U(1)_Q$ violating decays could then occur only via PQ-violating effective operators of dimension $d > 4$. Both $U(1)_{PQ}$ and $U(1)_Q$ are expected to be broken at least by Planck-scale effects, inducing PQ violating contributions to the axion potential $V^{d>4}_\Phi$ as well as an effective Lagrangian $\mathcal{L}^{d>4}_{Qq}$. In particular, in order to preserve $|\theta| < 10^{-10}$, operators in $V^{d>4}_\Phi$ must be of dimension $d \geq 11$ [13-15].
If $\mathcal{L}^{d>4}_{Qq}$ had to respect $U(1)_Q$ to a similar level of accuracy, the $Q$'s would behave as effectively stable. However, a scenario in which $U(1)_Q$ arises as an accident because of specific assignments for the charges of another global symmetry $U(1)_{PQ}$ seems theoretically untenable. A simple way out is to assume a suitable discrete (gauge) symmetry $Z_N$ ensuring that i) $U(1)_{PQ}$ arises accidentally and is of the required high quality; ii) $U(1)_Q$ is either broken at the renormalizable level, or it can be of sufficiently bad quality to allow for safely fast $Q$ decays. Table I gives a neat example of how such a mechanism can work (see also [23]). We choose $R_Q = R_{d_R} = (3, 1, -1/3)$, so that $G_{SM}$ invariance allows for $\mathcal{L}_{Qq} \neq 0$, and we assume suitable transformations of the fields under $Z_N$; the dimension of the lowest allowed $U(1)_Q$-breaking decay operators then depends on the $Z_N$ charges of the SM quarks. Table I lists different possibilities for $d \leq 4$ and $d = 5$. The last column gives the PQ charges that one has to assign to $Q_{L,R}$ so that $U(1)_{PQ}$ can be defined also in the presence of the operators in columns 2 and 3.
Cosmology. We assume a post-inflationary scenario ($U(1)_{PQ}$ broken after inflation). Then, requiring that the axion energy density from vacuum realignment does not exceed the observed DM density bounds the decay constant from above, $f_a \leq f_a^{max}(N_{DW})$ [24-26], where $N_{DW} = 2N$ is the vacuum degeneracy corresponding to a $Z_{2N} \subset U(1)_{PQ}$ left unbroken by non-perturbative QCD effects. We further assume $m_Q < T_{reheating}$, so that a thermal distribution of $Q$ provides the initial conditions for their cosmological history, which then depends only on the mass $m_Q$ and representation $R_Q$. For some $R_Q$, only fractionally charged Q-hadrons can appear after confinement, which also implies that decays into SM particles are forbidden [27]. These Q-hadrons must then exist today as stable relics. However, dedicated searches constrain the abundances of fractionally charged particles relative to ordinary nucleons to $n_Q/n_b \lesssim 10^{-20}$ [28], which is orders of magnitude below any reasonable estimate of the relic abundance and of the resulting concentrations in bulk matter. This restricts the viable $R_Q$ to the much smaller subset for which Q-hadrons are integrally charged or neutral. In this case decays into SM particles are not forbidden, but the lifetime $\tau_Q$ is severely constrained by cosmological observations. For $\tau_Q \sim (10^{-2}-10^{12})$ s, $Q$ decays would affect Big Bang Nucleosynthesis (BBN) [29,30]. The window $\tau_Q \sim (10^{6}-10^{12})$ s is strongly constrained also by limits on CMB spectral distortions from early energy release [31-33], while decays around the recombination era ($\tau_Q \gtrsim 10^{13}$ s) would leave clear traces on CMB anisotropies. Decays after recombination would produce free-streaming photons visible in the diffuse gamma ray background [34], and Fermi LAT limits [35] allow one to exclude $\tau_Q \sim (10^{13}-10^{26})$ s. For lifetimes longer than the age of the Universe, $\tau_Q \gtrsim 10^{17}$ s, the $Q$ would contribute to the present energy density, and we must require $\Omega_Q \leq \Omega_{DM} \approx 0.12\,h^{-2}$. However, estimating $\Omega_Q$ is not so simple. Before confinement the $Q$'s annihilate as free quarks. Perturbative calculations of the annihilation cross section are reliable in this regime; for $n_f$ final state quark flavors the channel coefficients are, e.g., $(c_f, c_g) = (\frac{2}{9}, \frac{220}{27})$ for triplets and $(\frac{3}{2}, \frac{27}{4})$ for octets. Free $Q$ annihilation freezes out around $T_{fo} \sim m_Q/25$, when (for $m_Q >$ few TeV) there are $g_* = 106.75$ effective degrees of freedom in thermal equilibrium. Together with Eq. (8) this gives the relic density of Eq. (9); the upper lines in Fig. 1 show $\Omega_Q h^2|_{Free}$ as a function of $m_Q$ for $SU(3)_C$ triplets (dotted) and octets (dashed). Only a small corner at low $m_Q$ satisfies $\Omega_Q \leq \Omega_{DM}$, and future improved LHC limits on $m_Q$ might exclude it completely. However, after confinement ($T_C \approx 180$ MeV), annihilation could restart because of finite size effects of the composite Q-hadrons. Some controversy exists about the possible enhancements for annihilations in this regime. For example, a cross section typical of inclusive hadronic scattering, $\sigma_{ann} \sim (m_\pi^2 v)^{-1} \sim 30\,v^{-1}$ mb, was assumed in Ref. [36], yielding $n_Q/n_b \sim 10^{-11}$. It was later remarked [37] that the relevant process is exclusive (no $Q$ quarks in the final state), with a cross section quite likely smaller by a few orders of magnitude. Ref. [38] suggested that bound states formed in the collision of two Q-hadrons could catalyse annihilations. This mechanism was reconsidered in [39,40], which argued that $\Omega_Q$ could indeed be efficiently reduced. Their results imply the estimate of Eq. (10), which corresponds to the continuous line in Fig. 1.
Ref. [41] studied this mechanism more quantitatively, and concluded that Eq. (10) represents a lower limit on $\Omega_Q$, but much larger values are also possible. Refs. [39,40] in fact did not consider the possible formation of multi-quark $QQ\ldots$ bound states which, opposite to $Q\bar{Q}$ states, would hinder annihilation rather than catalyse it. Then, if a sizeable fraction of $Q$'s gets bound in such states, the free quark result of Eq. (9) would give a better estimate than Eq. (10). If instead the estimate of Eq. (10) is correct, energy density considerations would not exclude relics with $m_Q \lesssim 5.4 \cdot 10^{3}$ TeV; nevertheless, present concentrations of Q-hadrons would still be rather large, $10^{-8} \lesssim n_Q/n_b \lesssim 10^{-6}$. While it has been debated whether concentrations of the same order should be expected also in the Galactic disk [42,43], searches for anomalously heavy isotopes in terrestrial, lunar, and meteoritic materials yield limits on $n_Q/n_b$ many orders of magnitude below the quoted numbers [44]. Moreover, even a tiny amount of heavy $Q$'s in the interior of celestial bodies (stars, neutron stars, Earth) would produce all sorts of effects like instabilities [45], collapses [46], and anomalously large heat flows [47]. Therefore, unless an extremely efficient mechanism exists that keeps Q-matter completely separated from ordinary matter, stable Q-hadrons would be ruled out.
Selection criteria. The first criterion to discriminate hadronic axion models is: i) models that allow for lifetimes $\tau_Q \lesssim 10^{-2}$ s are phenomenologically preferred with respect to models containing long-lived or cosmologically stable $Q$'s. All $R_Q$ allowing for decays via renormalizable operators satisfy this requirement. Decays can also occur via operators of higher dimension. We assume that the cutoff scale is $m_P$ and write $\mathcal{O}^{d>4}_{Qq} = m_P^{4-d}\, P_d(Q, \varphi^n)$, where $P_d$ is a $d$-dimensional Lorentz and gauge invariant monomial linear in $Q$ and containing $n$ SM fields $\varphi$. For $d = 5, 6, 7$ the final states always contain $n \geq d-3$ particles. Taking conservatively $n = d-3$, integrating analytically the $n$-body phase space (with $g_f$ the final state degrees of freedom), neglecting $\varphi$ masses and taking momentum independent matrix elements (see e.g. [48]), we obtain for $d = 5, 6, 7$: $\tau^{(d)}_Q \gtrsim \{4\cdot 10^{-20},\, 7\cdot 10^{-3},\, 4\cdot 10^{15}\} \times (f_a^{max}/m_Q)^{2d-7}$ s. For $d = 5$, as long as $m_Q \gtrsim 800$ TeV, decays occur with safe lifetimes $\tau^{(5)}_Q \lesssim 10^{-2}$ s. For $d = 6$, even for the largest values $m_Q \sim f_a^{max}$, decays occur dangerously close to BBN [49]. Operators of $d = 7$ and higher are always excluded. This selects the $R_Q$ which allow for $\mathcal{L}_{Qq} \neq 0$ (the first seven in Table II), plus thirteen others which allow for $d = 5$ decay operators. Some of these representations are, however, rather large, and can induce LP in the SM gauge couplings $g_1, g_2, g_3$ at some uncomfortably low energy scale $\Lambda_{LP} < m_P$. Gravitational corrections to the running of gauge couplings become relevant at scales approaching $m_P$, and can delay the emergence of LP [50]. We then specify our second criterion by choosing a value of $\Lambda_{LP}$ for which these corrections can presumably be neglected: ii) $R_Q$'s which do not induce LP in $g_1, g_2, g_3$ below $\Lambda_{LP} \sim 10^{18}$ GeV are phenomenologically preferred. We use two-loop beta functions to evolve the couplings [48] and set (conservatively) the threshold for $R_Q$ at $m_Q = 5\cdot 10^{11}$ GeV. The $R_Q$ surviving this last selection are listed in Table II. Other features can render some $R_Q$ more appealing than others. For example, problems with cosmological domain walls [51] are avoided for $N_{DW} = 1$, while specific $R_Q$ can improve gauge coupling unification [52]. We prefer not to consider these as crucial discriminating criteria, since solutions to the DW problem exist (see e.g. [23,53]), while improved unification might be accidental because of the many $R_Q$ we consider. Nevertheless, we have studied both these issues. The values of $N_{DW}$ are included in Table II while, as already noted in [52], gauge coupling unification gets considerably improved only for $R_3$.
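As a quick numerical check of these scalings, the snippet below evaluates the quoted lower bounds on $\tau^{(d)}_Q$. It is a minimal sketch: the prefactors are taken from the text, while the value $f_a^{max} = 5\cdot 10^{11}$ GeV is an assumption (chosen because it reproduces the quoted $d = 5$ threshold of $m_Q \sim 800$ TeV), not a number given in this excerpt.

```python
# Lower bounds on the heavy-quark lifetime from Planck-suppressed operators:
# tau_Q^(d) >~ C_d * (f_a_max / m_Q)^(2d - 7) seconds, d = 5, 6, 7.
C = {5: 4e-20, 6: 7e-3, 7: 4e15}  # prefactors in seconds, from the text
F_A_MAX = 5e11                    # GeV; illustrative assumption (see lead-in)

def tau_lower_bound(d, m_Q_GeV):
    """Lower bound on tau_Q [s] for operator dimension d and mass m_Q [GeV]."""
    return C[d] * (F_A_MAX / m_Q_GeV) ** (2 * d - 7)

# Decays are cosmologically safe when the bound allows tau_Q <~ 1e-2 s.
for m_Q in (8e5, 1e8, 5e11):  # 800 TeV, an intermediate mass, f_a_max
    bounds = {d: f"{tau_lower_bound(d, m_Q):.1e} s" for d in (5, 6, 7)}
    print(f"m_Q = {m_Q:.1e} GeV -> {bounds}")
```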
Axion coupling to photons. The most promising way to unveil the axion is via its interaction with photons, $g_{a\gamma\gamma}\, a\, \vec{E}\cdot\vec{B}$, where [16]
$$g_{a\gamma\gamma} = \frac{\alpha}{2\pi f_a}\left(\frac{E}{N} - 1.92(4)\right), \quad (12)$$
with $N, E$ the anomaly coefficients in Eqs. (2)-(3) (the uncertainty comes from the NLO chiral Lagrangian [54]). The last column in Table II gives $E/N$ for the selected $R_Q$'s. We have sketched in Fig. 2 the "density" of preferred hadronic axion models, drawing with oblique lines (only at small $m_a$) the corresponding couplings. The strongest coupling is obtained for $R^s_Q = R_8$ and the weakest for $R^w_Q = R_3$. They delimit a window $0.25 \leq |E/N - 1.92| \leq 12.75$ encompassing all axion models in Table II. The corresponding couplings $g_{a\gamma\gamma}$ fall within the band delimited in Fig. 2 by the two dashed lines; with respect to the commonly adopted band, the upper (lower) limit is shifted upwards by approximately a factor of 2 (3.5). It is natural to ask if $g_{a\gamma\gamma}$ could get enhanced by allowing for more $R_Q$'s ($N_Q > 1$). Let us consider the combined anomaly factor for $R^s_Q \oplus R_Q$,
$$\frac{E_c}{N_c} = \frac{E_s + E}{N_s + N} = \frac{E_s}{N_s}\left(\frac{1 + E/E_s}{1 + N/N_s}\right). \quad (13)$$
Since by construction the anomaly coefficients of all $R_Q$'s in our set satisfy $E/N \leq E_s/N_s$, the factor in parenthesis is $\leq 1$, implying $E_c/N_c \leq E_s/N_s$. This result is easily generalized to $N_Q > 2$. Therefore, as long as the sign of $\Delta X = X_L - X_R$ is the same for all $R_Q$'s, no enhancement is possible. However, if we allow for $R_Q$'s with PQ charge differences of opposite sign (we use the symbol $\ominus$ to denote reducible representations of this type), $E/E_s$ and $N/N_s$ in Eq. (13) become negative and $g_{a\gamma\gamma}$ can get enhanced. For $N_Q = 2$ the largest value is $E_c/N_c = 122/3$, obtained for $R^s_Q \ominus R^w_Q$. For $N_Q > 2$ even larger couplings can be obtained. However, contributions to the $\beta$-functions also become large and can induce LP. This implies that there is a maximum value $g^{max}_{a\gamma\gamma}$ for which our second condition remains satisfied. We find that $R^s_Q \oplus R_6 \ominus R_9$, giving $E_c/N_c = 170/3$, yields the largest possible coupling. The uppermost oblique line in Fig. 2 depicts the corresponding $g^{max}_{a\gamma\gamma}$. More $R_Q$'s can also suppress $g_{a\gamma\gamma}$ and even produce a complete decoupling. This requires an ad hoc choice of $R_Q$'s, but no numerical fine tuning. With two $R_Q$'s there are three cases yielding $g_{a\gamma\gamma} = 0$ within theoretical errors [27] (e.g. $R_6 \ominus R_9$, giving $E_c/N_c = 23/12 \approx 1.92$). This provides additional motivation for search techniques which do not rely on the axion coupling to photons [55,56]. Finally, since $T(8) = 3$ and $T(6) = 5/2$, by combining with opposite PQ charge differences $R_{12}$ with $R_9$ or $R_{10}$, new models with $N_{DW} = 1$ can be constructed. We have classified hadronic axion models using well-defined phenomenological criteria. The window of preferred models is shown in Fig. 2.
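As a rough numerical illustration, the sketch below evaluates the reconstructed Eq. (12) at the edges of the quoted window. It is a minimal sketch, not from the paper: the QCD-axion mass-coupling relation $m_a \approx 5.7\,\mu$eV $\times (10^{12}\,$GeV$/f_a)$ is the commonly quoted value and is assumed here, and the ratios $E/N = 44/3$ and $5/3$ are inferred from $0.25 \leq |E/N - 1.92| \leq 12.75$, not stated explicitly in this excerpt.

```python
import math

ALPHA = 1 / 137.036   # fine-structure constant
M_REF = 5.7e-6        # eV; axion mass at f_a = 1e12 GeV (assumed standard relation)

def f_a(m_a_eV):
    """Decay constant f_a [GeV] from the axion mass, via m_a * f_a ~ const."""
    return 1e12 * M_REF / m_a_eV

def g_agamma(m_a_eV, e_over_n):
    """|g_agg| [GeV^-1] from the reconstructed Eq. (12)."""
    return abs(ALPHA / (2 * math.pi * f_a(m_a_eV)) * (e_over_n - 1.92))

# Window edges inferred from 0.25 <= |E/N - 1.92| <= 12.75:
for label, en in [("R8 (strongest), E/N = 44/3", 44 / 3),
                  ("R3 (weakest),  E/N = 5/3", 5 / 3)]:
    print(f"{label}: g = {g_agamma(1e-3, en):.2e} GeV^-1 at m_a = 1 meV")
```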
A global panel database of pandemic policies (Oxford COVID-19 Government Response Tracker)
COVID-19 has prompted unprecedented government action around the world. We introduce the Oxford COVID-19 Government Response Tracker (OxCGRT), a dataset that addresses the need for continuously updated, readily usable and comparable information on policy measures. From 1 January 2020, the data capture government policies related to closure and containment, health and economic policy for more than 180 countries, plus several countries' subnational jurisdictions. Policy responses are recorded on ordinal or continuous scales for 19 policy areas, capturing variation in degree of response. We present two motivating applications of the data, highlighting patterns in the timing of policy adoption and subsequent policy easing and reimposition, and illustrating how the data can be combined with behavioural and epidemiological indicators. This database enables researchers and policymakers to explore the empirical effects of policy responses on the spread of COVID-19 cases and deaths, as well as on economic and social welfare. The Oxford COVID-19 Government Response Tracker (OxCGRT) records data on 19 different government COVID-19 policy indicators for over 190 countries. Covering closure and containment, health and economic measures, it creates an evidence base for effective responses.
Coding procedures ensure coding consistency, and every data point is reviewed by a second coder.
OxCGRT's design emphasizes comparability, legibility and transparency. The data are published in multiple time-series formats for ease of use by non-experts and researchers alike, with legacy data available for continuity as we add new indicators. Several features underpin our approach. First, observations for most indicators are reported on monotonic ordinal scales, with others coded on continuous scales, allowing for quantitative analysis of the degree of government response. Second, the indicators are aggregated in different combinations into four composite indices (Table 2) that provide a snapshot of the number and degree of policies in place in a given area. Third, geographic scope is recorded for appropriate indicators. Fourth, source notes and archived links to original sources are included to support detailed interpretation of specific policies.
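To make the aggregation concrete, here is a minimal sketch of the sub-index formula used to build the composite indices (described in full in the paper's Methods, which are not reproduced in this excerpt): each indicator's ordinal value is rescaled to 0-100, adjusted by a geographic-scope flag where one exists, and the index is the average across indicators. The indicator values below are hypothetical.

```python
# OxCGRT-style index aggregation (values illustrative). Each indicator j has
# an ordinal value v in 0..N; flagged indicators also record whether the
# policy is general (flag=1) or targeted (flag=0).
# Sub-index: 100 * (v - 0.5*(has_flag - flag)) / N ; index = mean of sub-indices.

def sub_index(v, N, has_flag, flag=0):
    if v == 0:                  # no policy at all -> sub-index 0
        return 0.0
    return 100.0 * (v - 0.5 * (has_flag - flag)) / N

indicators = [
    # (v, N, has_flag, flag) -- one hypothetical country-day observation
    (3, 3, 1, 1),   # e.g. closures required at all levels, applied generally
    (2, 3, 1, 0),   # e.g. closures required for some sectors, targeted
    (1, 2, 0, 0),   # e.g. an indicator without a geographic flag
]

index = sum(sub_index(*ind) for ind in indicators) / len(indicators)
print(f"composite index: {index:.1f}")  # 0-100 scale
```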
OxCGRT has been used widely (see examples below) during the pandemic, revealing that the global coverage, granularity of policy detail and systematic structure of the data have been able to inform diverse literatures 1. For instance, the data have been used by health policy experts and data scientists to calculate the levels of healthcare resources that are associated with different levels of transmission 2, to estimate the impact of combinations of physical distancing measures on disease incidence 3,4 and on the time-varying reproduction number (R_t) 5. Environmental scientists have drawn on the data to examine whether COVID-19 response policies affect air pollution levels 6,7. Political scientists have considered whether policies vary by regime type 8,9, and assessed whether upcoming elections reduce the strength of responses 10. Economists have used the data to explore how working from home has shifted countries' sectoral structures 11, to link stay-at-home policies to increasing food prices 12, and to identify knock-on effects of large countries' response policies on the gross domestic product growth of smaller trade partners 13.
Many of those using the data have benefited from the specific features listed above. The ordinal indicator scales permit separate assessment of policy recommendations, as well as permissive and strict regulations 14 . While substantial attention has focused on the closure and containment measures captured in the stringency index, some studies have used all four 15 , or selected from among the containment and health index (CHI), the more holistic government response index (GRI) 16 and the economic support index (ESI) 17 , which provides an overall measure of financial assistance to households. Moreover, the coding of policies' geographic scope has enabled analysis of strictly national policies 18 , and comparison between national and localized approaches. These examples illustrate the value of OxCGRT data and related datasets 19 in helping researchers-in addition to decision-makers and publics-to make sense of the effects of governments' responses to COVID-19 across different populations and contexts, as well as what leads governments to adopt different policies.
In the following sections, we describe patterns of global COVID-19 government responses with the OxCGRT data in order to demonstrate what kinds of questions the data can help researchers tackle. We describe cross-national patterns in the timing of containment and health policies, followed by a more detailed presentation of policy sequencing. We then combine the data with mobile phone mobility data to relate policies to human behaviour 20 and review the potential for bringing together OxCGRT data with additional data sources in the Discussion. In the Methods, we describe the individual indicators in more detail, along with the data collection process, data coverage and how we calculate the indices. We also briefly compare OxCGRT with related projects to highlight their complementarities.
Results
To motivate applications of the data, we present general trends and patterns in government responses in the first months of the pandemic. We focus here on cross-national patterns, although OxCGRT contains more granular data on subnational jurisdictions as well. First, we document a surprising degree of commonality across countries in the early months of the pandemic followed by growing divergence. We also note patterns in policy reimposition and geographical scope-topics that have, to date, been relatively underexplored in the literature, yet they have important implications for how countries manage each wave of the pandemic. Second, we consider associations between the OxCGRT indices and a key outcome of interest, individual mobility, to illustrate the potential for the data to be combined with other indicators to investigate economic, social and epidemiological questions of interest.
What government responses do we observe? The data reveal a striking degree of commonality in government responses to COVID-19 in the first months of the pandemic. We group the 19 indicators into themes of closure and containment, health and economic support (Table 1), with indices normalized to vary from 0 to 100 (for a full description, see Table 2). The CHI measures the number and intensity of closure and containment policies (for example, school closings and stay-at-home measures) and policies towards disease surveillance (for example, testing and contact tracing). Only a handful of countries had adopted strong containment (often referred to as lockdown) and health policies in early March, as Fig. 1 shows, yet within 1 month the world had changed and intensive policy responses had become a global phenomenon. In subsequent months, however, countries lifted policy restrictions and then, in some cases, reimposed policies in a policy see-saw as the epidemic waxed and waned. During the initial, global rise in policy responses, the data reveal a number of intriguing patterns. Most governments moved to a high level of response within a 2-week period around the middle of March, showing remarkable clustering. Figure 2 displays this initial policy convergence across 183 countries, which is not observed in later policy decisions (such as rolling back measures). This initial clustering pattern in mid-March contrasts with what would be expected if countries reacted according to the local epidemiological progression of the pandemic. For instance, in most countries, the sudden ramping up of response policies happened before they had experienced their tenth COVID-19-related death, while many other countries' responses preceded even their tenth recorded case. Countries may have observed their neighbours or the global response and reacted in concert. This clustering then seems to dissipate in later months as countries' responses diverge. This pattern has important implications for the coordination of responses to global infectious diseases, considering that the World Health Organization's policy guidance to governments is tailored to the local progression of an infectious disease rather than to potential herd behaviour.
Next, we examine specific policies, both during the initial process of policy adoption and in the months that followed as measures were either rolled back or maintained. The left panel of Fig. 3 captures the ramping up of policies, showing the proportion of countries adopting a particular policy, with day zero representing the first day of the COVID-19 policy response in each country.
The very limited crossing of the lines in this figure suggests that policies adopted by the median country (in terms of the speed of policy responses) occurred in approximately the same order as those adopted by countries in the first and third quartiles. In other words, the sequence of policy adoption is largely similar across countries. Specifically, there is more than a 50% chance that a randomly drawn country will have introduced public information campaigns, international travel controls and testing policies within 20 d of the first government response of any policy type; there is a 40% chance of this within 10 d and a >90% chance within 2 months. Economic support policies have tended to be established later than closure or containment and health policies, facial coverings aside.
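A sketch of the day-zero alignment behind this kind of figure is shown below, assuming a long-format OxCGRT CSV; the file path and the subset of indicator columns are illustrative, not prescriptive.

```python
import pandas as pd

# Align each country's policy series to "day zero": the first day on which
# any of the tracked indicators is non-zero for that country.
df = pd.read_csv("OxCGRT_latest.csv", parse_dates=["Date"])  # hypothetical file

# Illustrative subset of indicator columns (names vary by data release):
policy_cols = ["C1_School closing", "C2_Workplace closing",
               "C6_Stay at home requirements"]

day_zero = (
    df[df[policy_cols].gt(0).any(axis=1)]
    .groupby("CountryName")["Date"].min()
    .rename("day_zero")
)
df = df.join(day_zero, on="CountryName")
df["days_since_first_policy"] = (df["Date"] - df["day_zero"]).dt.days

# Share of countries with school closures in place, by days since day zero
share = (
    df[df["days_since_first_policy"].between(0, 60)]
    .assign(active=lambda d: d["C1_School closing"] > 0)
    .groupby("days_since_first_policy")["active"].mean()
)
print(share.head())
```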
A common pattern also characterizes policy reversal. The right panel of Fig. 3 indicates the proportion of countries maintaining the highest point they reach on the ordinal scale for each policy area. The global rate of policy reduction is indicated by the slope of the lines. There is crossing over among policies, but little among those lines representing closure and containment policies, which have roughly similar rates of rollback. During the initial 2 months of policy easing, while closure and containment policies were loosened, economic support policies and health policies were maintained at countries' individual maximum strengths.
While we see similarities in the policies that were adopted and relaxed, as well as when this occurred, there is interesting variation across policies' strength and geographic coverage and the extent to which they were later reimposed. Figure 4 shows how frequently countries imposed the strongest possible policies, what portion of observed policies applied nationally versus subnationally and which policies were reduced and subsequently reimposed. Most closure policies were adopted nationally at some point, and in approximately 20% of countries, stronger closure and containment policies were reimposed by the end of December 2020. In the case of workplace policies, approximately 80% of countries had reduced their restrictions by that point in the year, but 40% of all countries later reversed course. The clustering of policy responses during the process of adoption has a critical implication for researchers. Analysis of individual policies is difficult because there is limited variation across and within countries, resulting in collinear relationships. This has meant that most analyses of government responses to date have had to focus on aggregate indices. However, in later periods, we document substantially more variation. This variation enables more credible quasi-experimental analysis of individual policies, such as school reopening, testing campaigns and income support. Understanding the role of individual policies, in addition to aggregate government response levels, is of central importance for further research and policy action, which this database can enable.
Motivating applications of the data: how do government responses relate to behaviour? A key application of the OxCGRT data is to understand how policies relate to human behaviour. A number of studies have used OxCGRT and similar data to try to estimate the effect of policies on behaviour and the spread of the disease. Here, we do not aim to establish new estimates of causal effects, but rather seek only to demonstrate potential use cases to motivate further research. Figure 5 summarizes the results of linear panel regression models, comparing the strength of associations between the CHI, GRI and stringency index with changes in citizen mobility over time, as recorded by mobile phone applications (the full results are presented in Supplementary Tables 1 and 2). These models use standard techniques in the literature. We include country and date fixed effects in order to isolate within-country associations over time, accounting for seasonal and other calendar effects. In the Supplementary Information, we include models that also control for new daily deaths, to identify the association between policies and mobility unconfounded by the relationship between the severity of the epidemic and mobility.
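A minimal sketch of this two-way fixed-effects specification is shown below; the merged panel file and column names are hypothetical, and clustering standard errors by country is one common choice rather than necessarily the paper's exact estimator.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged OxCGRT + mobility panel in long format: one row per
# country-day with an index value ('gri') and a mobility outcome.
panel = pd.read_csv("oxcgrt_mobility_panel.csv")

# mobility ~ index, with country and date fixed effects (within-country,
# calendar-adjusted associations, as described in the text).
model = smf.ols(
    "residential_pct_change ~ gri + C(country) + C(date)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["country"]})

print(model.params["gri"], model.bse["gri"])
```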
The coefficients and confidence intervals in Fig. 5 show strong associations between OxCGRT indices and measures of behaviour. These associations are stronger the greater the number of policy indicators that are included in an index. All three indices shown contain eight indicators of closure and containment policies plus at least one additional indicator. Increases in the GRI-our broadest index of government responses-are most strongly associated with increases in the percentage of time spent in residences, as well as with decreases in the frequency of visits to groceries and pharmacies, workplaces, transit stations, places for retail and recreation, and parks. The CHI, which adds all of our health policy indicators to the closure and containment indicators, shows only slightly weaker associations. The stringency index, which brings together the closure and containment indicators with just one additional indicator (that is, public information campaigns), shows still less pronounced relationships in the same directions.
[Fig. 4 caption] The graph shows the proportion of countries that, for various indicators, adopted any policies, adopted policies nationally (as opposed to geographically targeted policies) and adopted policies that correspond to the highest point on the ordinal scale. It also shows the percentage of countries that at some point reduced and reimposed such policies. A policy reduction is defined by a reduction in policy level sustained for at least 5 d. A policy reduction reversal is defined by an increase in policy level after a policy reduction that is sustained for at least 5 d. Adoption rates of some policies at any level (for example, facial coverings, income support and debt relief) are much higher than those depicted in the left panel of Fig. 3 because countries often continue to adopt these policies after the first policy reduction. The sample comprises 66,978 observations from 183 countries between 1 January 2020 and 31 December 2020.

These analyses highlight the potential of OxCGRT data, combined with other datasets, to capture important changes in behaviour in response to historic government action. While the associations presented here are merely suggestive, researchers are already using the data for more in-depth analyses 2,3,5. Identifying causal effects of government policies is not straightforward due to many confounding factors and potential sources of endogeneity. Given these challenges, the rich nature of this database, with day-by-day policy changes across a global distribution of countries and subnational jurisdictions, enables rigorous quasi-experimental analysis. This illustrative application is designed to motivate further in-depth research and to demonstrate the potential for policymakers and researchers to answer important public policy and epidemiological questions using OxCGRT data.
Discussion
Alongside epidemiological and behavioural data, measures of government response help researchers and decision-makers to explore how best to address COVID-19. However, measuring government policies in a consistent and comparable way across jurisdictions and across time raises a number of methodological considerations, which can present difficult choices. In this section, we review these trade-offs and outline strategies for addressing them. First, while the OxCGRT ordinal scales distinguish, for example, a ban on gatherings of over ten people from a ban on gatherings of over 100 people, the limitation of an ordinal approach is that it groups heterogeneous observations into pre-established categories. For instance, both the United Kingdom and France had broadly similar stay-at-home orders during spring 2020, and both were categorized as the second-highest ordinal point on that indicator. However, French residents had to submit a form to authorities to leave their house, while UK residents did not. To mitigate the inevitable simplification that comes with codification, OxCGRT includes detailed notes and archived links to source materials for all observations in the dataset, helping researchers to draw on OxCGRT data in a more detailed way should it be required.
Second, the coding scheme loses granularity when applied to large jurisdictions with many heterogeneous subunits. As described in the Methods, OxCGRT data include three types of observation: those that describe all policies that apply to a given jurisdiction; those that describe policies put in place by a given level and lower levels of government; and those that describe only those instigated at a given level of government. Policies that apply only to a subunit of the given jurisdiction (for example, a single state of a country being coded) are flagged as targeted, while policies that apply to the whole jurisdiction are flagged as general. When both general and targeted policies exist simultaneously, OxCGRT always records the stricter policy. This choice may make the data more useful for evaluating the effect of policies on the spread of disease (since it records the stronger targeted measures that probably exist where there is a local outbreak) while reducing their ability to describe the overall state of policy across the country. For example, if a jurisdiction with many subunits has weak general policies and strong policies targeted at a single subunit, its overall coding will be high. In cases where this is frequently an issue, such as Brazil and the United States, OxCGRT has also comprehensively coded subunit jurisdictions (see Supplementary Fig. 2). We encourage users to consider this granularity issue carefully when making cross-national comparisons, and to consider using subnational information for large, heterogeneous jurisdictions where available.
Third, OxCGRT records policy interventions as a time series (the unit of observation is a jurisdiction day), recording the intensity and scope of policy in place for a given indicator at that place and time. An alternative approach that has been pursued by comparable data projects is to record the start and end dates of individual policies (see the more detailed comparison in the Methods). While both options have merit, the time-series structure allows researchers to more easily match policy indicators to other time-series data, such as case or death rates, mobile phone mobility data and panel surveys. It also helps OxCGRT data collectors to capture government responses that do not take the form of discrete, formal policy interventions, but more ad hoc announcements, such as temporary limitations to internal movement during public holidays or religious events.
Fig. 5 | Associations between different combinations of government responses and aggregate population behaviour. This graph plots coefficients with 95% confidence intervals of the GRI, CHI and stringency index, used as independent variables in separate panel regression models predicting changes in Google mobility data. The models use standard errors clustered by country and include country and date fixed effects (Supplementary Fig. 1 plots the coefficients of the same indices in models that control for daily deaths; Supplementary Tables 1 and 2 report the full results). Note that change in the duration of time spent at home, as a proportion of the day, shows less variation than the other dependent variables, which capture change in the frequency of visits to different categories of location. These results were calculated for 15 February to 9 October 2020.
Fourth, the data are published continuously. Data collection occurs weekly, which enables OxCGRT to provide up-to-date information on government responses. Given that severe acute respiratory syndrome coronavirus 2 can spread extremely quickly, this speed is essential for effective use of the data. However, our volunteers are not necessarily able to update every jurisdiction in every cycle; consequently, some data can be up to 7 d out of date. To minimize recent gaps, we therefore publish our data in real time so that they can be utilized as soon as they have been contributed. The trade-off to this speed is that the most recent data are published in advance of their final validation check (see Methods for further details) and may therefore be corrected in the review process or through external feedback (although, in practice, large revisions are rare; see Methods).
Fifth, OxCGRT relies on human judgement and contextual expertise, rather than automated data collection or coding, to provide the best possible degree of accuracy and consistency. The ordinal scales require individual contributors to carefully interpret various policies within each domain, in order to assign a code that best fits each indicator. For example, many countries have taken similar action to close workplaces, yet the types of workplaces that are required to close often differ from country to country. This means that each data contributor needs to assess the policy announcements in a country alongside detailed guidance material and apply judgement. Volunteers go through a training process to instil a high level of consistency and attend weekly meetings to discuss coding queries and standardize interpretations. Many of our contributors are specialists in the countries that they code, and understand the country's culture, language and legal system in such a way that allows them to code with context and have access to local information to verify policies. While this shoe-leather-science approach is very human resource intensive, we have not found it possible to achieve comparable results with purely automated methods. Going forward, it may be fruitful to explore how different technological approaches can be combined with human coders.
Finally, and critically, the OxCGRT dataset records only the number and degree of government policies. It does not have a way to measure how well policies are implemented or enforced, nor does it measure the degree of compliance with official policies. OxCGRT data should therefore be considered one among several key elements in the broader puzzle of understanding governments' policy adoption and the links between government interventions, human behaviour and the spread of COVID-19.
Methods
This section describes OxCGRT's design and structure, as well as the processes through which data are collected and confirmed. Because the project is continually evolving, adding further indicators and jurisdictions over time, users should always check the project website for the most current information. All OxCGRT data are available on GitHub and via an application programming interface (API), and are licensed under the Creative Commons Attribution CC BY standard.
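For orientation, a minimal sketch of loading the published CSV with pandas follows. The raw-file path and the column conventions (dates stored as YYYYMMDD, fields such as CountryCode and StringencyIndex) reflect the repository layout at the time of writing and should be treated as assumptions; the project documentation is authoritative.

```python
# Sketch: load the OxCGRT panel directly from GitHub with pandas. The raw-file
# path and the YYYYMMDD date encoding are assumptions based on the repository
# layout at the time of writing; check the project docs before relying on them.
import pandas as pd

URL = ("https://raw.githubusercontent.com/OxCGRT/covid-policy-tracker/"
       "master/data/OxCGRT_latest.csv")

df = pd.read_csv(URL, dtype={"RegionCode": "string"}, low_memory=False)
df["Date"] = pd.to_datetime(df["Date"], format="%Y%m%d")  # dates stored as YYYYMMDD

# One row per jurisdiction-day, with composite indices alongside the
# individual C, E and H policy indicators.
print(df[["CountryCode", "Date", "StringencyIndex"]].tail())
```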
We hope that the methods outlined below will be of use not only to data users (the primary audience) but also to researchers who may be contemplating developing complementary measures or data collection projects for response to COVID-19 or other issues. In line with OxCGRT's open-source ethos, we invite the scientific community to use and build on not just the data we collect but the methods and system described below.
Indicators.
OxCGRT reports publicly available information on 19 indicators (see Table 1) of government response, as well as recording miscellaneous policies. The indicators capture all government measures related to a specific domain, including formally adopted laws, policies promulgated by executive or regulatory authorities, and softer guidance or advice. OxCGRT has added new indicators and refined old indicators as the pandemic has evolved. Future iterations may include further indicators or more nuanced versions of existing indicators. The indicators are of three types: ordinal, numerical and text.
• Ordinal indicators measure policies on a simple scale of severity or intensity, allowing us to describe the degree or strength of government response in each category. For these indicators, the rank order of the different levels is meaningful, but we make no claims regarding the scale of the intervals. Instead, each level has a specific meaning, which allows the different values to also be used as categorical variables. These indicators are reported for each day a policy is in place (not the day it is announced). Many have a further flag to note whether they are targeted (applying only to a subregion of a jurisdiction or to a specific sector) or general (applying throughout that jurisdiction or across the economy). For the newly added H7 (vaccination policy), the flag indicates whether the vaccine is being funded by the government or at a cost to individuals.
• Numerical indicators measure a specific number, reporting fiscal values in US dollars. These indicators are only reported once, on the day they are announced.
• Text is a free-response indicator that records other information of interest.
All observations also have a notes cell that reports sources and comments to justify and substantiate the designation.
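To make the three indicator types concrete, here is a hypothetical record layout; the field names are illustrative only and do not reproduce the project's actual schema.

```python
# Hypothetical record layout for one OxCGRT observation, to make the three
# indicator types concrete. Field names are illustrative, not the real schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    jurisdiction: str             # e.g. a country or state code
    date: str                     # day the policy is in force (not announced)
    indicator: str                # e.g. "C1", "E3", "H7"
    ordinal_value: Optional[int] = None   # ordinal indicators: severity level
    flag: Optional[int] = None            # 1 = general, 0 = targeted (funding for H7)
    usd_value: Optional[float] = None     # numerical indicators: fiscal value, US$
    free_text: Optional[str] = None       # text indicator: other info of interest
    notes: str = ""               # sources and comments justifying the coding
```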
Data collection and reliability. The initial set of data collectors in March 2020 were recruited largely from the postgraduate student body of the Blavatnik School of Government at the University of Oxford. Since then, additional contributors have been recruited through Oxford University departmental mailing lists, student societies and alumni email lists, as well as referrals from existing contributors. Subnational coders are mostly students or recent graduates from partner institutions in the countries where we are collecting subnational data (for example, the University of São Paulo, Fundação Getulio Vargas and the State University of Pará in the case of Brazil). To date, approximately 400 data collectors have contributed to OxCGRT, and are listed on the project website.
New members of the data collection team undergo a series of training steps. First, they complete a self-directed tutorial of training slides and videos that explain how to search for data, interpret policies and submit contributions through the online interface. New contributors are then given a short test for comprehension and understanding of the coding schema and collection process. After that, new data collectors are expected to attend a weekly all-contributor meeting, at which point they will start being included in the regular task allocation.
OxCGRT collects national data on a weekly schedule, during which new task allocations are sent to the data collection team. This allocation is based on a regular review of database coverage, prioritizing those countries that have not been updated within the past week. Most contributors are assigned to a list of four to six jurisdictions and will cycle through that list each allocation round, building up expertise in a small set of jurisdictions. The data are published in real time as contributors enter them into the system.
After data are entered, they are marked provisional, which flags them for the review process. First, after each allocation round, a small team will perform quick spot checks to ensure that the data have been entered and there are no gross errors (for example, accidental deletion of a whole column can be noticed and fixed during this quick review). The provisional data are then queued for attention by a more thorough review team. This review team will examine the data entry and the original source and either confirm its veracity or flag the data entry for escalation. The review process suggests a high degree of accuracy in the initial data collection. As of 31 December 2020, 84.79% of all data points have never been changed, and since 1 June 2020, 87.45% of data points have not required revision. Note that these revisions include both post-hoc alterations to the coding scheme and factual errors. Meanwhile, just 0.41% of observations have been escalated by reviewers for adjudication (0.25% since 1 June 2020). Of the 1.2 million data points captured between 1 June 2020 and 31 December 2020, 319,840 were reviewed or changed; of these, 51% were confirmed without edits.
Data are collected from publicly available sources, such as government press releases and briefings, international organization reports and trusted news articles. OxCGRT records the original source material using archived links so that coding can be checked and substantiated.
Coding different levels of government response.
OxCGRT includes data at the country level for nearly all countries in the world. It also includes subnational-level data for selected countries-currently, Brazil (all states, the Federal District, state capitals and the next largest cities that are not geographically connected to the state capitals), the United States (all states plus Washington DC and a number of territories), the United Kingdom (the four devolved nations) and Canada (all provinces and territories).
OxCGRT data are typically used in three ways: (1) primarily, to describe all government responses relevant to a given jurisdiction; (2) less commonly, to describe policies put in place by a given level and lower levels of government; and (3) to compare government responses across different levels of government.
To distinguish between these uses, different published versions of OxCGRT data are tagged in the database. The TOTAL label implies that all government responses relevant to the people in a given jurisdiction are included in the coding, regardless of whether those policies are set by national or subnational governments (these may also be presented without any jurisdiction label in some of our data products). The jurisdiction label WIDE refers to policies put in place by a given level and lower levels of government. WIDE observations therefore do not incorporate general policies from higher levels of government that may supersede local policies. For example, if a country has an international travel restriction that applies country wide, this would not be registered in a STATE_WIDE record. The jurisdiction label GOV indicates that observations include only policies instigated by a particular level of government; higher-or lower-level jurisdictions do not inform this coding.
In the main OxCGRT dataset, we show the total set of policies that apply to a given jurisdiction (the TOTAL policies described above). Specifically, in the main dataset, this means that we replace subnational-level responses with relevant national government (NAT_GOV) indicators when the following two conditions are met:
• The corresponding NAT_GOV indicator is general, not targeted, and is therefore applied across the whole country
• The corresponding NAT_GOV indicator is equal to or greater than the STATE_WIDE or STATE_GOV indicator on the ordinal scale for that indicator
In this way, national and subnational measures in the core dataset are comparable, in that they show the totality of policies in effect within a given jurisdiction. A sketch of this replacement logic appears below.
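The following sketch applies that replacement rule to invented toy data; the frame layout and column names (nat_value, nat_targeted, state_value) are illustrative and do not match the published schema.

```python
# Sketch of the TOTAL-level replacement rule on invented toy data: a
# subnational value is overridden by the national (NAT_GOV) value when the
# national policy is general (not targeted) AND at least as strict.
import pandas as pd

df = pd.DataFrame({
    "state":        ["A", "B", "C"],
    "state_value":  [1, 3, 2],                 # STATE_WIDE ordinal value
    "nat_value":    [2, 2, 2],                 # NAT_GOV ordinal value
    "nat_targeted": [False, False, True],      # True if the national policy is targeted
})

use_national = (~df["nat_targeted"]) & (df["nat_value"] >= df["state_value"])
df["total_value"] = df["state_value"].where(~use_national, df["nat_value"])
print(df)
# State A: national 2 overrides state 1; state B keeps its stricter 3;
# state C keeps 2 because the national policy is only targeted.
```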
Note that STATE_WIDE observations at the subnational level also capture policies that the national government may specifically target at a subnational jurisdiction. This is the case, for example, if a national government orders events to close in a particular city experiencing an outbreak. These kinds of policies are not inferred from NAT_GOV.
On our GitHub repositories, these different types of data are available in three groups, as summarized in Extended Data Fig. 1. For large, heterogeneous jurisdictions, users may wish to use a weighted average of subnational jurisdiction observations (for example, STATE_WIDE) instead of national observations (NAT_TOTAL). See Supplementary Fig. 2 for a comparison.
Composite indices.
To make it easier to describe government responses in aggregate, OxCGRT calculates simple indices that combine individual indicators to provide an overall measure of the intensity of government response across a family of indicators. These indices are designed to provide a simple snapshot of the number and degree of government responses in a particular domain. Because we have not designed the indices for any specific analytic usage, we aim to make them as simple and transparent as possible. Those using the data to study the effect of government policies on outcomes of interest will therefore probably wish to modify the indices to suit the exact research questions they are seeking to answer (for example, selecting only the variables they believe to be relevant, or weighting those they believe to be of greater importance). In other words, we offer the indices as a convenient prix fixe menu option, but we urge users to tailor the data to their specific needs by ordering à la carte.
As noted above, we stress that composite indices have strengths and weaknesses as descriptive and analytic tools. Governments' responses to COVID-19 exhibit nuance and heterogeneity. These issues create substantial measurement difficulties when seeking to compare national responses in a systematic way. Composite measures, which combine different indicators into a general index, inevitably abstract away from these nuances. It is hoped that cross-national measures allow for systematic comparisons across countries. By measuring a range of indicators, they mitigate the possibility that any one indicator may be over-or misinterpreted. However, composite measures also leave out much important information and make strong assumptions about what kinds of information count. If the information left out is systematically correlated with the outcomes of interest, or systematically under-or overvalued compared with other indicators, such composite indices may introduce measurement bias.
Broadly, there are two common ways to create a composite index: a simple additive or multiplicative index that aggregates the indicators, potentially weighting some; or a latent variable approach, in which observed indicators are used to predict an unobserved variable (that is, the index). While there are several approaches to latent variable analysis, such as factor analysis or principal component analysis, item response theory (IRT) models are particularly suitable in this case due to the ordinal nature of most indicators. Each approach has advantages and disadvantages for different research questions.
OxCGRT uses simple, additive, unweighted indices because this approach is most transparent and easiest to interpret. Because the purpose of these indices is to describe the number and degree of government responses, we weight each indicator and each interval on the ordinal scale equally (within each indicator). In other words, the difference between a 1 and a 2 in a given indicator contributes as much to an index as the difference between a 2 and a 3. Again, this strong assumption will not be appropriate for all uses, so we encourage users to carefully consider which combinations and weightings of policies best capture the dimensions they are seeking to measure.
Despite this caveat, we find significant internal consistency within the indices. We used a latent variable approach-specifically, IRT-as a robustness check for the stringency index (see Supplementary Table 3). IRT models have been used extensively in education to estimate the ability of a student (latent variable) based on the responses to individual test questions (observable indicators). In our case, the individual policy levels (added to the geographic flag) are the observable indicators and the policy index is the unobservable variable. The scores generated by an IRT model were highly correlated to our linear index (r = 0.98), which reinforces the validity of our approach.
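For readers who want to see what such a latent-variable check looks like in practice, the sketch below substitutes a first principal component for the IRT model used in the paper. PCA is a deliberate stand-in here (it ignores the ordinal nature of the data that makes IRT preferable), and the file and column names are assumptions.

```python
# Rough latent-variable robustness check. The paper uses an IRT model; as a
# lighter-weight stand-in for the same idea, score each observation with the
# first principal component of the ordinal indicators and correlate it with a
# simple additive index. File and column names are assumptions.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

df = pd.read_csv("oxcgrt_indicators.csv")      # hypothetical extract
cols = [f"C{i}" for i in range(1, 9)]          # the eight containment indicators
X = df[cols].fillna(0)                         # missing -> 0, as in the indices

latent = PCA(n_components=1).fit_transform(X)[:, 0]   # latent "stringency" proxy
additive = (X.div(X.max()) * 100).mean(axis=1)        # unweighted rescaled average

# The sign of a principal component is arbitrary, so compare |r|.
print(abs(np.corrcoef(latent, additive.to_numpy())[0, 1]))
```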
OxCGRT publishes four indices that group different families of policy indicators:
• GRI (all categories)
• Stringency index (containment and closure policies, sometimes referred to as lockdown policies)
• CHI (containment and closure and health policies)
• ESI (economic support measures)
Each index is composed of a series of individual policy response indicators. For each indicator, we create a score by taking the ordinal value and subtracting half a point if the policy is targeted rather than general, if applicable. We then rescale each of these by their maximum value to create a score between 0 and 100, with a missing value contributing 0. These scores are then averaged to obtain the composite indices. This calculation is described in equation (1) below, where k is the number of component indicators in an index and I_j is the subindex score for an individual indicator:

index = (1/k) Σ_{j=1}^{k} I_j    (1)
We use a conservative assumption to calculate the indices. Where a datum for one of the component indicators is missing, it contributes 0 to the index. An alternative assumption would be to not count missing indicators in the score, essentially assuming they are equal to the mean of the indicators for which we have data. Our conservative approach therefore punishes countries for which less information is available, but also avoids the risk of over-generalizing from limited information.
The composition of each index is described in Table 2.
To facilitate usage, two versions of each indicator are present in the database: a regular version (which will return null values if there are not enough data to calculate the index) and a display version (which will extrapolate to smooth over the past 7 days of the index based on the most recent complete data).
Calculating subindex scores for each indicator. All of the indices use ordinal indicators where policies are ranked on a simple numerical scale. The project also records five non-ordinal indicators (E3, E4, H4, H5 and M1) but these are not used in our index calculations.
Some indicators (C1-C7, E1, H1, H6 and H7) have an additional binary flag variable that can be either 0 or 1. For C1-C7, H1 and H6, this corresponds to the geographic scope of the policy. For E1, this flag variable corresponds to the sectoral scope of income support. For H7, this flag variable corresponds to whether or not the vaccine is government funded.
The codebook has details about each indicator and what the different values represent.
Because different indicators (j) have different maximum values (N_j) in their ordinal scales and only some have flag variables, each subindex score must be calculated separately.
Each subindex score (I) for any given indicator (j) on any given day (t) is calculated by the function described in equation (2) based on the following parameters:
• v_{j,t}: the recorded policy value on the ordinal scale
• f_{j,t}: the recorded binary flag, if the indicator has one
• N_j: the maximum value of the indicator
• F_j: 1 if the indicator has a flag variable, 0 if it does not

I_{j,t} = 100 × (v_{j,t} − 0.5(F_j − f_{j,t})) / N_j    (2)

This normalizes the different ordinal scales to produce a subindex score between 0 and 100, where each full point on the ordinal scale is equally spaced. For indicators that do have a flag variable, if this flag is recorded as 0 (that is, if the policy is geographically targeted, or for E1 if the support only applies to informal-sector workers), this is treated as a half-step between ordinal values.
Note that the database only contains flag values if the indicator has a non-zero value. If a government has no policy for a given indicator (that is, the indicator equals zero), the corresponding flag is blank/null in the database. For the purpose of calculating the index, this is equivalent to a subindex score of zero. In other words, I_{j,t} = 0 if v_{j,t} = 0 (and if v_{j,t} = 0, the term F_j − f_{j,t} in equation (2) is also treated as 0).
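Equations (1) and (2) translate directly into code. The sketch below is a minimal implementation under the stated conventions (missing subindices contribute 0, and null flags at v = 0 are ignored); it is not the project's published calculation script.

```python
# Minimal implementation of equations (1) and (2): subindex scores and their
# unweighted average. This is a sketch, not the project's published code.
from typing import Optional, Sequence

def subindex(v: float, N: float, has_flag: bool, flag: Optional[int]) -> float:
    """Equation (2): I = 100 * (v - 0.5*(F - f)) / N, with I = 0 when v = 0."""
    if v == 0:
        return 0.0                       # no policy: flag is null and ignored
    F = 1 if has_flag else 0
    f = flag if flag is not None else 0
    return 100 * (v - 0.5 * (F - f)) / N

def composite_index(scores: Sequence[Optional[float]]) -> float:
    """Equation (1): mean of k subindices, with missing data contributing 0."""
    k = len(scores)
    return sum(s if s is not None else 0.0 for s in scores) / k

# Example: C1 (school closing, maximum 3) at level 2, geographically targeted
print(subindex(v=2, N=3, has_flag=True, flag=0))   # 50.0
print(composite_index([50.0, None, 100.0, 0.0]))   # 37.5 (missing counted as 0)
```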
Data usage. The data are published in real time. Unless a country has been updated in the past 24 h, there will be at least some gaps in coverage for the most recent days. In addition, if data are exported in the middle of an update, there can occasionally be missing data points in the time series. The dataset is also published with numbers of reported COVID-19 cases and deaths, drawn from open datasets at the European Centre for Disease Prevention and Control and Johns Hopkins University. Occasionally, there have been missing days for some countries in these sources (for example, if a country has not updated their case data over a long weekend). For this reason, particularly when using the dataset for descriptive analysis, we usually interpolate to cover any single missing days and use a carryforward function to extend the latest value of a missing variable.
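In pandas, that cleaning step might look like the following sketch: a limited interpolation for isolated missing days, followed by a forward fill of the latest value. The file and column names are assumptions, and limit=1 only approximates "single missing days" since it also fills the first day of a longer gap.

```python
# Sketch of the descriptive-analysis cleaning described above: a limited
# interpolation for isolated missing days, then a forward fill of the latest
# value. Column names are assumptions; limit=1 also fills the first day of a
# longer gap, so it only approximates "single missing days".
import pandas as pd

def clean_series(g: pd.DataFrame, col: str = "ConfirmedCases") -> pd.DataFrame:
    g = g.sort_values("Date").copy()
    g[col] = g[col].interpolate(limit=1, limit_area="inside")  # single-day gaps
    g[col] = g[col].ffill()                                    # carry forward
    return g

df = pd.read_csv("oxcgrt_with_cases.csv")  # hypothetical panel with case counts
cleaned = df.groupby("CountryCode", group_keys=False).apply(clean_series)
```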
In addition, we caution users against overinterpreting small fluctuations of single-digit changes in our index values. A small change in an index value may not necessarily represent a substantive change in the country's policy stance; it could, for example, just as easily represent a marginally different geographic coverage.
Comparison with related datasets. A number of datasets have tracked governments' responses to COVID-19 since the start of the pandemic 21 . While it is beyond the scope of this article to describe all of them in detail, here we report similarities and differences compared with two sister projects: CoronaNet 19 and the Complexity Science Hub COVID-19 Control Strategies List (CCCSL) 22 . While these projects overlap with OxCGRT to some extent, allowing for direct comparisons, the three projects also offer complementary attributes, expanding the knowledge and options available to the research community.
The three projects have constructed datasets with a number of similar features but also points of difference.
Unit of analysis. Both CoronaNet and CCCSL record government policies or measures as the unit of analysis; OxCGRT instead uses the jurisdiction day. While each approach can be converted into the other, the OxCGRT dataset is purpose-built as a panel. In contrast, CoronaNet and CCCSL are structured as unbalanced panels, requiring additional steps to convert into a format that facilitates conventional analysis.
Coverage of jurisdictions and dates.
OxCGRT publishes data on 184 countries and several subnational jurisdictions (50 states in the United States, 13 Canadian provinces and territories, 27 Brazilian states and over 50 cities, and the four UK devolved nations). CoronaNet publishes data on 195 countries and the following subnational jurisdictions: Brazil, China, Canada, France, Germany, India, Italy, Japan, Nigeria, Russia, Spain, Switzerland and the United States. CCCSL publishes data on 56 countries, 33 of them European. All three datasets aim to update continuously, although at the time of writing only OxCGRT had up-to-date information for all jurisdictions.
Coverage of government responses. All three datasets broadly cover what we have termed closure and containment and health policies. In addition, OxCGRT and CCCSL record economic support measures. OxCGRT uniquely covers public transportation-related and vaccine policies. However, it does not include states of emergency or enforcement measures (as CoronaNet does), nor does it include receiving international help, measures to secure supply chains, crisis management plans or port and ship restrictions (as CCCSL does).
Design of indicators. The 19 indicators of OxCGRT are either ordinal or numerical, with an additional binary flag that records whether measures are general or targeted. CoronaNet considers different elements, such as the directionality of policies (for example, inbound or outbound flights), the mechanism of travel (flights or trains), enforcement (mandatory or voluntary) and enforcers (national government or military). While the CCCSL covers fewer countries, their indicators are more granularly split into four levels, without an ordinal scale. This more descriptive approach then needs further processing before it can be analysed. While the detailed text descriptions enable rich qualitative analysis, they are less suited for quantitative analysis.
Data collection methods. All three datasets rely on hand-coded data entered by a large international pool of trained contributors into a central database. All three use publicly available sources, including policy documents and media reports. A key difference of the CoronaNet methodology is its use of a machine learning software instrument to extract data from news articles to aid contributors in their data collection. The CCCSL shares information sources in an open-source Zotero library.
From examination of the CoronaNet and CCCSL data and papers, it seems that OxCGRT is the only dataset to include archived web links to all original sources.
Looking at the data reveals further complementarities and differences between OxCGRT and related projects. OxCGRT most closely resembles CoronaNet, which also has global coverage for over 180 countries and which produces a government policy activity index that can be compared quantitatively to the OxCGRT indices. Our database is highly correlated with CoronaNet within a given country. Supplementary Fig. 3 shows the example of the United States, demonstrating how both indices track each other over time. Supplementary Fig. 4 quantifies this relationship for all countries, showing the average within-country correlation between the CoronaNet and OxCGRT government response indices. The average correlation (Pearson's r) is high at 0.85. This suggests robustness across the databases.
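Computing such a within-country correlation is straightforward once the two indices are merged on country and date; the sketch below assumes a merged frame with hypothetical column names oxcgrt_index and coronanet_index.

```python
# Sketch: average within-country Pearson correlation between two government
# response indices. Assumes a merged frame with hypothetical column names
# 'oxcgrt_index' and 'coronanet_index', one row per country-day.
import pandas as pd

merged = pd.read_csv("merged_indices.csv")  # hypothetical merged panel

within = (
    merged.groupby("CountryCode")
    .apply(lambda g: g["oxcgrt_index"].corr(g["coronanet_index"]))
)
print(within.mean())  # the paper reports an average r of 0.85
```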
At the same time, the OxCGRT indices provide new information beyond the CoronaNet index, as indicated by a positive but not perfect correlation within countries (Pearson's r = 0.85). This is even more the case across countries. Fig. 3 illustrates the cross-country relationship between the Oxford and CoronaNet databases (Pearson's r = 0.28). This lower cross-country correlation may be partially associated with the difference in the methodology used by CoronaNet to calculate its index (an ideal-point model of item response theory). These results reveal that the two databases are highly consistent within countries, enhancing confidence in both, and that the OxCGRT indices provide substantial new information, especially for cross-country comparisons and analyses.
We note a few other distinctions. First, our absolute indices show more variation. The CoronaNet index falls, by and large, within 10 points on a 100-point scale with a standard deviation of 1.2. In contrast, countries in our database span the entire 100-point range across countries and over time with a standard deviation of 12. This granularity is particularly essential to capture important variation in waxing and waning of policies over time, in addition to more sweeping lockdowns, which can be captured with coarser measurement.
In summary, OxCGRT complements related efforts in a few dimensions. Our database has global coverage, enables comparable within-and across-country analysis, will be consistently updated and expanded, is publicly available, is built with a team of coders with contextual expertise in the respective countries in which they focus, and has a systematic panel data structure that has enabled merging with other databases and quantitative analysis.
Reporting Summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
The most up-to-date OxCGRT data and documentation are available via the project GitHub repository at https://github.com/OxCGRT/covid-policy-tracker. Source data are provided with this paper.
Extended Data Fig. 1 | Currently available OxCGRT data across different levels of government.
1. The "TOTAL" dataset is hand-coded at the national level; at other subnational levels it combines the other datasets to report the overall policy settings that apply to residents within the jurisdictions.
2. NAT_WIDE does not exist. The "WIDE" label refers to data that ignore policies implemented by higher levels of government (for example, reporting policies that apply to a state without including federal government policies). There are no levels of government above national, so any NAT_WIDE record would simply duplicate NAT_TOTAL.
3. In practice, we would not record CITY_GOV. The data recorded as CITY_WIDE would include only decisions made by city governments and any lower-level governments (if they existed), while ignoring policies from state and national governments.
Reporting Summary
Corresponding author(s): Thomas Hale, thomas.hale@bsg.ox.ac.uk
Last updated by author(s): Feb 1, 2021
Software and code
Data collection: Data collection occurs through Internet searches that trained contributors enter into an Azure-based SQL database using a custom-made website. The database endpoints are made publicly available via a GitHub repository.
Data analysis: Stata 16.1 was used to manipulate and analyse the data.
Data
Data availability: The most up-to-date OxCGRT data and documentation are available via the project GitHub repository: https://github.com/OxCGRT/covid-policy-tracker. Data for all figures in the manuscript and supplementary information can be obtained from this repository. There are no restrictions on data availability.
Field-specific reporting: Behavioural & social sciences study design.
Study description: This 'resource' article presents a dataset that provides both quantitative and qualitative data. It does not carry out causal inference.
Research sample: The database includes information on nearly all countries in the world and a number of subnational jurisdictions in the United States, Brazil, Canada and the UK.
Sampling strategy: This article does not rely on sampling.
Data collection: As described in the Methods section 'Data collection and reliability' above.
Timing: Data collection began in March 2020 and continues through the present.
Data exclusions: The article does not contain analysis, but presents 'snapshots' of the data to demonstrate its potential uses. No data are excluded from these presentations.
Non-participation: There are no participants in this study.
Randomization: This study did not rely on randomization.
Conformal invariance of transverse-momentum dependent parton distributions rapidity evolution
I. INTRODUCTION
In recent years, the transverse-momentum dependent parton distributions (TMDs) [1][2][3][4] have been widely used in the analysis of processes like semi-inclusive deep inelastic scattering or particle production in hadron-hadron collisions (for a review, see Ref. [5]).
The TMDs are defined as matrix elements of quark or gluon operators with attached lightlike gauge links (Wilson lines) going to either +∞ or −∞ depending on the process under consideration. It is well known that these TMD operators exhibit rapidity divergencies due to infinite lightlike gauge links, and the corresponding rapidity/UV divergences should be regularized. There are two schemes on the market: the most popular is based on the Collins-Soper-Sterman [2] or soft-collinear effective theory [6] formalism, and the second one is adopted from small-x physics [7,8]. The obtained evolution equations differ even at the leading-order level and need to be reconciled, especially in view of the future electron-ion collider accelerator, which will probe the TMDs at values of Bjorken x between the small-x and x ∼ 1 regions.
In our opinion, a good starting point is to obtain conformal leading-order evolution equations. It is well known that at the leading order perturbative QCD (pQCD) is conformally invariant, so there is hope of getting an evolution equation without explicit running coupling from conformal considerations. In our case, since TMD operators are defined with attached lightlike Wilson lines, formally they will transform covariantly under the subgroup of the full conformal group which preserves this lightlike direction. However, as we mentioned, the TMD operators contain rapidity divergencies which need to be regularized. At present, there is no rapidity cutoff which preserves conformal invariance, so the best one can do is to find the cutoff which is conformal at the leading order in perturbation theory. In higher orders, one should not expect conformal invariance, since it is broken by the running of the QCD coupling. However, if one considers the corresponding correlation functions in N = 4 super Yang-Mills (SYM), one should expect conformal invariance. After that, the results obtained in N = 4 SYM theory can be used as a starting point of the QCD calculation. Typically, the result in N = 4 theory gives the most complicated part of the pQCD result, i.e., the one with maximal transcendentality. Thus, the idea is to find the TMD operator conformal in N = 4 SYM and use it in QCD. This scheme was successfully applied to the rapidity evolution of color dipoles. At the leading order, the Balitsky-Kovchegov evolution of color dipoles [9][10][11][12] is invariant under the SL(2,C) (Möbius) group. At the next-to-leading order (NLO), the "conformal dipole" with the α_s correction [13] makes the NLO Balitsky-Kovchegov evolution Möbius invariant for N = 4 SYM, and the corresponding QCD kernel [14] differs by terms proportional to the β function.
II. CONFORMAL INVARIANCE OF TMD OPERATORS
For definiteness, we will talk first about gluon operators with lightlike Wilson lines stretching to −∞ in the "+" direction. The gluon TMD (unintegrated gluon distribution) is defined as [15]

D(x_B, k_⊥; η) = ∫ d²z_⊥ e^{i(k,z)_⊥} D(x_B, z_⊥; η),

where |P⟩ is an unpolarized target with momentum p ≃ p⁻ (typically a proton) and n = (1/√2, 0, 0, 1/√2) is a lightlike vector in the "+" direction. Hereafter, [x, y] denotes a straight-line gauge link connecting the points x and y. To simplify the one-loop evolution, we multiplied F_{μν} by a coupling constant; since gA_μ is renormalization invariant, we do not need to consider self-energy diagrams (in the background-Feynman gauge). Note that z⁻ = 0 is fixed by the original factorization formula for particle production [5] (see also the discussion in Refs. [16,17]). The algebra of the full conformal group SO(2,4) consists of four operators P_μ, six M_{μν}, four special conformal generators K_μ, and the dilatation operator D. It is easy to check that in the leading order the following 11 operators act on gluon TMDs covariantly, while the action of the operators P_+, M_{+i} and K_+ does not preserve the form of the operator (2). The action of the generators (4) on the operator (2) is the same as the action on the field F_{−i} without gauge-link attachments. The corresponding group consists of transformations which leave the hyperplane z⁻ = 0 and the vector n invariant. Those include shifts in the transverse and "+" directions, rotations in the transverse plane, Lorentz rotations/boosts created by M_{−i}, dilatations, and special conformal transformations with a = (a⁺, 0, a_⊥). In terms of the "embedding formalism" [18][19][20][21] defined in six-dimensional space, this subgroup is isomorphic to the "Poincaré + dilatations" group of the four-dimensional subspace orthogonal to our physical lightlike "+" and "−" directions. As we noted, the infinite Wilson lines in the definition (2) of TMD operators make them divergent. As we discussed above, it is very advantageous to have a cutoff of these divergencies compatible with the approximate conformal invariance of tree-level QCD. The evolution equation with such a cutoff should be invariant with respect to the transformations described above.
In the next section, we demonstrate that the "small-x" rapidity cutoff enables us to get a conformally invariant evolution of TMD in the so-called Sudakov region.
III. TMD FACTORIZATION IN THE SUDAKOV REGION
The rapidity evolution of the TMD operator (1) is very different in the regions of large and small longitudinal separations z⁺. The evolution at small z⁺ is linear and double-logarithmic, while at large z⁺ the evolution becomes nonlinear due to the production of color dipoles typical for small-x evolution. It is convenient to consider as a starting point the simple case of TMD evolution in the so-called Sudakov region, corresponding to small longitudinal distances.
First, let us specify what we call the Sudakov region. A typical factorization formula for the differential cross section of particle production in hadron-hadron collisions [5,22] convolutes D_{f/h}(x, k_⊥; η), the TMD density of a parton f in hadron h, with σ(ff → H), the cross section of production of a particle H of invariant mass m²_H = q² ≡ Q² in the scattering of two partons. (One can keep in mind Higgs production in the approximation of a pointlike gluon-gluon-Higgs vertex.) The Sudakov region is defined by Q ≫ q_⊥ ≫ 1 GeV, since at such kinematics there is a double-log evolution for transverse momenta between Q and q_⊥. In the coordinate space, TMD factorization (6) takes the analogous form (7). As we mentioned, TMD operators exhibit rapidity divergencies due to infinite lightlike gauge links. The "small-x style" rapidity cutoff for longitudinal divergencies is imposed as an upper limit σ on the k⁺ components of gluons emitted from the Wilson lines. As we will see below, to get the conformal invariance of the leading-order evolution, we need to impose a cutoff on the k⁺ components of gluons correlated with the transverse size of the TMD, as specified in Eq. (10). Similarly, the operator Õ in Eq. (9) is defined with the rapidity cutoff for the β integration imposed by an analogous θ-function constraint with cutoff σ̃. The Sudakov region Q² ≫ q²_⊥ corresponds in the coordinate space to z²_{12,∥} ≪ z²_{12,⊥} (11), where z_{12} ≡ z_1 − z_2. In the leading log approximation, the upper cutoff for the k⁺ integration in the target matrix element in Eq. (7) is set by the transverse separation z_{12,⊥}, and similarly for the β-integration cutoff in the projectile matrix element. In the next section, we demonstrate that the rapidity cutoff (10) enables us to get a conformally invariant evolution of TMD in the Sudakov region (11).
A. Evolution of gluon TMD operators in the Sudakov region
In this section, we derive the evolution of the gluon TMD operator (8) with respect to the cutoff σ in the leading log approximation. As usual, to get an evolution equation, we integrate over momenta between the old and new cutoffs. To this end, we calculate the diagrams shown in Fig. 1 in the background field of gluons with k⁺ below the cutoff correlated with z_{12,⊥}. The calculation is easily done by the method developed in Refs. [24,25], and the result is Eq. (12), with the kernel K given by Eq. (13), where we suppress the arguments z_{1⊥} and z_{2⊥} since they do not change during the evolution in the Sudakov regime. The first two terms in the kernel K come from the "production" diagram in Fig. 1(a), while the last two terms come from the "virtual" diagram in Fig. 1(b). The result (13) can also be obtained from Ref. [25] by a Fourier transformation of Eq. (5.9) with the help of Eqs. (3.12) and (3.30) therein. The approximations for the diagrams in Fig. 1 leading to Eq. (13) are valid as long as condition (14) holds, which gives the region of applicability of the Sudakov-type evolution. Evolution equation (12) can be easily integrated using a Fourier transformation, and one easily obtains Eq. (16), where we introduced the notation ᾱ_s ≡ α_s N_c/(4π). It should be mentioned that the factor 4γ_E is "scheme dependent": if one introduces into the α integrals a smooth cutoff e^{−α/a} instead of the rigid cutoff θ(a > α), the value 4γ_E changes to 2γ_E.
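The γ_E shift between the two cutoff schemes can be verified with an elementary integral. The following is a minimal sketch assuming a single logarithmic α integral regularized at small α by ε; the quoted change 4γ_E → 2γ_E then corresponds to one γ_E per logarithmic integral, with two such integrals involved.

```latex
% One logarithmic alpha-integral with a rigid vs a smooth upper cutoff,
% regularized at small alpha by epsilon:
\int_{\epsilon}^{a}\frac{d\alpha}{\alpha}=\ln\frac{a}{\epsilon},
\qquad
\int_{\epsilon}^{\infty}\frac{d\alpha}{\alpha}\,e^{-\alpha/a}
   = E_1\!\left(\tfrac{\epsilon}{a}\right)
   \;\underset{\epsilon\to 0}{\longrightarrow}\;
   \ln\frac{a}{\epsilon}-\gamma_E .
% The rigid cutoff exceeds the smooth one by gamma_E per logarithmic integral;
% two such integrals account for the shift 4*gamma_E -> 2*gamma_E.
```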
It is easy to see that the rhs of Eq. (16) transforms covariantly under all transformations (4) except the Lorentz boost generated by M_{+−}. The reason is that the Lorentz boost in the z direction changes the cutoffs for the evolution.
To understand that, note that Eq. (15) is valid until σ > z⁺_{12}/z²_{12,⊥}, so the linear evolution (16) is applicable in the region between these two cutoffs. From Eq. (16), it is easy to see that the Lorentz boost z⁺ → λz⁺, z⁻ → (1/λ)z⁻ changes the value of the target matrix element ⟨p_A|O|p_B⟩ by exp{4λᾱ_s ln …}.
B. Evolution of quark TMD operators
A simple calculation of the evolution of the quark operator yields the same evolution (16) as for the gluon operators, with the trivial replacement N_c → C_F [26]. The factor g^{2C_F/b} (b ≡ (11/3)N_c − (2/3)n_f) is added to avoid taking into account the quark self-energy.
C. Evolution beyond Sudakov region
As we mentioned above, the TMD factorization formula (6) for particle production at q_⊥ ≪ Q translates to the coordinate space as Eq. (7) with the requirement z²_{12,∥} ≪ z²_{12,⊥}. During the evolution (16), the transverse separation between the gluon operators F_i and F_j remains intact, while the longitudinal separation increases. As discussed in Refs. [24,25], the Sudakov approximation can be trusted until the upper cutoff in the α integrals is greater than q²_⊥/(x_B s), which is equivalent to Eq. (14) in the coordinate space. If x_B ∼ 1 and q_⊥ ∼ m_N, the relative energy between the Wilson-line operators F and the target nucleon at the final point of the evolution is approximately m²_N, so one should use phenomenological models of TMDs with this low rapidity cutoff as a starting point of the evolution (16). If, however, x_B ≪ 1, this relative energy is much greater than m²_N, so one can continue the rapidity evolution in the region q²_⊥/(x_B s) > σ > m²_N/s beyond the Sudakov region into the small-x region. The evolution in a "proper" small-x region is known [27]: the TMD operator, known also as the Weizsäcker-Williams distribution, will produce a hierarchy of color dipoles as a result of the nonlinear evolution. However, the transition between the Sudakov region and the small-x region is described by a rather complicated interpolation formula [24]. In the coordinate space, this means the study of the operator O at z²_∥ ∼ z²_⊥, and we hope that conformal considerations can help us to obtain the TMD evolution in that region.
V. DISCUSSION
As we mentioned in the Introduction, TMD evolution is analyzed by very different methods at small x and at moderate x ∼ 1. In view of the future electron-ion collider accelerator, which will probe the region between small x and x ∼ 1, we need a universal description of TMD evolution valid in both limits. Since the two formalisms differ even at the leading order, where QCD is conformally invariant, our idea is to construct this universal description first in N = 4 SYM. As a first step, we found a conformally invariant evolution in the Sudakov region using our small-x cutoff with the "conformal refinement" (10).
To compare with the conventional TMD analysis, let us write down the evolution (19) of the "generalized TMD" [28,29], which coincides with the usual one-loop evolution of TMDs [30] up to the replacement 4γ_E − 2 ln 2 → 4γ_E − 4 ln 2. As we discussed, such a constant depends on the way of cutting the k⁻ integration, which should be coordinated with the cutoffs in the "coefficient function" σ(ff → H) in Eq. (6). Thus, the discrepancy is just like using two different schemes for the usual renormalization. It should be mentioned, however, that at ξ ≠ 0 the result (19) differs from the conventional one-loop result, which does not depend on ξ; see, e.g., Ref. [31].
VI. CONCLUSIONS
The first result of our paper is finding the subgroup of SO(2,4) which formally leaves TMD operators invariant. Although there has been some discussion of conformal invariance of the TMD approach in the literature [32,33], to the best of our knowledge, we present the first complete description of that subgroup.
The second result is related to the fact that conformal invariance is violated by the rapidity cutoff (even in N = 4 SYM). As we mentioned above, since tree-level QCD is conformally invariant, it is convenient to have a leading-order evolution which respects that symmetry, so the NLO corrections can be sorted out as conformal plus proportional to the β function. We have studied the TMD evolution in the Sudakov region of intermediate x and demonstrated that the rapidity cutoff used in the small-x literature preserves all generators of our subgroup except the Lorentz boost, which is related to the change of that cutoff. It should be mentioned that usually the analysis of TMD evolution in the x ∼ 1 region is performed with a combination of UV and rapidity cutoffs, which gives two evolution equations, in μ² and in ζ (related to rapidity). However, although the results of these two evolutions are known at the two-loop [34][35][36] and three-loop [37] levels, their relation to the conformal properties of TMD operators is not obvious. It would be interesting to check whether our cutoff corresponds to some conformal evolution path in the two-dimensional (μ², ζ) plane [38].
Our main outlook is to try to connect to the small-x region, first in N = 4 SYM and then in QCD. As we mentioned above, although the TMD evolution in the small-x region is conformal with respect to the SL(2,C) group, and our evolution (16) is also conformal (albeit with respect to a different group, of which SL(2,C) is a subgroup), the transition between the Sudakov region and the small-x region is described by a rather complicated interpolation formula [24] which is not conformally invariant. Our hope is that in a conformal theory one can simplify that transition using the conformal invariance requirement. The study is in progress.
Trialstreamer: Mapping and Browsing Medical Evidence in Real-Time
We introduce Trialstreamer, a living database of clinical trial reports. Here we mainly describe the evidence extraction component; this extracts from biomedical abstracts key pieces of information that clinicians need when appraising the literature, and also the relations between these. Specifically, the system extracts descriptions of trial participants, the treatments compared in each arm (the interventions), and which outcomes were measured. The system then attempts to infer which interventions were reported to work best by determining their relationship with identified trial outcome measures. In addition to summarizing individual trials, these extracted data elements allow automatic synthesis of results across many trials on the same topic. We apply the system at scale to all reports of randomized controlled trials indexed in MEDLINE, powering the automatic generation of evidence maps, which provide a global view of the efficacy of different interventions combining data from all relevant clinical trials on a topic. We make all code and models freely available alongside a demonstration of the web interface.
Introduction and Motivation
The highest-quality evidence to inform healthcare practice comes from randomized controlled trials (RCTs). The results of the vast majority of these trials are communicated in the form of unstructured text in journal articles. Such results accumulate quickly, with over 100 articles describing RCTs published daily, on average. It is difficult for healthcare providers and patients to make sense of and keep up with this torrent of unstructured literature.
Consider a patient who has been newly diagnosed with diabetes. She would like to consult (in collaboration with her healthcare provider) the available evidence regarding her treatment options. But she may not even be aware of what her treatment options are. Further, she may only care about particular outcomes (for instance, managing her blood pressure). Currently, it is not straightforward to retrieve and browse the evidence pertaining to a given condition, and in particular to ascertain which treatments are best supported for a specific outcome of interest.

Figure 1: A portion of an example evidence map, showing Interventions and their inferred efficacy for Outcomes, given the condition (or Population) of Type II Diabetes. These maps are generated automatically using the NLP system we describe in this work.
Trialstreamer is a first attempt to solve this problem, making evidence more browseable via NLP technologies. Figure 1 shows one of the key features of the system: an automatically generated evidence map that displays treatments (vertical axis) and outcomes (horizontal) identified for a condition specified by the user (here, migraines). We elaborate on this particular example to illustrate the use of the system in Section 3.
Trialstreamer aims to facilitate efficient evidence mapping with a user-friendly method of presenting a search across a broad field (here, a clinical condition) (Miake-Lye et al., 2016). We use NLP technologies to provide browseable, interactive overviews of large volumes of literature, on demand. These may then inform subsequent, formal syntheses, or they may simply guide exploration of the primary literature. In this work we describe an open-source prototype that enables evidence mapping, using NLP to generate interactive overviews and visualizations of all RCT reports indexed by MEDLINE (and accessible via PubMed).
When mapping the evidence, one is generally interested in the following basic questions:

• What interventions and outcomes have been studied for a given condition (population)?
• How much evidence exists, both in terms of the number of trials and the number of participants within these?
• Does the evidence seem to support use of a particular intervention for a given condition?
In the remainder of this paper we describe a prototype system that facilitates interactive exploration and mapping of the evidence base, with an emphasis on answering the above questions. The Trialstreamer mapping interface allows structured search over study populations, interventions/comparators, and outcomes, collectively referred to as PICO elements (Huang et al., 2006). It then displays key clinical attributes automatically extracted from the set of retrieved trials. This is made possible via NLP modules trained on recently released corpora (Nye et al., 2018; Lehman et al., 2019), described below.
System Overview
The evidence extraction pipeline is composed of four primary phases. First, text snippets that convey information about the trial's treatments (or interventions), outcome measures, and results are extracted from abstracts. Relations between these snippets are then inferred to identify which treatments were compared against each other, and which outcomes were measured for these comparisons. The extracted relations and evidence statements are then used to infer an overall conclusion about the comparative efficacy of the trial's interventions. Finally, the clinical concepts expressed in the extracted spans are normalized to a structured vocabulary in order to ground them in an existing knowledge base and allow for aggregations across trials.
A typical RCT report would pertain to a single clinical condition (the population), but might report multiple numerical results, each concerning a particular intervention, comparator, and outcome measure (which we describe as an ICO triplet).
Because the end-to-end task combines NLP subtasks that are supported by different datasets, we collected new development and test sets (160 abstracts in all, exhaustively annotated) in order to evaluate the overall performance of our system. Two medical doctors annotated these documents with all of the expressed entities, their mentions in the text, the relations between them, the conclusions reported for each ICO triplet, and the sentence that contains the supporting evidence for each (Lehman et al., 2019).
We were unable to obtain normalized concept labels for the ICO triplets due to the excessive difficulty of the task for the annotators.
Modeling decisions were informed by the 60-document development set, and we present evaluations of the first four information extraction modules with regard to the 100 documents in the unseen test set.
Preprocessing
Enabling search over RCT reports requires first compiling and indexing all such studies. This is, perhaps surprisingly, non-trivial. One may rely on "Publication Type" (PT) tags that codify the study designs of articles, but these are manually applied by staff at the National Library of Medicine. Consequently, there is a lag between when a new study is published and when a PT tag is applied. Relying on these tags may thus hinder access to the most up-to-date evidence available. Therefore, we instead use an automated tagging system that uses machine learning to classify articles as RCT reports (or not). This model has been validated extensively in prior work, and we do not describe it further here.
Next, we replace all abbreviations with their long forms using the Ab3P algorithm (Sohn et al., 2008). Using long forms has the complementary advantages of improving PICO labeling accuracy while also reducing the amount of context needed for prediction by downstream model components.
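To make the preprocessing step concrete, the following is a minimal Python sketch of the substitution stage, assuming the (short form, long form) pairs have already been produced by an abbreviation detector such as Ab3P run over the same abstract; Ab3P itself is a separate tool, so this sketch covers only the replacement logic.

```python
import re

def expand_abbreviations(text: str, pairs: list[tuple[str, str]]) -> str:
    """Replace every short-form occurrence with its long form.

    `pairs` holds (short_form, long_form) tuples, e.g. as produced by an
    abbreviation detector such as Ab3P run over the same abstract.
    """
    # Replace longer short forms first so e.g. "T2DM" is handled before "DM".
    for short, long in sorted(pairs, key=lambda p: -len(p[0])):
        # \b word boundaries keep us from rewriting substrings of other tokens.
        text = re.sub(rf"\b{re.escape(short)}\b", long, text)
    return text

print(expand_abbreviations(
    "RCTs of metformin were reviewed. The RCTs reported HbA1c.",
    [("RCTs", "randomized controlled trials"), ("HbA1c", "glycated hemoglobin")],
))
```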
PICO Elements
In order to identify the spans of text corresponding to the PICO elements of the trial, we use the EBM-NLP corpus (Nye et al., 2018). This is a dataset comprising ∼5,000 abstracts of RCT reports that have been annotated to demarcate textual spans that describe the respective PICO elements. In addition to these spans, it contains more granular annotations on information within spans (e.g., specific Population attributes like age and sex).

Figure 2: Overview of the evidence extraction pipeline, applied to all RCT article abstracts automatically identified. Text spans are first extracted from these abstracts, then assembled into relations that reflect the structure of the trials, and finally used to infer the effect interventions were reported to have on measured outcomes, as compared to the control treatment.
We follow our prior work (Nye et al., 2018) in training a BiLSTM-CRF model that learns to jointly predict each PICO element using EBM-NLP. Recent work has shown the efficacy of BERT (Devlin et al., 2018) representations in this space; e.g., Beltagy et al. (2019) achieved state-of-the-art performance on EBM-NLP using this approach. Therefore, for all text encoding we use BioBERT (Lee et al., 2019), which was pretrained on PubMed documents (for PICO tagging on EBM-NLP we found that BioBERT performed comparably to SciBERT; Beltagy et al., 2019). Results for Interventions/Comparators and Outcomes on our test set are reported in Table 1. Since these spans will serve as inputs to downstream models in the pipeline, high recall at the expense of precision is preferable; we will allow subsequent classifiers to discard spurious spans. We achieve 0.87 recall at the clinical concept level.
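As a concrete illustration of the tagging step, the sketch below runs a BioBERT-based token classifier with the Hugging Face transformers library. The checkpoint id points at the publicly released base BioBERT model (not the authors' fine-tuned tagger), and the BIO label set is an assumption for illustration; the classification head attached here is untrained, so real use requires fine-tuning on EBM-NLP first.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Public base checkpoint; the label set below is illustrative, not the
# authors' exact scheme, and the classification head starts untrained.
MODEL = "dmis-lab/biobert-base-cased-v1.1"
LABELS = ["O", "B-POP", "I-POP", "B-INT", "I-INT", "B-OUT", "I-OUT"]

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(MODEL, num_labels=len(LABELS))

def tag_pico(sentence: str) -> list[tuple[str, str]]:
    enc = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits[0]          # (seq_len, num_labels)
    preds = logits.argmax(-1).tolist()
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    # Drop [CLS]/[SEP] and pair each remaining wordpiece with its label.
    return [(t, LABELS[p]) for t, p in zip(tokens, preds)
            if t not in tokenizer.all_special_tokens]
```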
Evidence Statements
In addition to PICO elements, we extract all sentences in the abstract that are predicted to contain evidence concerning the relative efficacy of an Intervention. Our training data for this model are sourced from the Evidence-Inference corpus (Lehman et al., 2019), which comprises ∼10,000 annotated 'prompts' across ∼2,400 unique full-text articles. Each prompt specifies an Intervention, a Comparator, and an Outcome. Doctors have annotated the prompts for each article, supplying an extracted snippet that presents the conclusion for these ICO elements, as well as an inference concerning whether the Outcome increased, decreased, or remained the same in the intervention group (as compared to the comparator group). We frame evidence identification as a sentence classification task, and train a linear classification layer on top of BioBERT outputs. Our positive training examples are the sentences containing evidence snippets in Evidence-Inference, and we draw an equal number of length-matched negatives randomly from the rest of the document. As shown in Table 2, we achieve extremely high recall on the test set, but only middling precision. On inspection, many of these false positives are sentences from the conclusion that provide a high-level summary of the evidence, but are not the best evidence statement, as provided by the annotator, for any given ICO prompt.
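A small sketch of how such training pairs might be assembled; the ±20% length tolerance and the random fallback are assumptions for illustration, not the authors' exact matching rule.

```python
import random

def make_training_pairs(doc_sentences, evidence_idxs, tol=0.2):
    """Pair each evidence sentence with a length-matched negative.

    doc_sentences: all sentences from one article; evidence_idxs: indices of
    sentences annotated as evidence in Evidence-Inference. The +/-20% length
    tolerance is an assumption, not necessarily the authors' matching rule.
    """
    examples = []
    negatives = [i for i in range(len(doc_sentences)) if i not in evidence_idxs]
    for i in evidence_idxs:
        target = len(doc_sentences[i])
        matched = [j for j in negatives
                   if abs(len(doc_sentences[j]) - target) <= tol * target]
        # Fall back to any non-evidence sentence when no length match exists.
        j = random.choice(matched or negatives)
        examples.append((doc_sentences[i], 1))
        examples.append((doc_sentences[j], 0))
    return examples
```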
Relation Extraction
To transform the extracted spans into a semantic representation of the trial that can be used to construct an evidence map, we must identify all instances of an outcome being reported, and infer which two treatments were being directly compared as the intervention and comparator with respect to said outcome. Finally, given each assembled ICO prompt, we can then predict the trial's findings regarding whether the outcome increased, decreased, or was not statistically different under the intervention versus the comparator. In effect, we are aiming to jointly extract ICO prompts and infer the directionality of the results reported concerning these, whereas prior work (Nye et al., 2018; Lehman et al., 2019) has considered these problems only in isolation.
Our strategy for assembling ICO prompts is informed by the style in which results are commonly described in abstracts. When results are described in an article, the outcome is typically referenced explicitly, while the intervention and especially the comparator are often referenced either indirectly ("Mean headache duration was similar between groups"), or not at all ("No significant difference was observed for recovery time"). In the fully annotated dev set collected for this work, 87% of outcomes were described explicitly in an evidence span, while only 28% of treatments were explicit.
Motivated by this observation, we use the (explicit) outcomes extracted from an evidence snippet as a starting point; for each of these outcomes, the associated intervention and comparator are then inferred. This has the significant advantage of explicitly linking each outcome to the evidence that will be used to infer the directionality of the reported finding. This also provides the end-user with an interpretable rationale for the inference concerning treatment efficacy.
To link candidate extracted treatments to specific outcome mentions, we train a model that takes in a candidate treatment, an evidence statement containing the outcome, and the surrounding context from the document, and predicts whether the treatment is the participating intervention, the participating comparator, or not involved in the comparison. We experimented with different slices of the document as the context, and achieved the highest dev performance using the first four sentences of the article. The class probabilities from this model are used to rank the possible interventions and comparators for each outcome, and when sufficiently probable candidates are identified we generate a complete ICO prompt.
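A schematic of this ranking step is sketched below; the `clf` callable stands in for the fine-tuned BioBERT classifier, and the [SEP]-joined input packing is an assumption about formatting rather than the exact model input.

```python
def rank_treatments(treatments, evidence_sent, context, clf):
    """Score each candidate treatment span against one evidence sentence.

    `clf(text) -> (p_intervention, p_comparator, p_uninvolved)` is a stand-in
    for the fine-tuned classifier; the input packing here is illustrative.
    """
    scored = []
    for t in treatments:
        text = f"{t} [SEP] {evidence_sent} [SEP] {context}"
        p_int, p_cmp, _ = clf(text)
        scored.append((t, p_int, p_cmp))
    # Highest intervention probability and highest comparator probability
    # give the two treatments linked to this outcome mention.
    intervention = max(scored, key=lambda s: s[1])
    comparator = max(scored, key=lambda s: s[2])
    return intervention, comparator
```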
After assembling all ICO prompts in a document, we feed them to a final classifier to predict the directionality of findings for each outcome, with respect to the given intervention and comparator. This model is trained over the Evidence-Inference corpus using the provided I, C, and O spans coupled with the sentences that contain the corresponding evidence statement. Empirically, we found that signal for the classifier is dominated by the outcome text and evidence span, with almost no contribution from the intervention and comparator. This is unsurprising given the regularity of the language used to describe conclusions. The reported directionality of the result is almost exclusively framed with respect to the intervention, and only 4.0% of all outcomes ever have different results for another I+C linking within the same document. The best performing model input was simply [CLS] OUTCOME [SEP] EVIDENCE [SEP], and the results on the test set are reported in Table 3.
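The stated input format maps directly onto a sentence-pair encoding; a minimal sketch follows (the checkpoint id and the three-way label order are illustrative assumptions).

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")

# Assumed three-way label set for the direction of the reported finding.
DIRECTIONS = ["decreased", "no significant difference", "increased"]

def pack_direction_input(outcome: str, evidence: str):
    # Passing text_pair yields "[CLS] outcome [SEP] evidence [SEP]".
    return tokenizer(outcome, text_pair=evidence,
                     return_tensors="pt", truncation=True)

enc = pack_direction_input(
    "mean headache duration",
    "Mean headache duration was similar between groups.",
)
print(tokenizer.decode(enc["input_ids"][0]))
```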
Normalizing PICO Terms
In order to standardize the language used to categorize the articles with respect to their PICO elements, we turn to the structured vocabulary provided by the National Library of Medicine (NLM) in the form of Medical Subject Heading (MeSH) terms. This resource codifies a comprehensive set of medical concepts into an ontology that includes their descriptions, properties, and the structured relationships between them. Each article in the MEDLINE database maintained by the NLM is annotated with the relevant MeSH terms by expert library scientists (subject to the same lag that necessitates an RCT classifier instead of relying on annotated Publication Types).
To induce relevant MeSH terms for an extracted text span, we reproduced the method described in the Metamap Lite paper (Demner-Fushman et al., 2017) to extract MeSH terms describing the PICO elements. In short, we generated a large dictionary of synonyms for medical terms algorithmically using data from the UMLS Metathesaurus, with synonyms being matched to unique identifiers pertaining to concepts in the MeSH vocabulary. We used this dictionary to map matching strings in our extracted PICO text to MeSH terms, yielding a set of normalized concepts describing each of the population, intervention, and outcome spans in the documents.
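A toy sketch of the dictionary-matching idea (greedy longest match over a synonym table) is given below; the two MeSH identifiers shown are real headings, but the tiny dictionary is purely illustrative of what a UMLS-derived synonym table would contain.

```python
def build_matcher(synonym_to_mesh: dict[str, str]):
    """Greedy longest-match dictionary lookup, in the spirit of MetaMap Lite.

    `synonym_to_mesh` maps lowercased synonym strings (e.g. from the UMLS
    Metathesaurus) to MeSH unique identifiers; contents here are illustrative.
    """
    max_len = max(len(s.split()) for s in synonym_to_mesh)

    def match(span_text: str) -> set[str]:
        tokens = span_text.lower().split()
        found, i = set(), 0
        while i < len(tokens):
            # Try the longest candidate n-gram first, then shrink.
            for n in range(min(max_len, len(tokens) - i), 0, -1):
                cand = " ".join(tokens[i:i + n])
                if cand in synonym_to_mesh:
                    found.add(synonym_to_mesh[cand])
                    i += n
                    break
            else:
                i += 1
        return found

    return match

match = build_matcher({"type 2 diabetes": "D003924", "metformin": "D008687"})
print(match("patients with type 2 diabetes receiving metformin"))
```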
To evaluate the accuracy of this approach, we compare the differences between the MeSH terms produced by our system against those provided by the NLM for the 191 articles that comprise the test set for EBM-NLP.
The test articles are provided with an average of 14.8 MeSH terms per article, while our system induces 14.0 terms on average. The strictest evaluation for this module is to require exact matches between the predicted MeSH terms and the official MEDLINE terms, a daunting task given the ∼30,000 possible labels to choose from. However, because the concepts in the ontology exist at varying levels of specificity (for example, Migraine with Aura is a subset of Migraine Disorders), it is often the case that the predicted MeSH term is sufficiently close to the provided MeSH term for practical purposes, but differs in the level of specificity.
To better characterize the performance of our approach, we therefore also consider relaxing the equivalence criteria to include matching immediate parents or children in the MeSH hierarchy. This modification results in a 42% relative increase in recall and a 23% increase in precision, as shown in Table 4.

Table 4: Performance for predicting an article's exact MeSH terms using the rule-based system, run on both the automatically extracted spans and the expert-provided test spans.
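A small sketch of this relaxed scoring rule, using a toy two-node MeSH fragment; the real hierarchy would be loaded from the MeSH tree structure.

```python
def relaxed_matches(predicted: set[str], gold: set[str],
                    parents: dict[str, set[str]]) -> int:
    """Count predictions that hit a gold term or one of its MeSH neighbors.

    `parents` maps a MeSH id to its immediate parent ids (a toy stand-in for
    the real MeSH tree). A prediction counts as correct if it equals a gold
    term, or is an immediate parent or child of one.
    """
    def neighbors(term: str) -> set[str]:
        up = parents.get(term, set())
        down = {t for t, ps in parents.items() if term in ps}
        return {term} | up | down

    gold_expanded = set().union(*(neighbors(g) for g in gold))
    return sum(1 for p in predicted if p in gold_expanded)

# Toy hierarchy: "Migraine with Aura" is a child of "Migraine Disorders".
tree = {"D020325": {"D008881"}}  # child -> parents
print(relaxed_matches({"D020325"}, {"D008881"}, tree))  # -> 1
```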
We observe that while the absolute accuracy is not high, this technique generally captures the key terms for the PICO elements. The most common mistakes, shown in Table 5, mostly involve missing age or publication type terms, and systematic differences between the general MeSH terms commonly applied to articles (for example, we might apply Patients rather than Humans).
A more sophisticated alignment between the way MeSH terms are applied by experts and the terms produced by our system has the potential to improve the overall effectiveness of the tool; we intend to pursue this in future work.
Illustrative Example
To illustrate the envisioned use of our automatic mapping system, we return to the example we began with at the outset of this paper: seeking evidence concerning treatment of Type II Diabetes. To begin, the user specifies a condition (Population) of interest. We rely on Medical Subject Headings (MeSH) terms, which as discussed above form a structured vocabulary maintained by the NLM. We allow users to enter a search string and provide auto-complete options from the MeSH vocabulary. Users can additionally provide interventions or outcomes of interest to further narrow the search. We show an example of a constructed set of filters in Figure 3.
Once a set of search terms is specified, relevant RCTs are retrieved from the comprehensive and up-to-date database (we update this database nightly by scanning MEDLINE for new RCT reports using our RCT classifier). The interface then displays counts of unique interventions and outcomes covered by the retrieved trials. Each bar in these plots can be clicked to explicitly include that concept in the search terms, allowing for a data-driven approach to building up the search parameters via iterative refinement.
At this point, the evidence map shown in Figure 1 is also displayed, providing a summary of the evidence available for the effectiveness of the selected interventions with respect to their co-occurring outcomes. The user can mouse over plot elements to view tooltips that include snippets of contributing evidence from the underlying abstracts, or click through to browse these texts annotated with all of the extracted information, as shown in Figure 4.
User Study
To evaluate the system's utility for a real-world task, we provided the tool to a team of researchers at Cures Within Reach for Cancer (CWR4C). Domain experts reviewed the extracted ICO conclusions and automatically generated plots for a randomly selected subset of documents pertaining to cancer trials, a domain that is particularly challenging given the prevalence of complex compound interventions that often share individual components between trial arms.
The reviewers were asked to evaluate the types of mistakes made by the system as well as the overall precision and recall of the extracted conclusions for each document. Across 21 documents average precision was 54% and average recall was 75%, and the team expressed excitement about the efficacy of the system for their purposes. CWR4C has continued to work with this tool as a source of information about cancer-related clinical trials.
Conclusions
We have presented the evidence extraction component of Trialstreamer, an open-source prototype that performs end-to-end identification of published RCT reports, extracts key elements from the texts (intervention and outcomes descriptions), and performs relation extraction between these, i.e., attempts to determine which intervention was reported to work for which outcomes.
We use this pipeline to provide fast, on-demand overviews of all published evidence pertaining to a condition of interest. Moving forward, we hope to refine the linking of extracted snippets to structured vocabularies, and to run a more comprehensive user study to evaluate the use of the system in practice by different types of users. We also hope to develop a joint extraction and inference model, rather than relying on the current pipelined approach.
Continuous Rocuronium Administration Method Based on a Pharmacokinetic/Pharmacodynamic Model during Propofol, Sevoflurane, and Desflurane Anesthesia
Purpose: Although rocuronium bromide (Rb) is suitable for continuous administration, determination of optimal continuous doses is difficult due to individual differences. This study examines the efficacy of a continuous Rb administration method based on effect-site concentrations calculated by a pharmacokinetic/pharmacodynamic model during propofol, sevoflurane, and desflurane anesthesia. Methods: The 36 enrolled patients were equally divided into three groups (P: propofol, S: sevoflurane, and D: desflurane). After induction and administration of Rb 0.6 mg/kg, we calculated the simulated effect-site concentration at the point at which the first twitch (%T1) recovered to > 0% and defined this as the Rb recovery concentration (Rbr.c.), the level appropriate for continuous rocuronium administration. The continuous administration doses of Rb were adjusted to maintain the Rbr.c. during surgery. The Rbr.c. and the recovery time to %T1 > 25% were recorded for each type of anesthesia. Results: The Rbr.c. (μg/mL) for the P, S, and D groups were 1.54 ± 0.2, 1.24 ± 0.2, and 1.09 ± 0.2, respectively. Continuous administration doses (μg/kg/min) in the P, S, and D groups were 6.7 ± 0.9, 5.2 ± 1.0, and 4.5 ± 0.8, respectively. The Rbr.c. and continuous doses in the S and D groups were lower than in the P group. Neuromuscular relaxation during surgery was maintained more strongly in the S and D groups than in the P group. There was also a significantly prolonged recovery duration to %T1 > 25% in the D versus the other two groups (P < 0.05). Conclusion: Our continuous administration method was effective for maintaining sufficient muscle relaxation without excessively prolonged recovery under sevoflurane and desflurane as well as propofol anesthesia.
Introduction
Neuromuscular blocking agents are commonly administered intermittently. When given as a bolus infusion, however, large differences in blood concentration occur between the before- and after-infusion levels. Thus, this can cause an undesired overdose or insufficient muscle relaxation during general anesthesia. Since rocuronium bromide (Rb) has a rapid onset, an intermediate duration of action, and almost no accumulation effect, Rb is considered suitable for use by continuous infusion [1] [2]. As compared to the use of Rb bolus infusions, continuous administration of Rb is able to stably maintain both the blood and effect-site concentrations, and provides sufficient muscle relaxation during surgery without any observed body movements or bucking. However, there are large interindividual differences in sensitivity to Rb due to several factors, such as gender, age, body weight, skeletal muscle mass, and sensitivity to neuromuscular effects [3]- [7]. Therefore, it is essential to be able to determine the optimal rate of continuous administration of Rb for the individual patient. We previously reported an effective continuous rocuronium administration method based on the simulated effect-site concentrations obtained from a pharmacokinetic/pharmacodynamic model during propofol anesthesia [8]. Our results showed that the simulated rocuronium concentration at the time of recovery to a first twitch (%T1) > 0% after the initial bolus administration of rocuronium was a good indicator of the optimal effect-site concentration during continuous rocuronium administration under propofol anesthesia. Based on this result, we hypothesized that this continuous administration method would also be effective under inhaled anesthesia, because the continuous administration doses of Rb in our protocol were decided according to each patient under each anesthetic. In the present study, we further compared the efficacy of this method when using propofol, sevoflurane, and desflurane anesthesia. The specific aim of the present study was to clarify whether our method is as effective under sevoflurane and desflurane anesthesia as under propofol.
Methods
The present study was approved by the Ethics Committee of Kagoshima University Hospital and registered with the UMIN Clinical Trials Registry (UMIN 000012313) on November 18, 2013. It was conducted in accordance with the principles of the Declaration of Helsinki, and prior written informed consent was obtained from each patient.
This study enrolled 36 adult patients under 65 years of age with an American Society of Anesthesiologists physical status class of 1 to 2 who were undergoing elective general surgery. Patients were equally and randomly divided into three groups (P: propofol, S: sevoflurane, and D: desflurane). Patients with renal, hepatic, or neuromuscular diseases were excluded. Patients receiving medications known to interfere with neuromuscular blocking agents, such as antibiotics and anticonvulsants, as well as obese patients (BMI > 30 kg/m²), were also excluded. All patients fasted overnight and were not premedicated. On arrival in the operating room, patients were monitored with electrocardiography, non-invasive blood pressure, pulse oximetry, and bispectral index (BIS; Aspect Medical Systems, Norwood, MA, USA). Under local anesthesia, an intravenous catheter was inserted into a forearm vein and a cannula was inserted into the radial artery. Neuromuscular monitoring of the other arm was performed using a train-of-four watch (TOF Watch SX; Organon, Osaka, Japan). Stabilization and calibration of the monitoring device were performed according to good clinical research practice in pharmacodynamic studies of neuromuscular blocking agents [9]. After the acceleration transducer was placed in the hand adaptor, the fingers were fixed to an arm board.
Anesthesia was induced with propofol and remifentanil. After induction of anesthesia and stabilization of the muscle response to ulnar nerve stimulation, the TOF monitor was calibrated to obtain maximal nerve stimulation.
After induction, rocuronium bromide at a dose of 0.6 mg/kg of total body weight (TBW) was administered intravenously. Nerve stimulation using the single stimulation mode at 1 Hz was then conducted every 15 seconds.
When the %T1 decreased to 0%, which indicates a full neuromuscular block, tracheal intubation was performed. All patients were mechanically ventilated during their surgery. In the P group, the propofol infusion was adjusted based on the TBW using a target-controlled infusion pump (TE-371TCI; Terumo Corporation, Tokyo, Japan). Throughout all experimental procedures, the propofol infusion was started with a target concentration of 3 µg/mL. Inhalation of sevoflurane and desflurane was started at 0.6 minimum alveolar concentration (MAC) (end-tidal concentrations of 1% for sevoflurane and 3.6% for desflurane) in the S and D groups, respectively. The infusion rate of propofol and the inhaled concentrations of sevoflurane and desflurane were adjusted to maintain the BIS value within a range of 40 - 60. In all cases, the remifentanil infusion was adjusted depending on surgical invasiveness and based on the ideal body weight (IBW), calculated as height (m)² × 22, within the range of 0.1 - 0.5 μg/kg/min. The doses of propofol and rocuronium were adjusted according to the TBW. During the studies, the core temperature was maintained above 36.0˚C and the palm temperature above 32.0˚C by using a Bair Hugger® forced-air warming device (Arizant Healthcare, Inc., Eden Prairie, MN, USA).
When the %T1 increased to more than 3% in successive measurements after the initial administration of rocuronium, we defined the effect-site concentration of Rb determined by the Wierda pharmacokinetic-pharmacodynamic model [10] as the rocuronium bromide recovery concentration (Rbr.c.). At this point, we started a continuous administration of Rb at 7 μg/kg TBW/min. In order to maintain the Rbr.c. during the surgery, the administration rate was adjusted by changing the infusion rate of Rb in increments or decrements of 1 μg/kg/min every 10 minutes. When adjusting the rocuronium infusion rate, the anesthesiologist did not refer to the state of neuromuscular relaxation shown on the neuromuscular monitor. If a patient exhibited a %T1 increase of more than 20% during the continuous Rb administration, the experimental protocol dictated that the experiment was to be immediately discontinued. Just prior to the end of the surgery, Rb administration was discontinued, the time until recovery of the %T1 to over 25% was measured along with the effect-site concentration at this point, and we administered 0.2 mg/kg sugammadex at the recovery point of %T1 > 25%.
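To make the effect-site calculation concrete, below is a minimal Python sketch of a three-compartment pharmacokinetic model linked to an effect compartment by a first-order ke0, which is the general structure of Wierda-type models. The rate constants, central volume, and units here are illustrative placeholders rather than the published Wierda rocuronium parameters.

```python
import numpy as np

def simulate_effect_site(dose_mg_per_kg, minutes, dt=0.01):
    """Three-compartment PK model with an effect compartment (ke0 link).

    The rate constants and volume below are illustrative placeholders, not
    the exact Wierda rocuronium parameters; substitute published values for
    any real use. Returns the effect-site concentration over time (µg/mL).
    """
    # Illustrative micro rate constants (1/min) and central volume (L/kg).
    k10, k12, k21, k13, k31, ke0 = 0.10, 0.21, 0.13, 0.028, 0.010, 0.17
    v1 = 0.044
    a1 = dose_mg_per_kg          # bolus lands in the central compartment
    a2 = a3 = ce = 0.0
    n = int(minutes / dt)
    out = np.empty(n)
    for i in range(n):
        c1 = a1 / v1             # plasma concentration: mg/kg / (L/kg) = mg/L
        da1 = -(k10 + k12 + k13) * a1 + k21 * a2 + k31 * a3
        # Simultaneous Euler update (RHS evaluated with the old state).
        a1, a2, a3 = (a1 + da1 * dt,
                      a2 + (k12 * a1 - k21 * a2) * dt,
                      a3 + (k13 * a1 - k31 * a3) * dt)
        ce += ke0 * (c1 - ce) * dt   # first-order plasma -> effect-site link
        out[i] = ce
    return out

ce = simulate_effect_site(0.6, 60.0)   # 0.6 mg/kg bolus, simulated for 60 min
```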
Statistical Analysis
All results are shown as the mean and the standard deviation (SD). We analyzed the differences among the three groups using one-way analysis of variance (ANOVA), and post-hoc testing was conducted using the Bonferroni method. P-values < 0.05 were considered statistically significant. Statistical analysis was performed using GraphPad Prism version 5.0 software (GraphPad Software, San Diego, CA, USA).
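A minimal sketch of the described analysis (one-way ANOVA followed by Bonferroni-corrected pairwise t-tests) with SciPy, using synthetic values rather than the study data:

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Synthetic Rbr.c. values for illustration only (not the study data).
rng = np.random.default_rng(0)
groups = {
    "P": rng.normal(1.54, 0.2, 12),
    "S": rng.normal(1.24, 0.2, 12),
    "D": rng.normal(1.09, 0.2, 12),
}

f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

# Bonferroni post-hoc: multiply each pairwise p-value by the number of tests.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p_raw = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: corrected p = {min(1.0, p_raw * len(pairs)):.4f}")
```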
Results
Table 1 presents the characteristics of the 36 patients (data are expressed as mean ± standard deviation, range, and ratio of patients; there were no significant differences in background among the three groups). The Rbr.c. (µg/mL) for the P, S, and D groups were 1.54 ± 0.2, 1.24 ± 0.2, and 1.09 ± 0.2, respectively. The Rbr.c. in the P group was significantly higher than that observed in both the S and D groups (P < 0.05; Figure 1). Recovery concentrations (µg/mL) at the time that the %T1 returned to over 25% were significantly higher in the P versus the S and D groups (P < 0.05; Figure 2(a)). Recovery durations (min) required for the %T1 to return to over 25% in the P, S, and D groups were 15.7 ± 7.0, 19.6 ± 7.6, and 28.2 ± 6.3, respectively, with the recovery duration significantly prolonged in the D versus the other two groups (P < 0.05; Figure 2(b)). Continuous administration doses (µg/kg/min) in the P, S, and D groups were 6.7 ± 0.9, 5.2 ± 1.0, and 4.5 ± 0.8, respectively. The level of the continuous doses in the S and D groups was significantly lower than that observed in the P group (Figure 3). Figure 4 shows the ratio of %T1 values during the continuous administration of Rb (calculated as a percentage). The muscle relaxation effects for both sevoflurane and desflurane anesthesia were stronger than those observed under propofol anesthesia.
Discussion
In the present study, the results showed that neuromuscular relaxation could be maintained without excessively prolonged effects under both sevoflurane and desflurane anesthesia, as well as propofol anesthesia. Deeper and more stable neuromuscular relaxation was achieved under inhaled anesthesia; consequently, the recovery time to %T1 > 25% was longer than under propofol anesthesia.
As compared with propofol anesthesia, the Rbr.c., which was defined as the concentration of Rb at the point where the recovery of the %T1 was > 0% after the bolus administration, was significantly lower for both sevoflurane and desflurane anesthesia in the current study. Previous researchers have reported that the effects of neuromuscular blocking agents are prolonged when using inhaled anesthetics such as sevoflurane, isoflurane, and desflurane [11]- [13]. Although desflurane by itself exhibits few neuromuscular blocking effects when used at clinical concentrations [14], it has been shown to enhance the effects of neuromuscular blocking agents to the same degree as seen for sevoflurane [11]. In contrast, propofol has neither neuromuscular blocking nor enhancing effects. Accordingly, many other studies have shown that the potency of Rb was increased by 25% - 40% during inhalation anesthesia with agents other than propofol [12] [13]. These previous studies, which were conducted with inhaled anesthesia at a MAC of over 1, showed that the neuromuscular blocking effects were dependent upon both the time and the concentration of the inhaled anesthetics. In our current study, we adopted 0.6 MAC for the maintenance concentrations of sevoflurane and desflurane, as we were also administering remifentanil, which reduces the needed concentrations of inhaled anesthetics.
By predetermining and adjusting an adequate continuous Rb administration dose in accordance with each anesthetic agent, it should be possible to maintain efficient neuromuscular relaxation during surgery and to avoid prolonged recovery times. For example, Bock et al. reported that, as compared to propofol, the doses for the continuous administration of Rb needed to maintain the same degree of neuromuscular relaxation were smaller for sevoflurane and desflurane [12]. In addition, even though the doses were smaller, there were no significant differences among the three anesthetic drugs in the recovery time from the neuromuscular blocking conditions. Another study reported that the continuous administration rates of Rb required to keep the %T1 between 3% - 10% were lower for sevoflurane versus propofol anesthesia [15]. In the present study, we also found that the required continuous administration doses were lower for the inhaled versus the propofol anesthesia. However, the neuromuscular relaxant effects were stronger and there was a longer time to recovery of %T1 > 25% for the inhaled anesthetic agents. With regard to our current results, it should be noted that the surgical times in our study were longer as compared to the previous studies, and that inhaled anesthesia, especially sevoflurane, is able to increase and enhance the neuromuscular relaxation effects over time [16]. Even though these differences could have had an influence on our results, our study found that the continuous administration doses of Rb were smaller for the inhaled anesthesia cases, in which there were deeper neuromuscular relaxation conditions and significantly prolonged recovery durations after stopping the continuous administration of Rb.
In the present study, the recovery times for achieving %T1 > 25% were 15.7 ± 7.0, 19.6 ± 7.6, and 28.2 ± 6.3 minutes for the P, S, and D groups, respectively. Although the longest recovery time needed for a desflurane anesthesia case was 38 min, we did not observe any other cases with excessively prolonged recovery duration. We found the data for recovery to %T1 > 25% to be both sufficient and within an acceptable range for clinical use. Thus, this information should be of great benefit when evaluating the conditions of the patients and determining when it is both appropriate and safe to stop the continuous administration of Rb. Moreover, the administration of sugammadex can be used to completely antagonize any neuromuscular blocking effects induced by Rb, as the reversal effects of sugammadex are equally effective for both sevoflurane and propofol [17]. Indeed, in our study, we confirmed that the neuromuscular relaxation effects were completely and rapidly reversed to %T1 > 100% in all cases when we administered sugammadex at the recovery point of %T1 > 25%.
When using continuous administration of Rb, the blood concentrations are more stable, which helps to keep the patients more stable and improves the surgical conditions. Previous experimental findings have demonstrated the advantages of using Rb, which exhibits no accumulation and has very few potent metabolites [18]. While several continuous administration methods, including target-controlled infusion methods, have been examined and proposed for routine use [19], many difficulties have been encountered clinically. These problems have been speculated to be primarily caused by the large individual variations seen among patients. Thus, we examined the efficacy of a new method of continuous administration of Rb that uses the Rbr.c. as the indicator for determining the optimal administration level.
Neuromuscular relaxation effects were found to be stronger when using inhaled anesthesia, especially desflurane. Although we defined the Rbr.c. values after the first bolus administration of Rb, there is a possibility that the inhaled anesthesia could have had an enhancing effect on the neuromuscular relaxation during the continuous administration of Rb. The gold standard for neuromuscular monitoring is the use of the TOF Watch to monitor the contraction of the adductor pollicis muscle [9]. However, even if there is complete paralysis of the adductor pollicis muscle, other muscles such as the diaphragm and the abdominal muscles might be able to partially recover from the neuromuscular blockade [20]. Since this could lead to moving or bucking of the patients and an inadequate surgical field during the surgery, deeper neuromuscular relaxation effects are occasionally needed. Our current study showed that the %T1 values could be maintained in all cases at < 10% during continuous Rb administration. In fact, in nearly half of the cases under propofol anesthesia, and throughout the majority of the duration under the inhaled anesthesia, the %T1 values were 0%. In addition, there were no harmful events such as moving or bucking of patients, nor any claims of an inadequate field by surgeons during any of the surgeries. Therefore, we believe that our continuous administration method is able to maintain sufficient neuromuscular relaxation under all general anesthesia procedures, especially inhaled anesthesia.
One of the limitations of our current study was that we could not show any correlation between the duration of continuous Rb administration and the time to achieve recovery of %T1 > 25% (data not shown). Even so, it should be noted that inhaled anesthesia was able to induce enhanced neuromuscular relaxant effects in a time-dependent manner. Moreover, as compared to propofol and to sevoflurane, desflurane exhibited stronger neuromuscular relaxant effects and required a longer time to achieve recovery of %T1 > 25%. Previous studies have reported that, even though the results were not significant, desflurane did tend to enhance the neuromuscular blocking to a greater degree than sevoflurane [10] [11]. Comparisons between desflurane and sevoflurane regarding which concentrations of inhaled anesthesia are most effective, and the associated recovery times, will need to be examined in further studies.
Conclusion
The simulated effective concentration of Rb at the point where %T1 recovers to > 0% can be used as an indicator to define the optimal continuous administration rates of Rb. This continuous administration method of Rb proved effective in maintaining sufficient and stable muscle relaxation without excessively prolonging the effects under sevoflurane and desflurane, as well as propofol anesthesia. As compared to propofol, inhaled anesthetics exhibit enhanced neuromuscular blocking effects, with the enhanced effects perhaps stronger under desflurane versus sevoflurane anesthesia.
Figure 2. (a) Simulated effect-site concentrations of Rb (µg/mL) at the time of recovery of the %T1 to over 25% in the P, S, and D groups were 0.86 ± 0.4, 0.64 ± 0.3, and 0.45 ± 0.1, respectively. There were significantly lower Rb concentrations in the S and D groups at these points, as compared to the P group (P < 0.05). P: propofol; S: sevoflurane; D: desflurane. (b) Recovery durations (min) until the %T1 recovered to over 25% in the P, S, and D groups were 15.7 ± 7.0, 19.6 ± 7.6, and 28.2 ± 6.3, respectively. The recovery duration in the D group was significantly prolonged as compared to the P and S groups (P < 0.05). P: propofol; S: sevoflurane; D: desflurane.
Figure 3. Continuous administration doses (µg/kg/min) in the P, S, and D groups were 6.7 ± 0.9, 5.2 ± 1.0, and 4.5 ± 0.8, respectively. The continuous doses in the S and D groups were significantly lower than the dose in the P group. P: propofol; S: sevoflurane; D: desflurane.
Use of Asian selected agricultural byproducts to modulate rumen microbes and fermentation
In the last five decades, attempts have been made to improve rumen fermentation and host animal nutrition through modulation of rumen microbiota. The goals have been decreasing methane production, partially inhibiting protein degradation to avoid excess release of ammonia, and activating fiber digestion. The main approach has been the use of dietary supplements. Since growth-promoting antibiotics were banned in European countries in 2006, safer alternatives including plant-derived materials have been explored. Plant oils, their component fatty acids, plant secondary metabolites and other compounds have been studied, and many originate or are abundantly available in Asia as agricultural byproducts. In this review, the potency of selected byproducts in inhibiting methane production and protein degradation, and in stimulating fiber degradation, is described in relation to their modes of action. In particular, cashew and ginkgo byproducts containing alkylphenols to mitigate methane emission, and bean husks as a source of functional fiber to boost the number of fiber-degrading bacteria, are highlighted. Other byproducts influencing rumen microbiota and fermentation profiles are also described. Future application of these feed and additive candidates is highly dependent on a sufficient, cost-effective supply and optimal usage in feeding practice.
The rumen is a dense and diverse microbial ecosystem, capable of transforming fibrous plant material and nonprotein nitrogen into valuable products, such as short chain fatty acids and microbial protein [1]. However, this fermentation process is accompanied by the synthesis of non-beneficial products such as methane and is not always efficient, due to the limited supply of essential nutrients and/or inadequate feed formulation. Therefore, particular attention should be paid to dietary regimens that optimize fermentation. Several dietary supplements have been proposed for such a purpose [2][3][4][5][6], targeting inhibition of methane and rapid ammonia release, and improvement of fiber degradation.
Inhibition of methane production and excess ammonia formation conserves dietary energy and proteins, respectively. These effects were observed after supplementation with antibiotics [4] and halogenic chemicals [7], the majority of which have now fallen out of favor due to global concerns regarding food safety and environmental burden. Therefore, alternative agents are required, preferably naturally occurring materials such as plant resources [3,8]. The main components, most of which are plant secondary materials, have been screened out. They have ecological functions as chemical messengers between plants and the environment, often exhibiting antimicrobial activity [9]. Such alternatives have been actively explored, especially since growth-promoting antibiotics were banned in Europe in 2006.
Fiber digestion is carried out by fiber-digesting rumen microbes, mainly bacteria [10]. Therefore, preferential activation of fibrolytic rumen bacteria is important. Bacterial growth can be stimulated by vitamins, amino acids, branched-chain fatty acids and other nutrients. Additionally, the use of easily degradable fiber as a strategy has been known since the 1980s [11][12][13]. Evaluation of supplements as boosters of fiber degradation should include the determination of fiber digestibility as well as the analysis of rumen bacterial abundance and activity. A mechanistic understanding of the expected events would confirm theoretical knowledge, making supplement use more acceptable to farmers. Materials that have been proposed in the last decade include agricultural byproducts deemed safe, cost-effective and easily acceptable among farmers and product consumers.
This review describes selected agricultural byproducts that are available in the Asian region as potent feed or additive candidates for the above purposes. Characteristics, actions and benefits of such agricultural byproducts are discussed from the viewpoint of modulation of rumen microbiota and fermentation.
Selected byproducts containing plant secondary compounds as inhibitors of formation of non-beneficial fermentation products

Cashew byproduct
Cashew nut shell liquid (CNSL), a byproduct of cashew nut production that accounts for about 32% of the shell, has many industrial applications and is used as a raw material for products such as paints, brake linings, lacquers and coatings [14]. The global production of CNSL is estimated at 450,000 metric tonnes per year [15], providing a readily available supply of CNSL. Vietnam and India are major CNSL-producing countries. This liquid also exhibits a wide range of biological activities, as it contains compounds with antimicrobial [16], antioxidative [17] and antitumor [18] properties, represented by anacardic acid, cardanol and cardol, which are all salicylic acid derivatives with a carbon-15 alkyl group. These phenolic compounds, especially anacardic acid, are reported to inhibit a variety of bacteria [19]. Proportions of these alkyl phenols in CNSL vary with producing area (cultivar) and deshelling process (heating). Therefore, the function of CNSL as a rumen modifier can also vary with these factors, as indicated in Tables 1 and 2. An early study by Van Nevel et al. [20] first indicated that anacardic acid could be used as a propionate enhancer in the rumen. Anacardic acid is found in cashew and ginkgo trees, particularly in their seeds. As cashew is the more abundant plant material, it is considered a more useful source of anacardic acid. The main action of anacardic acid and related phenolics is a surfactant action that inhibits mainly Gram-positive bacteria [16] lacking an outer membrane. Such cells are physically disrupted by anacardic acid. This selective inhibition of Gram-positive rumen bacteria might result in the alteration of rumen microbiota and fermentation products.
Indeed, Watanabe et al. [21] first indicated that unheated CNSL dramatically reduced methane production while increasing propionate production in batch cultures. They also reported that CNSL reduced methane levels in a rumen simulation technique (RUSITEC) fermenter, accompanied by drastic alterations in rumen microbiota. Quantitative polymerase chain reaction (PCR) demonstrated that formate and/or hydrogen producing bacteria decreased in abundance, while succinate and/or propionate producing bacteria increased with CNSL supplementation. In feeding experiments using cattle, we observed a similar response to CNSL [22]; specifically, a reduction in methane emission (19-38%) accompanied by alteration in the ruminal abundance of bacterial species responsible for methane and propionate production, causing a shift in hydrogen flow [23]. However, as expected, alterations of microbiota and fermentation profile in these feeding studies were less pronounced than those in in vitro studies. In feeding experiments using sheep, microbial and metabolic alterations were also observed, although alterations in the abundance of bacterial and archaeal members in sheep rumen (Suzuki et al. unpublished results) were not the same as those observed in cattle rumen (Su et al. unpublished results). In fact, in response to CNSL feeding, groups belonging to Proteobacteria, relatives of Succinivibrio and Succinimonas, showed increased levels in the rumen of cattle and sheep, while increases in Methanomicrobium mobile and Methanobrevibacter wolinii were respectively observed in the rumen of cattle and sheep.
As CNSL administration did not adversely affect digestibility in either cattle or sheep, this agricultural byproduct can be recommended for use as a potent methane-inhibiting and propionate-enhancing agent, due to its effects on rumen microbiota. However, the long-term effects of CNSL should be evaluated for practical application, as was emphasized for the ionophore monensin [24], which showed a reduction in efficacy with increased feeding period duration.
Later in vitro and in vivo studies on CNSL do not wholly support the above favorable results, due to the low level of CNSL supplementation and heat treatment during CNSL preparation (Table 1). Although CNSL supplementation decreased methane production, inhibition was only 18% [25], while it was 57% in the similar batch culture system used in our study [21]. CNSL feeding to dairy cows decreased methane emission by only 8% [26]. The differences between these later results and our initial ones might lie in the quantity and quality of CNSL. Danielson et al. [25] tested a supplementation level of CNSL three times lower than the level examined by Watanabe et al. [21], and Branco et al. [26] used heat-processed CNSL that contains cardanol as the main phenolic compound instead of the most potent phenolic, anacardic acid [27][28][29]. Microbial responses in these later studies were likewise less clear. Therefore, this cashew byproduct should be used in unheated form at an optimized supplementation level. Of the alkylphenols present in CNSL, anacardic acid is the most functional, but it is decarboxylated and converted to cardanol by heating and long exposure to oxygen. Therefore, the preparation and storage of CNSL are important to maintain its functionality. Recently, we found that CNSL feeding improved antioxidative status in cattle, causing higher free radical scavenging activity and lower lipid peroxidation products in the rumen and blood serum (Konda et al. unpublished results). Although the mechanisms involved in these changes are not yet clear, anacardic acid, which possesses antioxidative activity [17], can affect these parameters directly and/or indirectly through alteration of rumen microbiota and their fermentation products.
Ginkgo byproduct
Another source of anacardic acid is the ginkgo plant, grown widely in Far-East countries such as China, Korea and Japan. Industrial uses of ginkgo are its leaves for medicinal use (China) and its nuts for food (Japan). Leaf extracts for medicinal use are even exported to European countries and have also been evaluated as a rumen modifier [30]. Ginkgo fruit is a byproduct of the ginkgo nut separation process (unsuitable for human food use due to its peculiar smell), yielding ca. 2,600 metric t/yr in Japan, corresponding to 230% of nut production [31]. Therefore, the biomass of ginkgo fruit is much smaller in comparison with that available for CNSL. In this regard, its use as a feed additive might be limited to local areas.
The main phenolic of ginkgo is anacardic acid, but with different alkyl groups in comparison with those of cashew (C13:0, C15:1 and C17:1 for ginkgo vs. C15:1, C15:2 and C15:3 for cashew). An in vitro evaluation of ginkgo fruit extract as a rumen modifier using batch and RUSITEC systems showed that the extract decreased methane production in a dose-dependent manner, and microbial responses were similar to those observed for CNSL (Tables 1 and 2). Both CNSL [21] and ginkgo fruit extract (Oh et al. unpublished results) decrease ammonia concentration in RUSITEC. Since both inhibit the growth of proteolytic, peptidolytic and deaminating rumen bacteria in pure culture, feeding of these extracts may spare dietary protein, peptides and amino acids. In fact, the growth of hyper-ammonia-producing rumen bacteria was markedly inhibited by either form of anacardic acid, contained in CNSL or in ginkgo fruit extract (Oh et al. unpublished results). Manipulation of protein and amino acid degradation is important, because excreted ammonia can be a source of nitrous oxide, which has a much higher global warming potential than methane. Also, a decreased ammonia level in the rumen, provided it does not fall below 5 mgN/dL so as to ensure microbial protein synthesis [32], may improve feed nitrogen economy. Since ginkgo fruit has not been tested in a feeding study, in vivo evaluation of rumen and animal responses, including palatability of diets supplemented with ginkgo fruit, remains to be performed.
Tea byproduct
China is one of the biggest tea producers globally. Tea seed meal after oil extraction has previously been considered worthless. However, saponins contained in the tea seed meal have been found to exert beneficial antiprotozoal and antimethanogenic effects through their surfactant action [33]. The significance of tea saponins, and of saponins from other source plants such as yucca and quillaja, for use in ruminant feed has been demonstrated [33,34]. Table 3 shows the functionality of saponins of tea seed, tea seed meal and other source plants (Thai blueberry, fenugreek, and mangosteen). A series of studies on tea seed saponins revealed that the addition of tea seed saponins to in vitro cultures killed up to 79% of protozoa. Moreover, in vivo experiments (feeding of tea seed saponins to lambs at 3 g/d) showed that the relative number of rumen protozoa to rumen bacteria was reduced by 41% after 72 d of tea saponin administration [35]. Using denaturing gradient gel electrophoresis (DGGE) analysis, a significantly lower diversity in protozoa was reported [36], indicating that the antiprotozoal activity of tea saponins might not be transient. Although an exception was observed by Ramirez-Restrepo [37], the negative effect of tea saponins on rumen protozoa is consistent across in vitro and in vivo conditions, and is considered one of the main factors modulating rumen fermentation in relation to the bacterial and archaeal changes discussed below.
The effect of tea saponins on the ruminal abundance of methanogenic archaea was not significant, while they drastically decreased the expression of the methyl coenzyme M reductase gene (mcrA) in the rumen [38]. This suggests that selective inhibition of methanogens might be involved in the antiprotozoal action. Using defaunated and refaunated sheep, Zhou et al. [36] showed that tea saponins reduce methane production by inhibiting protozoa, most likely in coordination with their suppressive effects on protozoa-associated methanogens. Indeed, the presence and functional significance of protozoa-associated methanogens have been demonstrated [39,40]. Saponins alter the rumen microbial community, with a decrease in protozoa and fungi and an increase in Fibrobacter succinogenes [38,41]. The latter can compensate for fiber digestion possibly depressed by the decreased number of fungi, leading to a fermentation shift toward less methane and more propionate, since protozoa and fungi produce hydrogen, while F. succinogenes produces succinate as a propionate precursor. Recently, Belanche et al. [42] reported decreased diversity in the archaeal community upon supplementation with ivy fruit saponins in a RUSITEC fermenter: Methanomassiliicoccaceae is substituted by Methanobrevibacter, a theoretically less active community member even though it is predominant in the rumen [43]. From these reports, it is apparent that the mechanism involved in the modulation of rumen fermentation by saponins remains to be fully characterized. Ruminal responses may differ depending on the saponins, which occur in a number of plants and comprise a variety of molecules. Tea saponins are, as indicated by a review article [34], one of the promising rumen modifiers, without negative influence on feed intake and digestibility if supplemented properly (3-5 g/d for goats and lambs).
Tea byproducts also contain catechins, which can increase the proportion of unsaturated fatty acids in goat meat [44], presumably through alterations in the rumen microbiota. Another beneficial action of tea catechins is to improve the antioxidant status of beef once the catechins are ingested and absorbed by the animal; this was inferred from experiments in which tea catechins were added directly to beef [45].
Other byproducts
Other materials potentially modulating rumen fermentation are also shown in Table 3. Fenugreek is cultivated in western and southern Asian regions, where it is used as a spice, seasoning and fragrance, and in the form of sprouts; it is also known as a source of saponins. Fenugreek seed extract rich in saponin (0.29 mg/mL of diluted rumen fluid) inhibits the growth of protozoa and fungi and increases the growth of fibrolytic bacteria, leading to a 2% decrease in methane production in vitro [41]; a feeding assessment is still awaited.
The seeds of Thai blueberry, Antidesma thwaitesianum Muell. Arg., which contain condensed tannin, were evaluated as a ruminant feed [46]; goats fed a diet containing this seed meal from the wine and juice industry (inclusion of 0.8-2.4% in DM) did not show any differences in feed intake, digestibility, ruminal pH or ammonia-nitrogen, while they showed a dose-dependent shift in short chain fatty acid production toward more propionate and less acetate and butyrate. Methane production linearly decreased (up to 8%) and nitrogen retention linearly increased (up to 45%) with seed meal supplementation level. Therefore, this byproduct might be an effective modulator of rumen fermentation and ruminant nutrition, though the mechanisms involved are not clear.
Feeding of mangosteen peel powder to lactating cows (300 g/d) can decrease methane production by 14% with a drastic decrease of rumen protozoa, while other representative rumen microbes are not affected [47]. Since mangosteen contains not only saponins but also condensed tannins, microbial and fermentation changes might be due to these two secondary metabolites.
Polyphenols in chickpea husk (abundantly available in southern and western Asia) exert antibacterial activity mainly against Gram-positive bacteria [48]. Rats fed chickpea husk at the 5% level showed an altered hindgut bacterial community, based on different DGGE banding patterns [49]. The authors also found that chickpea husk extract exhibited antioxidative activity, measured as free radical scavenging activity and lipid peroxidation. In fact, rats fed chickpea husk had lower thiobarbituric acid reactive substance (TBARS) values in their blood plasma, suggesting the potency of this byproduct as a health-promoting agent in animals [49]. These favorable effects of chickpea husk are considered to be due to the presence of tannins, which can have different impacts depending on the molecular species involved (i.e. source plants, cultivars and growing region) [50].
Asia is the origin of many plants that are sources of essential oils. As a byproduct of essential oil production, leaf meal of Eucalyptus camaldulensis has attracted attention due to its ability to decrease the rumen ammonia level (by 34%) when fed to swamp buffaloes (120 g/d), possibly through the action of 1,8-cineol [51]. It is therefore proposed as another possible manipulator of protein and amino acid degradation in the rumen, which might save feed nitrogen. Since essential oils are generally expensive, their byproducts (residues of oil extraction) such as the above leaf meal are one option recommended for practical use.
New additive candidates from Asian agricultural byproducts have been explored for use in decreasing rumen methane and ammonia, with in vitro evaluation often used for initial screening. This evaluation is quick, quantitative, and very useful for defining the mechanisms involved in the efficacy of a candidate material. However, as effects observed in vitro are generally larger than those obtained in vivo, a final recommendation should only be made after detailed evaluation in a series of feeding studies.
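To illustrate how such an in vitro screen might be quantified, the following minimal Python sketch fits a four-parameter logistic dose-response curve to batch-culture methane data. All data values and parameter names here are hypothetical placeholders, not measurements from the studies cited above.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical batch-culture screen: additive dose (mg/mL of culture fluid)
# versus methane production relative to an untreated control (% of control).
dose = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8])
ch4_pct = np.array([100.0, 95.0, 84.0, 66.0, 48.0, 41.0])

def logistic4(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Fit, skipping the zero dose so the power term stays well-defined
# even if the optimizer explores negative Hill slopes.
popt, _ = curve_fit(logistic4, dose[1:], ch4_pct[1:],
                    p0=[40.0, 100.0, 0.2, 1.0], maxfev=10000)
bottom, top, ic50, hill = popt
print(f"Estimated IC50: {ic50:.3f} mg/mL, maximal inhibition: {100 - bottom:.1f}%")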
Easily digestible fibers as boosters of fiber degraders
Chickpea and lablab bean husks
Fibers are not always efficiently degraded in the rumen, due to the complexity of fiber structure and components and to a rumen microbiota that is not always optimally adapted. Recently, some easily degradable fibers have been proposed to modulate the rumen microbiota toward quick optimization of the developing fiber-degrading consortia [52]. We have found that husks from a few species of local beans (chickpea and lablab bean) show high potency in improving rumen fermentation [52,53]. The functionality of these husks is summarized in Table 4. These fiber sources can be considered a replaceable fibrous feed, as well as a booster of the degradation of the main forage. Indeed, these fiber sources can be characterized as easily digestible [11,12].
Easily digestible fiber sources might promote the rapid growth of fibrolytic microbial biomass, which in turn facilitates the digestion of the other fiber in the rumen. Ammonia-treated barley straw and hay [11] have been used as sources of easily digestible cellulose and/or hemicellulose. Unmolassed sugar beet pulp [12,54], citrus pulp and dried grass [12], ammonia-treated rice straw [55] and soybean hull [56] are also sources of easily digestible fiber. However, their properties have not been fully characterized, especially in relation to the activation of fibrolytic rumen microbes.
It is imperative to determine whether the rumen bacteria that are activated by supplemental fiber correspond to the bacteria responsible for main forage digestion [53]; otherwise, this fiber cannot be considered a booster of main forage degradation. In this regard, local bean husks seem ideal for the enhancement of rice straw digestion, as they increased the ruminal abundance of the representative fibrolytic bacterium Fibrobacter succinogenes [53], whose importance in the degradation of grass forage such as rice straw has been extensively studied [57][58][59][60][61][62][63][64] and is widely accepted [65,66]. Sugar beet pulp, another easily digestible fiber that finds popular use in several countries, was eliminated in the initial screening due to its failure to activate F. succinogenes [53].
Specific activation of F. succinogenes by selected materials (chickpea husk and lablab bean husk) was confirmed in a series of in situ and in vitro studies [52,53]. Quantitative PCR indicated that these fiber sources were heavily colonized by F. succinogenes. Pure cultures of several different strains of F. succinogenes revealed growth stimulation after addition of the bean husks as the sole carbon substrate.
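As a concrete illustration of the quantitative PCR step mentioned above, the sketch below computes the relative abundance of a target bacterium by the standard 2^(-ΔΔCt) method; the Ct values and the example interpretation are hypothetical and are not taken from [52,53].

# Minimal sketch of relative quantification by the 2^(-ΔΔCt) method,
# as commonly used to compare the abundance of a target bacterium
# (e.g. F. succinogenes) between a treated and a control sample.
# All Ct values below are hypothetical.

def relative_abundance(ct_target_treat, ct_ref_treat,
                       ct_target_ctrl, ct_ref_ctrl):
    """Fold change of target vs. reference gene, treatment vs. control."""
    d_ct_treat = ct_target_treat - ct_ref_treat   # delta-Ct, treated sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl      # delta-Ct, control sample
    dd_ct = d_ct_treat - d_ct_ctrl                # delta-delta-Ct
    return 2.0 ** (-dd_ct)

# Example: husk-supplemented diet vs. control, total bacterial 16S as reference.
fold = relative_abundance(22.1, 15.0, 22.7, 15.1)
print(f"Relative abundance (treated/control): {fold:.2f}-fold")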
Finally, a digestion trial, in which each type of husk was supplemented at 10%, was conducted to evaluate the husks as digestion boosters for a rice straw-based diet [53]. The digestibility of acid detergent fiber was 3.1-5.5% greater in diets supplemented with chickpea husk or lablab bean husk than in the control. Total short chain fatty acid levels were higher in sheep fed the lablab bean husk-supplemented diet than in sheep fed the other diets, while acetate levels were higher with the lablab bean husk-supplemented diet than with the control diet. The ruminal abundance of F. succinogenes was 1.3-1.5 times greater with diets supplemented with chickpea husk or lablab bean husk than with the control diet. These results suggest that bean husk supplementation might improve the nutritive value of a rice straw diet by stimulating the growth of fibrolytic bacteria, represented by F. succinogenes. Regarding the use of chickpea husk, the selection of cultivar may be important, because some cultivars (e.g. chickpea husk from western Asia) show a higher content of tannin, which can inhibit fibrolytic bacteria and their enzymes.
Soybean hull
Soybean hull (soybean husk) is a popular feed ingredient that is partly interchangeable with main forages (up to 25-30% of dry matter intake) for lactating dairy cows without negatively affecting fermentation, digestion or production performance [67]. Soybean hull activated representative rumen cellulolytic and hemicellulolytic bacteria in a pure culture study, and growth stimulation of Prevotella ruminicola was notable after incubation with the water-soluble fraction of soybean hull (Yasuda et al. unpublished results). Therefore, this familiar feed should be reevaluated for its potency in activating specific but important rumen bacteria and further examined to optimize its usage. Soybean hull also has unidentified functions that can modulate hindgut microbiota and fermentation in monogastric animals. Rats fed a diet containing 5% soybean hull showed a higher abundance of lactobacilli, leading to a higher lactate level and lower pH in the cecum in comparison with a control diet containing 5% cellulose; this was partly explained by the presence of oligosaccharides in soybean hull (Htun et al. unpublished results). These results indicate the suitability of this material for non-ruminant animals, even companion animals such as dogs, as reported by Cole et al. [68], who valued the hull as a dietary fiber source.
Conclusions
Representative materials and components showing rumen modulatory effects, many of which can be obtained from Asian agricultural products, were introduced in this review. We focused on inhibition of methane production and protein degradation, and on stimulation of fiber digestion. Evaluation of such byproducts and their components should include mechanistic analyses together with practical feeding trials. Since the availability of candidate byproducts may depend on the region, cost-effective use of individual byproducts should be developed locally. Once the functional potency and a sufficient supply of candidate byproducts can be globally confirmed, these byproducts hold promise as rumen modulators to improve rumen fermentation and enable safer, healthier, more efficient and environmentally friendly production of ruminant animals.
|
2023-01-29T15:24:33.523Z
|
2016-12-01T00:00:00.000
|
{
"year": 2016,
"sha1": "bcceba8fd385c98bf96a1989eb7bd3552fe889ca",
"oa_license": "CCBY",
"oa_url": "https://jasbsci.biomedcentral.com/track/pdf/10.1186/s40104-016-0126-4",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "bcceba8fd385c98bf96a1989eb7bd3552fe889ca",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
}
|
36005181
|
pes2o/s2orc
|
v3-fos-license
|
Wavelength Tunability of Ion-bombardment Induced Ripples on Sapphire
A study of ripple formation on sapphire surfaces by 300-2000 eV Ar+ ion bombardment is presented. Surface characterization by in-situ synchrotron grazing incidence small angle x-ray scattering and ex-situ atomic force microscopy is performed in order to study the wavelength of ripples formed on sapphire (0001) surfaces. We find that the wavelength can be varied over a remarkably wide range (nearly two orders of magnitude) by changing the ion incidence angle. Within the linear theory regime, the ion-induced viscous flow smoothing mechanism explains the general trends of the ripple wavelength at low temperature and incidence angles larger than 30°. In this model, relaxation is confined to a few-nm thick damaged surface layer. The behavior at high temperature suggests relaxation by surface diffusion. However, strong smoothing is inferred from the observed ripple wavelength near normal incidence, which is not consistent with either surface diffusion or viscous flow relaxation.
I. INTRODUCTION
Energetic particle bombardment on surfaces is known to produce one-dimensional (ripples or wires) and zero-dimensional (dot) structures at the submicron or nano-scale by a self-organization process. Recently, significant experimental and theoretical effort has been expended in order to develop ion bombardment patterning methods for the production of periodic nanostructures on various substrates. 1,2,3,4,5,6,7,8 These studies have demonstrated the potential to tailor surface morphology and related surface properties for novel optoelectronic and spintronic applications. 9,10 In addition, recent work has provided new insight into the mechanisms of the instability-driven self-organization process. 11 A significant milestone in our understanding of the origins of a self-organized ripple topography formed by ion sputtering is the work of Bradley and Harper (BH) in which they proposed a linear continuum equation to describe the main features of ripple formation. 12 The main idea of BH is that smoothing and roughening processes have different wavelength dependence, leading to a preferred wavelength where the surface amplitude grows the most rapidly.
However, certain experimentally observed features, such as the saturation of the ripple amplitude and the appearance of kinetic roughening, are not predicted by the linear BH theory. 13,14 An extension of the linear BH theory into the non-linear regime has been proposed in order to avoid these shortcomings, 11 resulting in a noisy version of the Kuramoto-Sivashinsky equation, Eq. (1). The υ 0 term, which represents the average erosion rate of the unperturbed planar surface, can be neglected in Eq. (1), since it does not affect the process of ripple formation. The surface height h is then in a coordinate system that moves with the average surface during the erosion process. η(x, y, t) is a Poisson noise term, uncorrelated in space and time, related to random fluctuations in the flux of the incoming ions.
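Eq. (1) itself did not survive extraction in this version. For reference, a commonly used anisotropic form of the noisy Kuramoto-Sivashinsky equation, consistent with the coefficients ν, K and λ discussed below, is (a reconstruction based on the standard literature rather than a verbatim copy of the paper's Eq. (1); possible higher-order terms such as those with coefficients ξ x and ξ y are omitted):

\partial_t h = -\upsilon_0 + \nu_x\,\partial_x^2 h + \nu_y\,\partial_y^2 h - K_{xx}\,\partial_x^4 h - K_{yy}\,\partial_y^4 h + \frac{\lambda_x}{2}(\partial_x h)^2 + \frac{\lambda_y}{2}(\partial_y h)^2 + \eta(x, y, t).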
Within this theory, ion sputtering produces periodic modulated features (correlated lat-eral ordering) that arises from a competition of a roughening instability mechanism and surface relaxation. A roughening mechanism that often dominates the surface morphology is curvature-dependent sputtering, which is based on the linear cascade approximation first proposed by Sigmund. 15 However, certain compounds, such as GaSb and InP, 9,16 and elemental materials with refractory-metal seeding, 17 may also exhibit island agglomeration of excess elements by a process related to preferential sputtering.
In contrast, a wider range of relaxation mechanisms has been proposed in order to explain various experimental observations: (i) surface diffusion (SD) mediated smoothing has been proposed to explain the temperature and ion flux dependence of the ripple wavelength in the high temperature regime; 2 the other mechanisms considered below are SES and ion-enhanced viscous flow (IVF). The linear regime of Eq. (1) corresponds to λ x = λ y = 0 and ξ x = ξ y = 0, and in this article our discussion of the wavelength tunability of ion-bombardment induced sapphire ripples is confined to this linear regime. The linear terms with coefficients ν x and ν y represent the curvature dependent ion erosion rates, and K xx and K yy are coefficients representing the surface smoothing terms.
Linear stability analysis indicates that the establishment of a periodic ripple structure across the surface depends on the balance between the curvature dependent roughening and surface smoothing mechanisms. 12 Two modes of rippled morphology can be induced by ion bombardment, with ripple wavevectors parallel or perpendicular to the projection of the ion beams. Regardless of the respective smoothing mechanism (SD, SES, IVF), the wavelengths of ion sputtered ripples with orthogonal orientations, ℓ x (parallel) and ℓ y (perpendicular), are generally expressed as ℓ x = 2π(2K xx /|ν x |) 1/2 [Eq. (2)] and ℓ y = 2π(2K yy /|ν y |) 1/2 [Eq. (3)]. The minimum of ℓ x and ℓ y determines which orientation dominates the ion-induced ripple topography. In Eqs. (2) and (3), the coefficients ν x and ν y of the curvature dependent roughening terms are given (following Ref. 11) in terms of a coefficient F that is proportional to the local sputter yield Y(θ). In these expressions, d is the ion energy deposition depth, σ and µ are the ion energy distribution widths parallel and perpendicular to the incoming ion beams, J is the ion flux per area, p is a material constant depending on the surface binding energy U 0 and the scattering cross-section, 15 and n is the atomic density of the substrate.
For the SD smoothing mechanism, thermally activated surface diffusion induces surface smoothing during ion sputtering; if surface self-diffusion is isotropic, then K xx,SD = K yy,SD = K SD . Here, D s is the surface self-diffusivity, which has an Arrhenius temperature dependence. For the SES mechanism, the smoothing coefficient is K yy,SES = F d 3 /24. For the IVF model, the ion-enhanced surface viscous flow within a thin ion-damaged layer dominates the surface smoothing; hence K IV F = γd 3 /η s , where the surface tension γ and the viscosity η s of the damaged layer are taken to be constant and isotropic. The depth of the damaged layer is taken to be equal to the ion penetration depth d. 19 In the following sections, the data analysis and discussion of the wavelength tunability of ion-bombardment induced sapphire ripples are based on Eqs. 2-11 for the ripple wavelength.
Additional smoothing mechanisms beyond SD, SES and IVF are considered in section V.
III. EXPERIMENTAL
The ripples are produced on sapphire (0001) in a custom surface x-ray ultra-high vacuum chamber. The x-ray flux after the Si (111) monochromator crystal is 2×10 12 photons/sec at a wavelength of λ=1.192 Å, with a beam size of 0.5 mm × 0.5 mm. In the schematic representation of the x-ray measurement geometry shown in Fig. 1, the z axis is always taken to be normal to the sample surface, and the y axis along the projection of the incident x-ray beam onto the surface. k i and k f are the wave vectors of the incoming and scattered x rays, respectively.
The components of the scattering momentum transfer Q = k f − k i can be expressed by the glancing angles of incidence (α i ) and exit (α f ) with respect to the surface (x-y plane), and the in-plane angle ψ.
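For reference, a standard parameterization of the momentum transfer components in this geometry, with k = 2π/λ, is the following (an assumption consistent with the axis definitions above, not necessarily the exact sign convention of the original figure):

Q_x = k\cos\alpha_f\sin\psi, \qquad Q_y = k\,(\cos\alpha_f\cos\psi - \cos\alpha_i), \qquad Q_z = k\,(\sin\alpha_f + \sin\alpha_i).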
In the GISAXS geometry, the incident or exit x-ray beams are fixed near the critical angle for total external reflection (0.2° for sapphire). A 320-pixel linear position sensitive detector (PSD) is positioned along the x axis at the angle α f with respect to the surface, in order to collect in-plane scattered x rays. In terms of scattering momentum transfer, the PSD acquires a range of Q x at constant Q z and Q y . Time-resolved GISAXS provides access to the evolving wavelength, shape and amplitude of surface ripples. For a full reciprocal mapping (Q x vs. Q z ), an α i = α f reflection mode scan is performed.
IV. RESULTS
In this part, subsections A through D describe a series of systematic investigations of the dependence of sapphire ripple characteristics (wavelength, orientation) on experimental parameters, including ion energy, ion incidence angle and temperature. It is also clear in Fig. 4 that the two satellite peaks develop in an unequal way as irradiation proceeds. After 30 minutes, the peak on the positive Q x side is noticeably larger than the one on the negative Q x side. At 40 minutes, the larger peak is several times more intense than the smaller one. This diffuse intensity asymmetry was also observed in the GISAXS study of ion-eroded SiO 2 by Umbach et al. 19 Fig. 5 shows an asymmetric saw-tooth model profile, which is used as a simplified approximation to the ripple shape. Here, the parallel component of the incident ion beam is along the -x direction, as defined above. We note that the term ∂h/∂x in Eq. (1), representing the local surface slope, has opposite signs on the two sides of the solid saw-tooth. Thus, off-normal incidence will produce different erosion rates on positive and negative slopes.
The model predicts that the unbalanced erosion will make the ripples move like waves across the surface in a direction opposite to the projection of the incident beam along the surface. 12 However, we note that a recent study of ripples formed on ion-bombarded glass surfaces showed forward propagation of ripples. 23 Nonlinear terms of the form (∂h/∂x)(∂ 2 h/∂x 2 ) produce the asymmetric shape, which is observed in our x-ray diffuse scattering measurements. 11 Therefore, the appearance of an asymmetric GISAXS pattern indicates the onset of this lowest order non-linear term.

B. Ripple Wavelength Variation with Ion Energy

Fig. 6 shows the observed dependence of the ripple wavelength ℓ x on ion energy ε for ion sputtered sapphire at low temperature, 300 K (a), and high temperature, 1000 K (b), respectively. This series of sapphire ripples is obtained at 45° off-normal ion incidence. In Fig. 6(a), square and circle symbols represent the wavelength of ripples produced by the high-flux RF plasma ion source and the low-flux ion gun, respectively. The sapphire ripple wavelength increases with ion energy at low temperature, which is consistent with observations for ion eroded SiO 2 , GaAs and Si surfaces. 19,24,25 The data obtained from the high/low flux ion sources overlap within experimental error at both 500 eV and 1000 eV, which indicates that the ripple wavelength is independent of the incident ion flux at low temperature. A non-linear least squares fit gives a power law coefficient of p=0.71 for the dependence of the wavelength on ion energy (ℓ x ∼ ε p ). Also plotted are curves corresponding to p=1 and 0.5 for comparison.
In Fig. 6(b), the ripple wavelength decreases with ion energy. A non-linear least squares fit gives a power law coefficient of p=-0.44. Also plotted are curves corresponding to p=-0.25 and -0.75 for comparison.
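A power-law fit of this kind takes only a few lines of code. The Python sketch below uses hypothetical wavelength-energy points standing in for the Fig. 6(a) data (the measured values are only available in the figure) together with scipy's non-linear least squares:

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (energy, wavelength) data; energies in eV, wavelengths in nm.
energy = np.array([300.0, 500.0, 800.0, 1000.0, 1500.0, 2000.0])
wavelength = np.array([28.0, 41.0, 57.0, 66.0, 88.0, 108.0])

def power_law(eps, a, p):
    """l(eps) = a * eps**p, the functional form assumed in the text."""
    return a * eps ** p

popt, pcov = curve_fit(power_law, energy, wavelength, p0=[1.0, 0.7])
a, p = popt
p_err = np.sqrt(pcov[1, 1])
print(f"fitted exponent p = {p:.2f} +/- {p_err:.2f}")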
A general formula for a low temperature ripple wavelength along the dominant orientation x-axis can be expressed from Eq. (2), (4) and (9), based on the SES model (section II).
Taking |ν x | ∼ F d and K xx,SES ∼ F d 3 , we obtain from Eq. (2) the dependence of the wavelength on ion energy as ℓ x ∼ d [Eq. (12)]. The dependence of d on ε is quantitatively accessible with the aid of the ion-collision simulator SRIM. 26 It indicates that d varies as ε p with p=0.48 for α-sapphire at an incidence angle of 45°. The p=0.48 obtained from Eq. (12) matches the observed wavelength-ion energy relation in Fig. 6(a) reasonably well. However, a quantitative analysis (as detailed in section IV-C) based on Eq. (12) predicts ripple wavelengths an order of magnitude smaller than our measured values of ℓ x , indicating that the SES mechanism, which contains no adjustable parameters, does not account for the observed ripple wavelength at low temperature. 19 A specific expression for the ripple wavelength based on the ion-enhanced surface viscous flow (IVF) model 27 can be derived from Eq. (2), (4), (7) and (11) as an extension to Eq. (12). Inserting K IV F = γd 3 /η s , |ν x | ∼ F d and F = JY(θ)/nc into Eq. (2), we obtain the ion energy dependence for the IVF model [Eq. (13)], in which the IVF smoothing coefficient K IV F replaces K xx,SES of Eq. (12). The fitted power law coefficient of p=0.71 in Fig. 6(a) and that of the IVF model (p=0.67) for the dependence of the wavelength on ion energy at low temperature are indistinguishable within experimental error. Moreover, we have observed that the RHEED pattern of the sapphire (0001) surface disappears upon ion irradiation at room temperature, confirming that ion bombardment amorphizes the surface, or at least induces a layer with a very high defect density. Similar surface amorphization under ion bombardment has been noted in studies of ion sputtered Si and InP. 16,29 The key idea of the IVF model is that this thin layer can relax by a collective motion ("flow"), driven by surface tension.
The SD model can be useful in predicting the high temperature ripple wavelength. The formula for the wavelength with its wavevector along the x-axis can be expressed from Eq. (2), (4), (7) and (8) [Eq. (14)], in which only d and Y(θ) depend on ion energy. Thus, the SD model gives ℓ ∼ ε p with p=-0.55 for sapphire ripples at high temperature, which is consistent with the energy dependence observed at high temperature in Fig. 6(b). A more refined model that combines both the IVF and SD mechanisms is given in section IV-D and is also compatible with the data in Fig. 6(b). Other variations of the SD model that include ion-bombardment effects are discussed in section V. We also note that the sapphire surface exhibits a well developed RHEED pattern after etching at 1000 K, indicating a higher degree of surface crystallinity at this temperature.
C. Dependence of Ripple Wavelength on Ion Incidence Angle (Low Temperature)
The observed wavelength-angle phase diagram for sapphire ripples produced by 600 eV Ar + bombardment at room temperature is displayed in Fig. 7. The wavelength of the sapphire ripples can be varied over a remarkably wide range (30 nm to 2 µm) by changing the incidence angle. Below 40°, the ripple wavelength is particularly sensitive to the incidence angle, while the wavelength is relatively constant in the middle range from 40° to 65°. Ion incidence at an angle larger than 70° rotates the orientation of the ripples by 90°.
The theoretical wavelength-angle phase diagram for sapphire ripples produced by 600 eV Ar + bombardment at room temperature is also shown in Fig. 7. The IVF model prediction agrees with the observed ripple orientation over the whole angular range. However, the IVF mechanism does not predict the observed strong wavelength-angle dependence below 40°. This point will be discussed further in section V.
Region II: This region is characterized by K xx,SES < 0, which prevents the appearance of ℓ x ripples in the SES model; thus the SES model predicts only ℓ y ripples in this region. In the IVF model, the ℓ x wavelength increases to infinity at the region II/III boundary near 65°. Thus, the IVF model predicts that the dominant ripples will switch their wavevector orientation to the y-direction in this region. This boundary could shift toward higher angles, since it is very sensitive to changes in the simulated parameters, such as d, d σ and d µ . The experimental observation is that ℓ x ripples are still observed, but longer-scale order in the orthogonal direction begins to build up. Overall, the behavior in this region agrees reasonably well with the prediction of the IVF model.
Region III: ν x > 0, ν y < 0, K xx,SES < 0, K yy,SES > 0, K IV F > 0. The ℓ x ripple is not stable in either model, since ν x becomes an effective smoothing term when it is positive.
Near 90°, ℓ y either drops to zero (SES) or increases to infinity (IVF). Again, the IVF model correctly predicts the observed behavior, at least qualitatively.
Ex-situ AFM images in Fig. 8 display surface morphologies obtained at different angles of incidence for ion sputtered sapphire, corresponding to the observed phase diagram in Fig. 7. Figs. 8(a) and 8(b) show images for off-normal incidence at 25°, which produces micron-scale ripples with wavevector parallel to the ion beam direction that are readily visible in the large-scale image [Fig. 8(b)]. In contrast, 55° incidence, shown in Fig. 8(c), produces a well-ordered nanorippled surface with the wavevector parallel to the projection of the incoming ion beam along the surface, with a wavelength similar to that shown in Fig. 3. We note that at the larger scale in Fig. 8(d) the surface roughness is also correlated, with wavevector perpendicular to the incoming ion beam, as predicted by ℓ y in the phase diagram.
Ion incidence at 65° still creates detectable ripples with wavevector parallel to the projection of the incoming ion beam in Fig. 8(e), but obvious submicron furrows oriented along the ion beam direction are observed in Fig. 8(f).

D. Dependence of Ripple Wavelength on Temperature

Fig. 9 shows the observed ripple wavelength dependence on inverse temperature 1/T for two different angles of incidence. The ion energy is 600 eV for both angles of incidence. All samples are preheated to a chosen temperature and then sputtered at this temperature until a well-defined wavelength is established. The ripple wavelength obtained at 45° is constant at low temperature and increases significantly when the temperature rises above 700 K.
We have used the SD mechanism to describe the temperature dependence of the ripple wavelength at 45°. The wavelength varies as ℓ ∼ (T ) −1/2 exp(−∆E/2k B T ), where ∆E is the activation energy for surface diffusion and k B is Boltzmann's constant. 12 However, this expression does not take into account the low-temperature component of ion bombardment induced smoothing. Thus, in Fig. 9, the observed dependence of the ripple wavelength on temperature at the incidence angle of 45° is modeled (solid line) with a modified K xx that combines the ion-induced and thermal smoothing terms, Eq. (15). The calculated wavelengths for 35° incidence based on Eq. (2), (4), (8), (11) and (15) are shown by the dashed line in Fig. 9. A weaker, but still significant, temperature dependence is predicted, which is not observed experimentally (square symbols). Rather, the experimental ripple wavelengths at 35° are independent of temperature. This indicates that the different smoothing mechanisms (i.e. thermal vs. non-thermal mechanisms) have different dependences on the angle of incidence. In particular, the rapid increase in wavelength at low angles is inferred to be due to a non-thermal smoothing mechanism that grows at low angles to dominate over the other mechanisms, but is not included in our present model.
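Eq. (15) is not reproduced in this version. Given the statement below that it combines the IVF and SD smoothing effects, a plausible additive form (an assumption, not a verbatim reconstruction) is:

K_{xx} = K_{IVF} + K_{xx,\mathrm{SD}} = \frac{\gamma d^3}{\eta_s} + \frac{D_s \gamma \Omega^2 \nu_s}{k_B T}, \qquad D_s = D_0\, e^{-\Delta E / k_B T},

where the second term is Mullins' thermal surface-diffusion coefficient, with Ω the atomic volume and ν s the areal density of mobile surface atoms.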
Finally, Fig. 10 shows the observed wavelength-angle phase diagram (ℓ x vs. θ) for sapphire ripples obtained at two temperatures, 300 K and 1000 K. The theoretical curves based on Eq. (2), (4), (8), (11) and (15) are also plotted. The solid line for 300 K is identical to the solid line in Fig. 7, which is based on the IVF model alone. The dashed line for 1000 K includes both SD and IVF smoothing effects, combined as shown in Eq. (15). We note that the experimental ripple wavelength ℓ x exhibits a very large increase and is not sensitive to thermal activation for incidence angles below 40°. Taken together, these observations again indicate a strong non-thermal smoothing mechanism that is not adequately explained by any of the models under consideration.
V. DISCUSSION
The observations in sections IV-B and IV-C indicate that the IVF and SD models fit some of the trends of the wavelength dependence on the experimentally accessible parameters.
However, some characteristics of the ripple formation are beyond the current theories, as described in sections IV-C and IV-D. Fig. 7 shows that the obtained wavelength for incidence angles lower than 40° spans a range of two orders of magnitude, from 30 nm to 2 µm. Figs. 8(a) and 8(b) confirm that ion incidence at 25° only produces surface features at the larger scale. Furthermore, ion sputtering at normal incidence does not roughen the surface at all from the onset of irradiation, which is confirmed by a real-time GISAXS study. This is in contrast to the theoretical behavior, which predicts that the ripple wavelength only increases slightly as normal incidence is approached. If the nonlinear terms λ x and λ y are introduced into the continuum equation describing the surface motion, kinetic roughening 11 is expected in region I of the phase diagram (when λ x · λ y > 0), but such roughening is not observed. These unusual effects lead us to propose that there is an additional smoothing mechanism that dominates the behavior near normal incidence.
We have considered the fact that the theoretical ripple wavelength is very sensitive to the ion range d, so that a small (factor of two) uncertainty in d would have a large (order of magnitude) effect on the calculated wavelength for certain models. A factor that is not taken into account in SRIM simulations is the ion-channeling effect. 30 Another important factor not considered by current models of ion-sputtered surface morphology is ion impact induced lateral mass redistribution. 4,32,33 It can induce a form of surface smoothing that is different from the SD, SES or IVF relaxation mechanisms discussed previously. Impact-induced downhill currents have been identified as the driving force underlying the ultra-smoothness of surfaces resulting from ion assisted film deposition. 33 We also note that the lateral current term is expected to be strongest at low angles of incidence. 4 The behavior of the ripple formation at high temperature can be explained reasonably well by the SD model. However, another important fact observed in our experiments, but not discussed above, is that thermal annealing at 1000 K without ion irradiation does not produce any distinguishable decay of the amplitude of as-prepared ripples, in contrast with previous studies on Si and Ag surfaces. 34,35 This indicates that the surface smoothing at high temperature is not a purely thermally activated surface diffusion. Rather, it is likely to involve the creation of mobile species on the surface during ion bombardment.
Further work on the flux dependence of the ℓ x at high temperature will assist us in clarifying the dominant creation process for mobile species underlying the ion-enhanced surface diffusion. 2
VI. SUMMARY
In summary, the formation and characteristics of ripple morphologies on sapphire surfaces produced by ion sputtering are systematically investigated by in-situ GISAXS and ex-situ AFM. The ripple wavelength can be tuned effectively over a wide range, from 20 to 2000 nm, by changing the ion incidence angle, ion energy and temperature. This provides an easy route to fabricate nanostructured surfaces for exploring novel nanoscale phenomena. The IVF and SD smoothing mechanisms are shown to play an important role in the formation of the sapphire ripple structure. The possible importance of impact-induced lateral currents as a smoothing mechanism should also be investigated further.
ACKNOWLEDGMENTS
We wish to acknowledge the experimental assistance and facility support of Dr. Lin
|
2017-10-02T13:25:22.578Z
|
2006-08-08T00:00:00.000
|
{
"year": 2006,
"sha1": "2c61481f1793e9b8fd02bea843bd9d5080b78840",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0608203",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2c61481f1793e9b8fd02bea843bd9d5080b78840",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
}
|
17489270
|
pes2o/s2orc
|
v3-fos-license
|
Large N_c in Chiral Resonance Lagrangians
In the first part of the talk, I discussed results on the determination of the ratios of the light quark masses from large N_c chiral perturbation theory, to be described elsewhere. The following notes contain material from the second part of the talk, which concerns the implications of large N_c for resonance dominance estimates of the low energy coupling constants in chiral perturbation theory.
Introduction
In the large N c limit, N c → ∞ at fixed scale Λ QCD , the spectrum of QCD is known to consist of an infinite number of stable states. 2,3,4 The degrees of freedom in the corresponding low energy effective theory are the states with masses vanishing in the chiral limit (m u = m d = m s = 0) of large N c QCD, viz. π, K, η and η ′ . 5 The presence of the remaining massive states shows up only indirectly, through their contributions to the low energy expansion of QCD correlation functions. In chiral perturbation theory, these are accounted for in the form of low energy constants that arise at nonleading order in the low energy Lagrangian.
In this note we discuss consequences for the low energy constants in the framework of an explicit realization of this scenario where, however, only the lowest lying resonance states are retained. In Sec. 2, we briefly review the framework of the low energy expansion at large N c . Secs. 3 and 4 deal with the accommodation of explicit resonance degrees of freedom in that setting. Sec. 5 reviews the role of constraints from QCD asymptotic behaviour in the determination of the parameters occurring in the chiral resonance Lagrangian. In Sec. 6, we discuss the implications of our analysis for the standard framework of chiral perturbation theory where N c is not treated as large. The corresponding numerical analysis may be found in Sec. 7. Finally, Sec. 8 contains a discussion of the results and our conclusions.
Low Energy Expansion at Large N c
If the number of colours is treated as large, the low energy effective Lagrangian for QCD involves 9 degrees of freedom. The field variables are collected in a unitary matrix Ũ (x) ∈ U(3), and the extra field shows up as the phase of the determinant of Ũ (x), det Ũ (x) = e iψ(x) . (1) The bookkeeping can be simplified by introducing a counting parameter δ, with powers of the momenta, quark masses and 1/N c weighted according to p = O(√δ), m = O(δ), 1/N c = O(δ). The expansion of the effective Lagrangian starts with a term of order δ 0 , and the individual terms in the Lagrangian are subject to the corresponding counting constraints. According to these rules, the leading order term in the effective Lagrangian involves terms of order N c p 2 and N c 0 p 0 , while the term of order δ collects the N c p 4 , N c 0 p 2 and 1/N c p 0 contributions, etc. For a detailed account of these matters, we refer the reader to Ref. 7. The leading order Lagrangian in this expansion 5,6,7 involves θ, the field conjugate to the winding number density, which on account of the U(1) A anomaly transforms in such a manner that the combination ψ + θ remains invariant under chiral transformations. The external vector (v µ ), axial vector (a µ ), scalar (s) and pseudoscalar (p) fields enter in the expressions for the covariant derivative D µ Ũ and χ. For a suitable choice of the effective variables, i.e. in particular the matrix Ũ and ψ + θ, the individual terms in the effective Lagrangian in Eq. (3) obey what we shall call 'canonical large N c counting rules': these state that terms with a single trace are of order N c , while the occurrence of each additional trace reduces the order of the term by one unit. These rules also apply to terms involving powers of the chiral invariant combination ψ + θ if these are counted like extra traces. 7 The effective Lagrangian in Eq. (5) exclusively involves fields which are of O(1) in the large N c limit. In this case, the rules apply directly to the coefficients of the terms, and we deduce the large N c behaviour of the three coupling constants in the Lagrangian in Eq. (5): F 0 is the pion decay constant in the limit of zero u, d and s quark masses, B 0 is related to the quark condensate in the same limit, B 0 = −⟨0|ūu|0⟩ 0 /F 2 0 , and, in the limit N c → ∞, τ 0 coincides with the topological susceptibility of the corresponding quarkless theory (Gluodynamics) and equips the η ′ with a mass of order 1/√N c . At order δ, the effective Lagrangian involves 11 additional low energy constants. 7 The invariant derivatives D µ ψ, D µ θ and the field strength tensors R µν and L µν are defined in terms of r µ = v µ + a µ and l µ = v µ − a µ . The somewhat queer naming scheme for the coupling constants is chosen so as to facilitate the comparison with the standard framework, see Sec. 6.
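The explicit form of the leading order Lagrangian, Eq. (5), did not survive extraction. For orientation, in the conventions of Kaiser and Leutwyler (Ref. 7) it reads, up to notational details (a reconstruction from the published literature, not a verbatim copy):

\mathcal{L}^{(0)} = \frac{F_0^2}{4}\langle D_\mu \tilde U^\dagger D^\mu \tilde U\rangle + \frac{F_0^2}{4}\langle \chi^\dagger \tilde U + \tilde U^\dagger \chi\rangle - \frac{\tau_0}{2}\,(\psi + \theta)^2,

where ⟨...⟩ denotes the flavour trace and χ = 2B_0 (s + ip).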
(3) obey what we shall call 'canonical large N c counting rules': These state that terms with a single trace are of order N c , while the occurrence of each additional trace reduces the order of the term by unity. These rules also apply to terms involving powers of the chiral invariant combinationψ + θ if these are counted like extra traces. 7 The effective Lagrangian in Eq. (5) exclusively involves fields which are of O(1) in the large N c limit. In this case, the rules immediately apply to the coefficients of the terms and we deduce the following large N c behaviour for the three coupling constants in the Lagrangian in Eq. (5), F 0 is pion decay constant in the limit of zero u, d and s quark masses,B 0 is related to the quark condensate in the same limit,B 0 = − 0|ūu|0 0 /F 2 0 . In the limit N c → ∞,τ 0 coincides with the topological susceptibility of the corresponding quarkless theory (Gluodynamics) and equips the η ′ with a mass of order 1/ √ N c , At order δ the effective Lagrangian involves 11 additional low energy constants and takes the form, 7 The invariant derivatives D µψ , D µ θ and the field strength tensors R µν and L µν are defined by where r µ = v µ + a µ and l µ = v µ − a µ . The somewhat queer naming scheme for the coupling constants is chosen so as to facilitate the comparison with the standard framework, see Sec. 6.
In accordance with the canonical large N c counting rules, the terms in the first line of Eq. (9) are of order N c 0 p 2 and the remaining terms are In the following we are going to show how to obtain estimates for those coupling constants on the basis of a chiral Lagrangian with resonance fields.
Matter Fields
The chiral transformation law for the effective field Ũ (x), Ũ (x) → V R (x) Ũ (x) V L (x) † [Eq. (12)], is at the heart of the construction of the effective Lagrangians in Eqs. (5) and (9). For the matter fields we need to find a corresponding transformation law, such that under transformations of the unbroken symmetry group, V R (x) = V L (x) = V (x), it reduces to the proper transformation law, 8 e.g. Eq. (13),
for a 3 × 3 matrix R collecting a nonet of resonance fields. To this end, introduce a unitary matrix ũ(x) ∈ U(3), such that ũ(x) 2 = Ũ (x) [Eq. (14)]. In order to promote Eq. (14) to a covariant relation, we deduce the corresponding transformation law for the field ũ(x) [Eq. (16)], which involves a unitary matrix T (x) ∈ U(3). Note that for a general chiral transformation, the matrix T (x) depends not only on the transformation matrices V R (x) and V L (x) but also on the effective field Ũ (x). In the special case of a vector transformation V R (x) = V L (x) = V (x), this dependence disappears, however: T (x) reduces to V (x). Eq. (16) does therefore represent one of the possible extensions of the vector transformation law in Eq. (13) to general chiral transformations. It furthermore has the advantage of being right-left symmetric and of preserving the hermiticity of the matter field R(x). (However, the matrix T (x) hardly notices that we are considering unitary matrices: in fact, T is independent of det Ũ and det V R V L † .)
In this representation, the external fields r µ , l µ and χ appear in a set of standard building blocks, 9 as well as in the covariant derivative associated with the transformation law in Eq. (16). Expressed in terms of the lower case effective fields, the leading order Lagrangian in Eq. (5) takes a correspondingly rewritten form. We are now in a position to proceed with the construction of the chiral Lagrangians for the resonance fields.
Resonance Lagrangians
The chiral Lagrangians for the vector (V), axial vector (A), scalar (S) and pseudoscalar (P) resonance fields take the form of Eqs. (20) and (21), 9 where, for convenience, the vector and axial vector resonances are described in terms of antisymmetric tensor fields, R νµ = −R µν . 10,9,11 At order δ, chiral symmetry permits a restricted set of independent contributions to the currents J R . Compared to the standard framework studied in Ref. 9, our resonance Lagrangian involves one genuinely new contribution, in the pseudoscalar current J P . We have denoted the corresponding coupling constant by d 0 , while otherwise we have borrowed the notation of Ref. 9.
In the normalization convention of Eqs. (20) and (21), the resonance fields must be booked as of order √N c . The kinetic terms are then of order N c , in accordance with the canonical large N c counting rules stated in Sec. 2. The coupling constants in the resonance Lagrangian exhibit a corresponding large N c behaviour, so that the terms involving the currents J R are of order N c as well, with the exception of the piece proportional to d 0 , which is of order 1 so as to account for the occurrence of the factor ψ + θ. (The discussion simplifies somewhat for rescaled resonance fields R ′ = R/F 0 = O(1).) When the resonance masses are treated as large, the resonances may be integrated out. 9 By use of the relations in Eq. (17), the result may be cast in a form where L R 1 stands for an expression of the general form of the Lagrangian L 1 in Eq. (9) with specific values of the coupling constants, while the resonance contributions to the coupling constants L i , H 1 and H 2 are all of order N c and are listed in Table 1 (resonance contributions to the low energy coupling constants L i and H i arising at next to leading order in large N c chiral perturbation theory, cf. Eq. (9)). Finally, the contribution proportional to (ψ + θ) 2 may be absorbed in an order 1/N c shift τ 0 → τ P 0 . Note that this correction has the right sign to explain why determinations of τ 0 in the framework of lattice gauge theory 12,13 would lead to values higher than those obtained from phenomenological determinations of the corresponding coupling constant τ P 0 entering in the chiral Lagrangian. It is remarkable that the model fails to generate contributions to the coupling constants Λ 1 and H 0 . With the phenomenological value Λ 2 − 1/2 Λ 1 ≃ 0.16, 14,15 we conclude that the product of the two coupling constants d 0 and d m is negative, d 0 d m < 0. In the following, we are going to adopt the convention d m > 0.
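The entries of Table 1 are not reproduced in this version. For orientation, the classic resonance-saturation results of Ecker et al. (Ref. 9), which the table generalizes (the d 0 and d m terms being the new additions here), read as follows; this is a reconstruction from the literature cited, not a verbatim copy of Table 1:

L_1 = \frac{G_V^2}{8M_V^2}, \quad L_2 = \frac{G_V^2}{4M_V^2}, \quad L_3 = -\frac{3G_V^2}{4M_V^2} + \frac{c_d^2}{2M_S^2}, \quad L_5 = \frac{c_d c_m}{M_S^2},

L_8 = \frac{c_m^2}{2M_S^2} - \frac{d_m^2}{2M_P^2}, \quad L_9 = \frac{F_V G_V}{2M_V^2}, \quad L_{10} = -\frac{F_V^2}{4M_V^2} + \frac{F_A^2}{4M_A^2}.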
Constraints from QCD Asymptotic Behaviour
A way to obtain values for the parameters entering the chiral resonance Lagrangian is to relate them to the observed properties of the lowest lying resonances in the spectrum of QCD. Instead, we prefer to fix a maximal number of those coupling constants by considering the constraints that follow from imposing the proper asymptotic behaviour for massless QCD. 9,11,[16][17][18][19][20][21][22] For the vector and axial vector resonances, two such constraints may be inferred by considering the Weinberg sum rules, 23 and two more by demanding the asymptotic fall-off of the pion vector form factor and of the axial form factor G A (t). 11 These four equations allow us to express the three coupling constants F V , G V , F A in terms of F 0 , and the axial vector meson mass in terms of M V , where we adopt the conventions F V , F A > 0. Inspection of the results in Table 1 shows that this entails the prediction of the coupling constants L 2 , L 9 , L 10 and H 1 in terms of the ratio F 0 /M V . 11 In the scalar and pseudoscalar sector, there exists a constraint analogous to the one following from the first Weinberg sum rule in Eq. (28), 17 as well as one condition from the asymptotic fall-off of the scalar form factor of the pion. 16 The spin 0 counterparts of the relations in Eqs. (29) and (31) fix the scalar couplings (c m > 0) and lead to the prediction of 3L 2 + L 3 , L 5 , L 8 and H 2 in terms of F 0 /M S . The prediction for L 5 remains put also if contributions from the pseudoscalar resonances are allowed, since this coupling constant depends only on the product c m c d , which is fixed by Eq. (35). However, in this case the predictions for 3L 2 + L 3 , L 8 and H 2 are modified, with c m and c d traded for d m . In particular, we find L SP 8 ≥ L S 8 as long as M P ≥ M S , i.e. the contributions from the pseudoscalars tend to increase the value of L 8 . Before turning to the discussion of the numerical implications of the above, let us translate the results obtained so far to the standard framework of chiral perturbation theory, where more independent information on the values of the low energy coupling constants is available.
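The explicit constraint equations (Eqs. (28)-(31)) are missing here. In the standard treatment (again a reconstruction from the literature cited, not a verbatim copy), the two Weinberg sum rules, F_V^2 - F_A^2 = F_0^2 and F_V^2 M_V^2 = F_A^2 M_A^2, together with the form factor conditions F_V G_V = F_0^2 and F_V = 2 G_V, have the solution

F_V = \sqrt{2}\,F_0, \qquad G_V = F_0/\sqrt{2}, \qquad F_A = F_0, \qquad M_A = \sqrt{2}\,M_V,

which, inserted into L_9 = F_V G_V/(2M_V^2), gives L_9 = F_0^2/(2M_V^2) ≈ 7.2 × 10^{-3} for F_0 ≈ 92 MeV and M_V ≈ 770 MeV.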
Implications for the Standard Framework
If the number of colours is not treated as large, the η ′ no longer plays a particular role but is just another of the states which remain massive in the chiral limit. In this case, the framework set up in Ref. 6 provides the adequate description. There, the low energy expansion proceeds in powers of the momenta and light quark masses alone. In the following, we are going to exploit the fact that this framework effectively emerges from the theory discussed previously: we only need to consider it in the particular corner of its domain of validity where the mass of the η ′ is large in comparison with the momenta and the light quark masses, while still being small in comparison with the intrinsic scale of QCD.
To perform the matching procedure it is convenient to explicitly display the dependence on the singlet field ψ by introducing an effective field U (x) ∈ U(3). 7 Because the combination ψ + θ represents a chiral invariant, the field U transforms in the same manner as Ũ in Eq. (12). By its definition and Eq. (1), it is further subject to a constraint on its determinant and does therefore describe the desired 8 degrees of freedom. To further simplify the discussion, we now switch off the singlet parts of the external fields and set them to zero. When the η ′ mass M 0 of Eq. (8) is treated as large in comparison with the momenta and quark masses, the solution of the equation of motion for the singlet field ψ implies relations from which it is a simple matter to convince oneself that the Lagrangian in Eq. (5) reduces to the standard form 6 valid for ⟨U † D µ U ⟩ = 0 (recall that we switched off the singlet external fields). Finally, the η ′ generates a contribution to the coupling constant L 7 . 14,24,7 More information on the relation between the coupling constants in the two versions of the theory may be found in Ref. 7; in particular, the contributions generated by chiral loops, as well as those arising from nonvanishing singlet external fields, are given there.
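The formula for the L 7 contribution referred to above is missing in this version. The standard result (a reconstruction from the type of analysis in Refs. 14, 24, 7, not a verbatim copy) is

L_7 = -\frac{F_0^2}{48\, M_0^2},

where M_0 is the η ′ mass in the chiral limit; with F_0 ≈ 92 MeV and M_0 = 900 MeV this gives L_7 ≈ -2.2 × 10^{-4}, of the right size compared with phenomenological determinations.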
Numerical Results
For the numerical evaluation we employ values (in MeV units) for F 0 , M V , M S and M P with intentional similarities to F π , M ρ , M a 0 and M π ′ , respectively.
c Here, the superscript R is meant to indicate the sum over all resonance contributions. However, the relations (46) also hold for the individual contributions, R = V, A, S, P. Table 2. Numerical values for the coupling constants L i of the order p 4 chiral Lagrangian, in units of 10 −3 . Rows 1, 2 and 3 list values for the L r i (µ) at the scale µ = 770 MeV (M ρ ) obtained on the basis of phenomenology 6,27 and lattice calculations. 28 Row 4 displays the resonance estimates for the L i obtained in Ref. 9. The last column shows the numbers obtained in the present work, for three different values of the parameter d m .
According to Eq. (26), the model fails to generate a contribution of the type Λ 2 in the absence of contributions from pseudoscalar resonances, and we therefore set Λ P 2 = 0 for d m = 0. Otherwise, we adopt the phenomenological value Λ P 2 → Λ 2 − 1/2 Λ 1 = 0.16. 14,15 This coupling constant generates a shift of formal order N c in the coupling constant L 7 .
In the corresponding expression, τ P 0 has been eliminated in favour of the mass of the η ′ in the chiral limit, and the numbers given in Table 2 correspond to our favoured central value 26 of M 0 = 900 MeV.
Discussion and Conclusions
In Table 2, our results are compared with the phenomenological and lattice determinations (the errors quoted for the lattice values in Table 2 follow if the individual errors listed in Ref. 28 are added in quadrature). In the present framework, the predictions are simply an algebraic consequence of the absence of multiple trace terms in the resonance Lagrangian in Eqs. (20) and (21). The coupling constants L 1 , L 2 , L 9 , L 10 gain contributions exclusively from the vector and axial vector resonances (in view of the considerations in Sec. 5, those should indeed be viewed as one entity). The predictions exhibit an impressive agreement with the values from Ref. 6 and, to a lesser extent, also with those from Ref. 27. d We should clarify at this point that the difference between the results of Ref. 9 and the present investigation is easily traced back to a difference in the numerical value of F V adopted in that reference, namely the value that follows from the observed ρ 0 → e + e − rate. 25 Accordingly, the authors of Ref. 9 do not make use of the relation (31), which in fact is known to be subject to corrections. 22 The antisymmetric tensor fields V µν and A µν simply do not notice the presence of the additional singlet pseudoscalar field.
The prediction for L 5 represents a scalar counterpart to the one for L 9 , but is clearly seen to work less well, in particular when compared to the lattice value. 28 An obvious difference is seen in the magnitude of the two coupling constants as well: In the present picture, this fact finds an explanation in the difference of the vector and scalar meson masses. The authors of Ref. 30 present theoretical arguments in favour of a scalar mass of the order of 1.5 GeV, which would help to resolve the discrepancies for L 5 . In any case, it should be noted that the coupling constant L 5 is known to possess a strong scale dependence 6 and thus varies significantly over the range µ = 500, . . . , 1000 MeV. With the central value of Ref. 6, the particular value 2.2 · 10 −3 is reached for µ ≃ M η .
In the absence of contributions from the nonet of pseudoscalar resonances (d m = 0), our formulas imply L 8 = (1/4) L 5 , leading to a rather low value for L 8 which, however, is in good agreement with the value from the lattice. 28 In view of the discrepancy with L 5 , the combination 2L 8 − L 5 turns out significantly negative, however, when compared with its phenomenological value; reconciling the two is of course possible, at the cost of the validity of the relations in Eqs. (34) and (35), cf. also Eq. (51). Though of formal order N c 2 , the prediction for the coupling constant L 7 is known not to be extraordinarily large, and neither is the η ′ extraordinarily light. The contributions from the additional pseudoscalar nonet lead to an additional small negative shift in L 7 . Note that the model should better predict a rather decent value for this constant, because for L 7 (and, for that matter, also L 3 ) there is no scale dependence to be blamed for a discrepancy. In the case of L 3 , which is dominated by the vector and axial vector contributions, the phenomenological values 6,27 are not conclusive about the need for extra pseudoscalar contributions. In summary, the resonance dominance estimates for the coupling constants L i have been demonstrated to lead to a rather coherent picture, also when the implications of large N c are taken seriously from beginning to end. The model involves a remarkably low number of adjustable parameters, and phenomenology appears to favour the inclusion of the contributions from the pseudoscalar π ′ nonet. Note added: After the submission of the original manuscript it was pointed out to me that the list of references constituted only an incomplete account of the recent work on resonance Lagrangians and large N c QCD. The present note aims at improving the work at hand in this respect. Work closely related to the present article is described in Refs. 32, 33, 34. The large N c approximation to QCD was tested qualitatively and quantitatively in Refs. [35][36][37][38]; these references in particular introduced the concept of the 'minimal hadronic ansatz', denoting the smallest set of states required to match a given short distance behaviour. Large N c methods were further successfully applied to a wide range of phenomena, such as the decay of pseudoscalars into lepton pairs, 39 the evaluation of electroweak contributions to the pion mass difference, 40 K 0 −K 0 mixing, 41 the weak matrix elements Q 7 and Q 8 , 42,43 rare kaon decays, 44,45 electroweak hadronic contributions to the muon g − 2, 46 and the determination of ǫ ′ /ǫ. 47 For reviews of the subject we refer the reader to Refs. 48-52. In the meantime, the work announced in Ref. 31 has appeared. 53
Gsα-dependent signaling is required for postnatal establishment of a functional β-cell mass
Objective: Early postnatal life is a critical period for the establishment of the functional β-cell mass that will sustain whole-body glucose homeostasis during the lifetime. β cells are formed from progenitors during embryonic development but undergo significant expansion in quantity and attain functional maturity after birth. The signals and pathways involved in these processes are not fully elucidated. Cyclic adenosine monophosphate (cAMP) is an intracellular signaling molecule that is known to regulate insulin secretion, gene expression, proliferation, and survival of adult β cells. The heterotrimeric G protein Gs stimulates the cAMP-dependent pathway by activating adenylyl cyclase. In this study, we sought to explore the role of Gs-dependent signaling in postnatal β-cell development.
Methods: To study Gs-dependent signaling, we generated conditional knockout mice in which the α subunit of the Gs protein (Gsα) was ablated from β cells using the Cre deleter line Ins1Cre. Mice were characterized in terms of glucose homeostasis, including in vivo glucose tolerance, glucose-induced insulin secretion, and insulin sensitivity. β-cell mass was studied using histomorphometric analysis and optical projection tomography. β-cell proliferation was studied by Ki67 and phospho-histone H3 immunostaining, and apoptosis was assessed by TUNEL assay. Gene expression was determined in isolated islets and sorted β cells by qPCR. Intracellular cAMP was studied in isolated islets using HTRF-based technology. The activation status of the cAMP and insulin-signaling pathways was determined by immunoblot analysis of the relevant components of these pathways in isolated islets. In vitro proliferation of dissociated islet cells was assessed by BrdU incorporation.
Results: Elimination of Gsα in β cells led to reduced β-cell mass, deficient insulin secretion, and severe glucose intolerance. These defects were evident by weaning and were associated with decreased proliferation and inadequate expression of key β-cell identity and maturation genes in postnatal β cells. Additionally, loss of Gsα caused a broad multilevel disruption of the insulin transduction pathway that resulted in the specific abrogation of the islet proliferative response to insulin.
Conclusion: We conclude that Gsα is required for β-cell growth and maturation in the early postnatal stage and propose that this is partly mediated via its crosstalk with insulin signaling. Our findings disclose a tight connection between these two pathways in postnatal β cells, which may have implications for using cAMP-raising agents to promote β-cell regeneration and maturation in diabetes.
INTRODUCTION
Pancreatic β cells secrete the blood-glucose-lowering hormone insulin and play a crucial role in controlling whole-body glucose homeostasis.
A deficit in the number of functional β cells leads to insulin deficiency, elevated blood glucose levels, and the emergence of diabetes.
Regenerative medicine strategies aimed to replace lost or dysfunctional β cells are currently viewed as promising therapies to treat this disease. Some approaches propose endogenous β-cell regeneration by stimulating the proliferation/survival of residual β cells, whilst others propose transplantation of substitute β cells created in the laboratory from other cell sources. Progress on these two fronts relies on our knowledge of agents and molecular pathways amenable to be manipulated to promote β-cell expansion and/or to achieve β-cell functional maturation. Early postnatal life is a critical period for acquiring the appropriate number of functional β cells needed to sustain the metabolic needs of the adult organism [1]. The β-cell population expands dramatically during the perinatal period due to increased proliferation [2-4], which rapidly declines until reaching low replication values maintained throughout adulthood (cells in cycle: ≈1% in rodents and <0.2% in humans) [5]. In concert with their expansion, neonatal β cells upregulate the expression of identity and functionality genes and develop the capability to regulate insulin secretion in response to high glucose (GSIS), which is the hallmark of their mature state [6-9]. Therefore, early postnatal life (in mice, between birth and weaning) is a critical period in β-cell development that can provide information on the identity of central regulators of β-cell growth and function.
Cyclic adenosine monophosphate (cAMP) is a common and versatile intracellular signaling molecule. In β cells, cAMP has been implicated in the stimulus-insulin secretion coupling process [10-12], in the expression of key β-cell markers such as Insulin and the transcription factors Pdx1 and Mafa [13-15], as well as in β-cell proliferation and survival [16-20]. cAMP is generated from ATP by adenylyl cyclases, which can be regulated by G-protein coupled receptors (GPCRs) that either stimulate this enzyme via Gsα or inhibit it via Giα subunits. Genetic approaches that disrupt these subunits have evidenced their involvement in the regulation of β-cell mass. Thus, deletion of the gene encoding Gsα in mouse pancreatic β cells using the Rat Insulin Promoter 2 (RIP2)-Cre transgene resulted in reduced β-cell mass, deficient insulin secretion and whole-body glucose intolerance in adult mice [21]. Conversely, inhibition of Gi/oα through the expression of the Pertussis toxin in β cells led to increased β-cell mass, augmented insulin secretion, and improved glucose tolerance [22]. In both models, the β-cell mass phenotype appeared during the early postnatal stage and was associated with altered β-cell proliferation. However, neither the mechanisms involved nor the effects on postnatal β-cell maturation were explored. Here we sought to investigate the involvement of Gsα-dependent signaling in postnatal β-cell development in detail. To ablate the Gnas gene (i.e., the gene coding for Gsα) from β cells, we used Ins1Cre knock-in mice, which present highly selective induction of Cre-dependent recombination in β cells [23]. Because the RIP2-Cre line used before is known to drive significant non-β-cell Cre expression, namely in the hypothalamus and pituitary [24], and to display transgene-related β-cell dysfunction [25], we reasoned that using Ins1Cre deleter mice should lead to unambiguous insights into the role of Gsα signaling in postnatal β cells. Our study demonstrates that the specific elimination of Gsα in β cells results in hyperglycemia and whole-body glucose intolerance.
This metabolic phenotype is associated with compromised postnatal establishment of the functional β-cell mass and entails both reduced β-cell expansion and deficient β-cell maturation. Mechanistically, we show that Gsα ablation leads to severe depletion of intracellular cAMP levels, reduced Creb activation, and multilevel dysregulation of the insulin transduction pathway in postnatal β cells.
Mice
Mice with loxP sites surrounding Gsα exon 1 (Gnasflox/flox) [26], Ins1Cre knock-in mice [23], and ROSA26-Stop-EYFP mice [27] were described elsewhere. Female Gnasflox/flox mice were mated to male Gnasflox/+;Ins1Cre/+ mice to generate Gsα knockout mice (Gnasflox/flox;Ins1Cre/+). As controls, we used littermates with Gnasflox/flox;Ins1+/+ and Gnasflox/+;Ins1+/+ genotypes, except for β-cell sorting experiments, where we used Gnasflox/+;Ins1Cre/+ and Gnas+/+;Ins1Cre/+. Experimental procedures were performed and postnatal tissues were collected at the indicated times, considering birth as postnatal day 0 (p0). Mice were bred and maintained on a standard pellet diet (2014S Teklad Global, Harlan Laboratories) and a 12:12 h light/dark cycle at the barrier animal facility of the University of Barcelona. Principles of laboratory animal care were followed (European and local government guidelines), and animal experimental procedures were approved by the Animal Research Committee of the University of Barcelona. Animals were euthanized by cervical dislocation. Genotyping of mice was performed by PCR on tail DNA using the primers supplied in Supplementary Table S1. The PCR was carried out using DreamTaq DNA polymerase (Thermo Fisher Scientific, Waltham, US), with denaturation at 95 °C for 3 min and 35 cycles of amplification (95 °C for 30 s, 60 °C for 30 s, 72 °C for 1 min), finishing with 10 min at 72 °C.
Whole-body metabolic tests
The intraperitoneal and oral glucose tolerance tests were performed after 6 h of food deprivation by the administration of D-glucose (2 g/kg body weight) via intraperitoneal injection or oral gavage, respectively. The insulin tolerance test was performed after 6 h of food deprivation by injection of insulin (Humulin; 0.5 U/kg body weight). Glucose levels in tail vein blood samples were measured at 0, 15, 30, 60, and 120 min after injection using a clinical glucometer and Accu-Chek test strips (Roche Diabetes Care, Sant Cugat, Spain). Glucose-stimulated insulin secretion (GSIS) was measured in 5-6 h fasted mice following an intraperitoneal injection of glucose (3 g/kg body weight). Tail vein blood was collected in heparinized capillary tubes (Microvette, Sarstedt, Nümbrecht, Germany) at the indicated time points. Plasma insulin concentration was measured using the Ultrasensitive Mouse Insulin ELISA (Crystal Chem, Zaandam, Netherlands). Plasma proinsulin levels were measured with the highly specific Mouse Proinsulin ELISA (Mercodia, Uppsala, Sweden).
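A common way to summarize tolerance curves sampled at these time points is the trapezoidal area under the curve. The snippet below is only an illustration with made-up glucose readings; the study reports the curves themselves and does not state an AUC analysis.

```python
import numpy as np

# Trapezoidal AUC for a glucose tolerance curve (hypothetical readings).
time_min = np.array([0, 15, 30, 60, 120])           # sampling scheme from the text
glucose_mg_dl = np.array([95, 310, 280, 210, 140])  # placeholder values

auc = np.trapz(glucose_mg_dl, time_min)             # units: mg/dl * min
print(f"glucose AUC(0-120 min) = {auc:.0f} mg/dl*min")
```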
Islet isolation and culture
Islets were isolated by collagenase digestion (Collagenase P, Roche Diagnostics GmbH, Mannheim, Germany) and discontinuous Histopaque (Sigma-Aldrich, Steinheim, Germany) gradient centrifugation (p28 and adult mice) [28] or by manual handpicking under a stereomicroscope (p7 mice). The collagenase solution (0.7 mg/ml) was injected into the common bile duct in p28 and adult animals or multi-injected at a concentration of 0.5 mg/ml into the pancreas of p7 mice. After isolation, islets were either used fresh or transferred to dishes containing RPMI-1640 medium (Sigma-Aldrich) with 11 mM glucose, 10% fetal bovine serum (FBS) (Biosera, Nuaille, France), 2 mM L-glutamine, and HyClone™ Penicillin-Streptomycin (100 U/ml penicillin, 100 µg/ml streptomycin; GE Healthcare Life Sciences, PGH, USA) for a 16-24 h recovery culture before performing additional procedures.
Gene expression
Total RNA was prepared from isolated islets using the NucleoSpin RNA XS kit (Macherey-Nagel, Düren, Germany). First-strand cDNA was prepared using the Superscript III RT kit and random hexamer primers (Invitrogen, Carlsbad, CA, USA). The reverse transcription reaction was carried out for 90 min at 50 °C with an additional 10 min at 55 °C. Quantitative real-time PCR (qPCR) was performed on an ABI Prism 7900 sequence detection system using GoTaq® qPCR Master Mix (Promega Biotech Ibérica, Alcobendas, Madrid, Spain). Expression relative to a housekeeping gene was calculated using the deltaCt method. We picked the moderately expressed gene Tbp as housekeeping gene for all genes except the pancreatic hormones and Iapp, which are more abundant and whose expression was compared to that of Actb.
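To make the deltaCt quantification concrete, the sketch below computes expression relative to the housekeeping gene and rescales so that the control group mean equals 1, as in the figure legends. The Ct values are hypothetical placeholders, and roughly 100% amplification efficiency is assumed.

```python
import numpy as np

def relative_expression(ct_target, ct_housekeeping):
    """deltaCt method: 2^-(Ct_target - Ct_housekeeping), assuming ~100% efficiency."""
    return 2.0 ** -(np.asarray(ct_target) - np.asarray(ct_housekeeping))

# Hypothetical Ct values for one gene, normalized to Tbp.
ctrl = relative_expression([24.1, 24.5, 23.9], [21.0, 21.2, 20.9])
ko = relative_expression([26.0, 25.7, 26.3], [21.1, 20.8, 21.0])

print(f"control (relative to own mean): {ctrl / ctrl.mean()}")
print(f"knockout vs control: {ko.mean() / ctrl.mean():.2f}")
```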
Optical projection tomography
Isolated pancreases were fixed in 4% paraformaldehyde and divided into the splenic, gastric, and duodenal lobes [30]. At this point, all samples were randomized and blinded for further optical projection tomography (OPT) processing, which was performed as previously described [30,31]. Sample processing for OPT measurements was performed as follows: pancreatic specimens were freeze-thawed to increase permeability, bleached (in DMSO, methanol, and hydrogen peroxide, 1:2:3, respectively; Thermo Fisher Scientific) to reduce endogenous fluorescence, and stained with primary guinea pig anti-insulin (1:500 dilution, Dako) and secondary goat Alexa 594 anti-guinea pig (1:500 dilution, Molecular Probes) antibodies. Once stained, all samples were mounted in 1.5% low-melting SeaPlaque™ Agarose (Lonza Bioscience, Basel, Switzerland), dehydrated in pure methanol (Thermo Fisher Scientific), and optically cleared using a 1:2 mixture of benzyl alcohol and benzyl benzoate (Acros Organics). OPT imaging of cleared samples was performed using a BiOPTonics SkyScanner 3001 (version 1.3.13, SkyScan, Belgium). Once all isotropic voxel-based images were collected, image data sets were identically processed using contrast limited adaptive histogram equalization (CLAHE), and post-acquisition misalignment correction was performed using Discrete Fourier Transform Alignment (DFTA). The processed and aligned frontal projection images were then reconstructed to tomographic sections (NRecon version 1.6.9.18, Bruker SkyScan) and uploaded to Imaris (version 8.1, Bitplane, UK). For insulin-positive volume quantification, an iso-surface algorithm with a threshold value between 5 and 8 and a voxel filtering of 10 (corresponding to a 50 µm diameter sphere) was applied to measure individual islet volumes and islet count.
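For orientation, here is a rough open-source analogue of the quantification steps named above (CLAHE, intensity threshold, minimum-size filter on connected components). The BiOPTonics/Imaris tools used in the study are proprietary, so this scikit-image sketch with placeholder parameters is an analogy, not the actual pipeline.

```python
import numpy as np
from skimage import exposure, measure

def islet_volumes(stack, threshold=6, min_voxels=10, voxel_um3=125.0):
    """CLAHE -> threshold -> connected components -> minimum-size filter.

    threshold mirrors the 5-8 iso-surface value quoted in the text (8-bit
    scale) and min_voxels the voxel filter of 10; voxel_um3 is hypothetical
    and must be taken from the scan metadata.
    """
    eq = exposure.equalize_adapthist(stack / stack.max())   # CLAHE
    labels = measure.label(eq * 255 > threshold)            # binarize + label
    vols = [r.area for r in measure.regionprops(labels) if r.area >= min_voxels]
    return np.asarray(vols) * voxel_um3                     # per-islet volumes

stack = np.random.rand(64, 64, 64)  # placeholder for a reconstructed OPT volume
vols = islet_volumes(stack)
print(f"islet count: {len(vols)}, total insulin+ volume: {vols.sum():.0f} um^3")
```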
cAMP measurements
Following the recovery culture, islets were incubated in Krebs solution containing 2.8 mM glucose for 1 h 30 min at 37 °C with agitation. Then, batches of 20 islets were either resuspended immediately in 30 µl of lysis buffer (Cisbio assays, Parc Marcel Boiteux, France) supplemented with 0.5 mM IBMX (Sigma-Aldrich) for determination of basal cAMP levels, or incubated for 20 min at 37 °C with IBMX (0.5 mM) and Forskolin (1 µM, Sigma-Aldrich) and then washed twice with HBSS-BSA and lysed as described above. Lysates were kept at −80 °C until cAMP determination using the cAMP dynamic 2 assay kit (Cisbio, Codolet, France).
Islet hormone content
Between 8 and 20 islets from p7, p14, and p28 mice were placed into an acid alcohol solution (75% ethanol, 0.18 N HCl), sonicated, and extracted overnight at 4 °C. The solution was then centrifuged to remove tissue in suspension and neutralized. Insulin and/or proinsulin concentrations were measured using mouse insulin and proinsulin ELISA kits (Mercodia).
Dissociated islet cells and proliferation assay
After isolation, and in order to eliminate fibroblasts, islets were cultured for 7 days in RPMI-1640 medium (Sigma-Aldrich) with 11 mM glucose, 10% FBS (Biosera), 2 mM L-glutamine, and antibiotics. Dissociated islet cells (DICs) were obtained by treatment with 0.05% trypsin-EDTA for 4-5 min, seeded onto 384-well plates (15,000 cells/well), and cultured for 24 h in RPMI-1640 medium supplemented as before. For proliferation assays, DICs were starved overnight in RPMI-1640 medium containing 8 mM glucose and 0.1% FBS and then incubated for an additional 24 h period in the same media supplemented with the following reagents: exendin-4 (200 nM), recombinant human Igf1 (10 nM), and recombinant human insulin (10 nM). During the last 5 h of culture, 5-bromo-2′-deoxyuridine (BrdU) was added, and BrdU incorporation was determined using the Cell Proliferation ELISA kit (colorimetric) following the manufacturer's instructions (Roche Diagnostics, Mannheim, Germany).
Statistics
Data are presented as mean ± standard error of the mean (SEM). Statistical significance was tested using unpaired Student's t-test, or two-way ANOVA for in vivo metabolic tests.
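A minimal sketch of these tests in Python, with placeholder numbers rather than study data; the two-way ANOVA used for the in vivo tests would analogously take genotype and time as factors (e.g., via statsmodels).

```python
import numpy as np
from scipy import stats

def mean_sem(x):
    """Mean and standard error of the mean, as reported in the figures."""
    x = np.asarray(x, dtype=float)
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

# Hypothetical per-animal values for one readout.
ctrl = [1.00, 0.92, 1.11, 0.97]
ko = [0.55, 0.61, 0.48, 0.66]

t_stat, p_value = stats.ttest_ind(ctrl, ko)   # unpaired, two-tailed t-test
print(f"control = {mean_sem(ctrl)}, knockout = {mean_sem(ko)}, P = {p_value:.4f}")
```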
β-cell specific Gsα knockout mice exhibit whole-body glucose intolerance
We generated mice in which the Gnas gene was ablated from β cells using Ins1Cre knock-in mice (β-GsαKO, hereafter). Initially, to specifically evaluate the extent of Cre-mediated recombination, we introduced the reporter R26-YFP allele. Immunostaining against YFP revealed that most β cells had recombined this allele by postnatal day 28 (p28; Figure 1A). Next, to determine the extent of Gsα downregulation, we compared the expression of Gsα-coding transcripts in RNA isolated from whole islets and from purified YFP+ cells of p28 β-GsαKO mice and their Cre-negative littermates. Gnas mRNA levels were reduced in knockouts compared to controls (by about 50% and 90% in p28 islets and sorted p28 YFP+ cells, respectively; Figure 1B). The smaller reduction observed in islets compared to purified YFP+ cells is possibly due to non-recombined β cells and/or non-β cells present in the islets. Next, we characterized whole-body glucose homeostasis in β-GsαKO mice. As a preliminary experiment, we confirmed that Ins1Cre/+ knock-in mice did not show changes in whole-body glucose tolerance or insulin sensitivity compared to their wild-type littermates (Figure S1). Hence, Cre-negative littermates served as controls throughout the rest of the study. β-GsαKO mice had a body weight similar to controls from p0 to p28 but weighed 20% less at 8 weeks of age (wo; Figure 1C).
Random blood glucose levels were significantly higher in β-GsαKO mice than in their control littermates from p14 onwards, with differences ranging from +28% at p0 to +180% at p28/8wo (Figure 1D). At 8wo, β-GsαKO mice displayed marked whole-body glucose intolerance without detectable changes in insulin sensitivity as compared to littermate controls (Figure S2). Importantly, impaired intraperitoneal and oral glucose tolerance in the context of normal insulin sensitivity was evident as early as p28 (Figure 1E-G), revealing that defects in glucose homeostasis develop during the first weeks of postnatal life. Glucose intolerance was associated with insufficient insulin, as indicated by blunted glucose-induced insulin secretion and lower random plasma insulin levels in p28 β-GsαKO mice as compared to controls (Figure 1H,I). In summary, β-GsαKO mice phenocopied the β-cell specific GsαKO mice generated with the RIP2-Cre transgene in terms of development of insulin-deficient diabetes. However, β-GsαKO mice did not exhibit the increased early postnatal lethality, linear growth retardation, or improved insulin sensitivity reported in the former model [21].

Fractional β-cell areas were initially comparable between knockout and control pancreases (Figure 2A,B). At p14 we observed a tendency toward decreased fractional β-cell area (−28%), which became significant at p28 (Figure 2A,B). In agreement, β-cell mass was lower in p28 knockout relative to control animals (−25%), and this difference became larger at 8wo (−50%; Figure 2C). By contrast, the α-cell mass was comparable between knockout and control mice from p14 to adulthood (Figure 2D). Using OPT [32], we identified an overall decrease in islet number (−11%) and a specific reduction in islet volume corresponding to small islets (<10^6 µm³) in p28 β-GsαKO mice as compared to controls (Figure S3). Together, these results demonstrate that loss of Gsα decreases β-cell mass.
Furthermore, this effect first appears during the second to fourth weeks of life, supporting a role of Gsα-dependent signaling in postnatal β-cell mass expansion.

To define the cause of reduced postnatal β-cell growth, we studied β-cell proliferation and death. We found fewer proliferating β cells in p28 β-GsαKO pancreases than in controls, using both Ki67 and phospho-histone H3 immunostaining (Figure 2E,F). The number of double-positive Insulin+/p-Histone3+ cells was already decreased at p14, indicating that β cells from lactating β-GsαKO pups underwent mitosis at a lower rate than control β cells. Compatible with decreased proliferation, gene expression analysis of cell cycle machinery genes revealed the downregulation of Ccna2 and Cdk4 and the upregulation of the inhibitor Cdkn1a (Figure 2G). Lastly, we examined whether the loss of Gsα was deleterious for β-cell survival, but did not detect β-cell death by TUNEL assay at p14 or p28 in β-GsαKO or control pancreases (data not shown). Accordingly, apoptosis and endoplasmic reticulum stress genes were expressed at similar levels in animals of both genotypes (Figure S4). Therefore, the loss of Gsα impairs postnatal β-cell expansion through decreased β-cell proliferation, ostensibly without changes in β-cell survival.
Deletion of Gsα in β cells impairs postnatal β-cell maturation
During early postnatal life, β cells not only expand in number but also acquire functional maturity [33]. To determine the extent to which the absence of Gsα affects this latter process, we surveyed the expression of functionally relevant genes in p28 β-GsαKO islets. We found that the genes coding for the prohormone convertases Pcsk1 and Pcsk2, as well as genes typically upregulated during β-cell maturation, such as the exocytosis regulator Syt4 [34] and the mature β-cell marker Ucn3 [6], were decreased in β-GsαKO islets. Conversely, other genes expressed in β cells, such as Iapp, Slc2a2, Gck, or Syt7, or the hormone genes, were not decreased. β-GsαKO islets also displayed reduced insulin content as measured by ELISA, ranging from 52% to 8% of controls at p7 and 8wo, respectively (Figure 3C). The magnification of this difference was mainly due to the absence of an age-dependent increase in total islet insulin content in β-GsαKO islets relative to controls (Figure 3C). Weaker insulin immunostaining in β-GsαKO islets, compared to size-matched control islets, suggests decreased insulin protein content per cell (Figure 3D). Mafa and Pdx1 are transcriptional regulators of the Insulin gene, and we therefore postulated that decreased Ins gene transcription might be responsible for the reduced islet insulin content of knockout islets. We found that Ins1 mRNA levels were reduced by ~50% (conceivably due to the inactivation of one Ins1 allele in Ins1Cre/+ knock-in mice), whereas Ins2 gene expression was unaltered at both p7 and p28 (Figure 3E). However, because Ins1 is expressed at much lower levels than Ins2 in β cells [35], this gene is not considered the primary determinant of insulin production. Therefore, translational and/or post-translational mechanisms are likely involved in the reduced insulin content of knockout islets. We postulated that reduced processing of proinsulin into mature insulin might contribute to the diminished islet insulin content, as the genes encoding the processing enzymes Pcsk1 and Pcsk2 were downregulated in knockout islets (Figure 3A). In line with this idea, proinsulin content was 1.8-fold higher in β-GsαKO islets relative to controls (Figure 3F). This difference in proinsulin content translated into a 4.2-fold increase in circulating proinsulin levels and an elevated proinsulin-to-insulin ratio (Figure 3G,H). Collectively, these observations indicate that Gsα-dependent signaling is involved in the acquisition of β-cell maturity during postnatal stages.

[Figure 2 caption fragment: β-cell mass values were calculated by multiplying fractional insulin area × pancreas weight (p14/p28: n = 5; 8wo: n = 3-4). (E,F) Percentage of β (insulin+) cells that are Ki67+ (E) or p-HH3+ (F) in pancreases from p14 and p28 β-GsαKO (n = 5) and control (n = 4) mice. (G) qPCR of the indicated cell cycle and proliferation genes in p28 β-GsαKO (n = 4-10) and control (n = 4-9) islets, normalized to Tbp and expressed relative to control (set to 1). All bars represent the mean ± SEM; *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001 vs. control animals by two-tailed Student's test.]

[Figure 3 caption fragment: expression normalized to Tbp and shown relative to control (set to 1). (B) Pdx1 and Mafa mRNA levels in p7 β-GsαKO (n = 5-6) and control (n = 6) islets, measured by qPCR. (C) Insulin content of β-GsαKO and control islets isolated at the indicated ages, determined by ELISA (p7, n = 4-5; p14, n = 11; p28, n = 8-14; 8wo, n = 28-32). (D) Representative immunofluorescence images of size-matched islets; insulin in green, glucagon in red, nuclei in blue; identical exposure times; scale bars, 10 µm. (E) Ins1 and Ins2 expression by qPCR at p7 (n = 3-6) and p28 (n = 6-8).]
Loss of Gsα reduces intracellular cAMP and Creb-dependent signaling in postnatal islets
As an initial step to gain insight into the molecular mechanisms responsible for the reduced functional β-cell mass of β-GsαKO mice, we studied intracellular cAMP levels in isolated islets from p28 and 8wo mice. At both ages and under basal conditions, β-GsαKO islets presented cAMP levels of approximately 15% of those of age-matched controls (Figure 4A). Further, although the combination of the adenylyl cyclase activator forskolin and the phosphodiesterase inhibitor IBMX elevated cAMP levels by 4-fold and 12-fold in p28 and 8wo β-GsαKO islets, respectively, the cAMP content remained much lower than in stimulated controls (8% and 14% of controls at p28 and 8wo, respectively; Figure 4A). These results show that loss of Gsα severely depletes intracellular cAMP levels, jeopardizing cAMP-dependent signaling in islets. Protein kinase A (PKA) is considered one of the primary mediators of cAMP signaling in the cell. We surveyed gene expression of components of the PKA branch and found that mRNAs for the regulatory and catalytic subunits of PKA, Prkar1b and Prkaca, as well as the anchor protein Akap11, were reduced in knockout islets as compared to controls (Figure 4B). Likewise, the expression of the Creb3 gene encoding the cAMP response element binding (Creb) transcription factor, a main downstream effector of PKA signaling, was also decreased. To validate these results, we studied Creb at the protein level. At p28, β-GsαKO islets contained less Creb protein and displayed lower Creb phosphorylation (relative to total Creb) than controls (Figure 4C). Our prior results showing downregulation of Ccna2, Mafa, and Pcsk1 in p28 β-GsαKO islets (see Figures 2G and 3A) reinforce this finding, as these genes have been reported to be direct Creb targets [15,36,37]. Therefore, these results support a reduction of PKA/Creb signaling in p28 β-GsαKO islets.

[Figure 4 caption fragment: (E,F) qPCR of the indicated genes in p28 β-GsαKO (n = 4-8) and control (n = 4-9) islets, normalized to Tbp and expressed relative to control (set to 1). All data points represent the mean ± SEM; *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001 vs. controls by two-tailed Student's test.]
In addition to PKA, cAMP can trigger signaling through the guanine nucleotide exchange proteins directly activated by cAMP (Rapgef, formerly known as Epac factors) [38], which can then regulate gene expression via the Calcineurin/Nuclear factor of activated T cells (Nfat) family of transcriptional regulators [39]. Importantly, Nfat signaling has been shown to regulate postnatal β-cell development [40].
Among the Rapgef and Nfat genes expressed in islets, only Rapgef3 gene expression was changed (30% increase) in β-GsαKO islets as compared to controls (Figure 4B). In islets, Epac/Rap1 regulates the MAP kinases Erk1/2 [41], which have been reported to phosphorylate Creb [42] and induce β-cell proliferation [43]. Similar to Creb, total Erk1/2 protein was modestly decreased in β-GsαKO islets (Figure 4D). However, the degree of Erk1/2 activation was similar in knockout and control islets (Figure 4D). Together, these results indicate that signaling through the Epac branch is not grossly impacted by Gsα loss in p28 islets. Next, we investigated whether loss of Gsα had effects on the expression of other proximal components of the cAMP signaling machinery. Of the GPCR-Gs receptors tested, we found that the incretin receptors Glp1r and Gipr, the cannabinoid receptor Gpr119, and the PACAP/VIP receptor Vipr were significantly downregulated in β-GsαKO islets as compared to control islets, whereas the glucagon receptor (Gcgr) was upregulated (Figure 4E). By contrast, gene expression of GPCR desensitizers, including β-arrestins (Barr1, Barr2) and G protein-coupled receptor kinases (Grk2), was unaltered in p28 β-GsαKO islets (Figure 4E). Lastly, we assessed whether Gsα inactivation exerted compensatory effects on the expression of other genes involved in regulating cAMP levels, including the mechanistically opposed G protein subclass Gi, phosphodiesterases, and adenylyl cyclases. However, we found that all the genes assayed were similarly expressed in β-GsαKO and control islets (Figure 4F).
Loss of Gsα impairs insulin signaling in postnatal islets
Mouse models targeting one or more proteins in the insulin/insulin-like growth factor (Igf) transduction pathway show that this pathway regulates β-cell proliferation [44-47]. Expression of proximal components of this pathway, namely the Igf1 receptor (Igf1r) [20,48] and Insulin receptor substrate 2 (Irs2) [49], is known to be regulated by cAMP, raising the possibility that impaired insulin/Igf signaling causes the β-cell mass phenotype of β-GsαKO mice. However, this hypothesis was rejected in RIP2-Cre/GsαKO mice due to the normality of Irs2 expression in adult islets from that model. Here, we re-examined this notion in more detail using islets at a younger age. First, we assessed the activation status of the main intracellular effector of the insulin/Igf1 pathway, the kinase Akt/PKB. We found that phosphorylated Akt (the active state) was significantly reduced in p28 β-GsαKO islets relative to control islets (Figure 5A), confirming that the loss of Gsα negatively affects insulin-signaling activity in postnatal β cells. We then measured the phosphorylation of ribosomal protein S6, which is activated downstream of Akt and required for Akt-driven β-cell proliferation [50], and found that it was decreased in β-GsαKO islets (Figure 5B), linking anomalies in this pathway to the postnatal β-cell expansion defect of β-GsαKO mice.
To reveal the molecular alterations responsible for the abnormal insulin/Igf signaling activity, we first examined the expression of the cAMP/Creb target Irs2 in islets from p28 β-GsαKO mice. We found that, in agreement with the earlier study in the RIP2-Cre/GsαKO model, Irs2 gene expression was unaltered (Figure 5C). As Irs2 is expressed in all cell types of the islet [51], we also assayed its expression in sorted β cells and found that it was significantly downregulated in knockout β cells compared to controls (Figure 5C). Though this result confirms that Irs2 is a target of Gsα signaling in β cells, β-cell specific inactivation of Irs2 had no impact on β-cell mass in the early postnatal period [46]. Thus, alterations in Irs2 alone cannot explain the β-cell expansion defect of β-GsαKO mice.
cAMP also regulates the expression of the Igf1 receptor (Igf1r) in islets [20,48]. Though genetic ablation of Igf1r has no impact on β-cell mass [52], β-cell specific compound deletion of Igf1r and the insulin receptor (Insr) was reported to reduce β-cell mass as early as p14 [44]. This body of knowledge prompted us to look at the status of both receptors in β-GsαKO mice. Using immunoblot analysis, we found that Igf1r and Insr protein levels were decreased in p28 β-GsαKO islets (Figure 5D).
In contrast, though Igf1r gene expression was downregulated, Insr mRNA levels were similar in p28 β-GsαKO islets and controls (Figure 5E), suggesting that different mechanisms cause the depletion of these two receptors in β-GsαKO islets. The Insr has two isoforms (i.e., Insr-A and Insr-B) derived from alternative splicing of the same pre-Insr mRNA. We performed conventional RT-PCR using primers in exons 10 and 12, which permit amplification of the two splice variants. As shown in Figure 5F, we observed that β-GsαKO islets presented a higher InsrA:InsrB ratio. Using qPCR, we quantified Insr-B (exon 11-including) transcripts and found that they were significantly reduced at p28, using both islet and sorted β-cell RNA (Figure 5G), confirming that the loss of Gsα specifically affects the expression of the Insr-B isoform within β cells. The InsrA:InsrB ratio is regulated by several splicing factors, some of which promote inclusion (i.e., Srsf1, Srsf3, Mbnl1), whereas others promote exclusion (Celf1) of exon 11 [53]. The relative expression of these factors determines the degree of exon 11 inclusion and thereby the Insr isoform distribution. Nicely correlating with the reduced Insr-B mRNA levels, we found that p28 β-GsαKO islets exhibited decreased Srsf1 and increased Celf1 expression (Figure 5H). Together, these findings reveal that Gsα signaling regulates not only Igf1r but also Insr in β cells. Importantly, the downregulation of Igf1r and Insr-B was evident as early as p7 (Figure 5I), placing defective signaling through these receptors at the correct time to negatively impact postnatal β-cell mass expansion in β-GsαKO mice.
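To make the isoform readout concrete: a standard way to express such exon-10/12 RT-PCR data is the percent-spliced-in (PSI) of exon 11, where the exon-11-included product corresponds to Insr-B. The band intensities below are hypothetical densitometry values for illustration, not measurements from the study.

```python
def psi_exon11(insr_a_intensity, insr_b_intensity):
    """PSI = included / (included + excluded); lower PSI = higher InsrA:InsrB."""
    return insr_b_intensity / (insr_a_intensity + insr_b_intensity)

ctrl_psi = psi_exon11(insr_a_intensity=900.0, insr_b_intensity=1100.0)
ko_psi = psi_exon11(insr_a_intensity=1300.0, insr_b_intensity=700.0)
print(f"control PSI = {ctrl_psi:.2f}, knockout PSI = {ko_psi:.2f}")
```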
Finally, we measured BrdU incorporation in dissociated islet cells (DICs) from p28 β-GsαKO and control islets following incubation with insulin, Igf1, or the Glp1 receptor agonist exendin-4 as a control. While control DICs augmented BrdU incorporation in response to all three molecules, β-GsαKO DICs only responded to Igf1 (Figure 5J). Therefore, Gsα ablation impairs the proliferative activity of insulin in postnatal islets, indicating that deficient insulin signaling contributes to the β-cell mass phenotype of β-GsαKO mice.
DISCUSSION
In mice, pancreatic β-cell development culminates in two essential milestones during the first weeks of postnatal life. First, the proliferation of neonatal β cells leads to the rapid expansion of the β-cell mass. Second, β cells acquire the ability to secrete appropriate amounts of insulin in response to glucose. Here we demonstrate that the specific disruption of Gsα signaling in β cells compromises both processes, resulting in an inadequate functional β-cell mass that cannot maintain proper glucose homeostasis in adult life. Importantly, these changes occur in the absence of the other alterations described in RIP2-Cre/GsαKO mice [21], including poor postnatal growth, reduced survival, or enhanced in vivo insulin sensitivity, indicating that off-target Cre-mediated recombination events likely caused those effects [24] in the former model.

[Figure 5 caption fragment: (H) qPCR of the indicated genes in p28 β-GsαKO (n = 5-8) and control (n = 3-8) islets, normalized to Tbp and expressed relative to control (set to 1). (I) Igf1r and Insr-B mRNA levels at p7 in β-GsαKO (n = 5-9) and control (n = 3-6) islets, measured by qPCR and normalized likewise. (J) Proliferation determined by BrdU incorporation in DICs prepared from 5wo β-GsαKO and control islets and stimulated for 24 h with exendin-4 (200 nM), insulin (11 nM), or Igf1 (11 nM) (n = 3, with 6 replicates per experiment); values are expressed as fold-increase over unstimulated DICs. All data points represent the mean ± SEM; *P < 0.05, **P < 0.01, ***P < 0.001 vs. control animals by two-tailed Student's test.]
Among the Gsα-coupled receptors expressed by β cells are the receptors for the incretins Glp1 and Gip, the receptor for PACAP/VIP, and the orphan receptor Gpr119, all of which have been shown to potentiate glucose-induced insulin secretion and hence regulate β-cell function. Incretins, especially Glp1, are also inducers of β-cell proliferation and could therefore be potentially involved in the defects in postnatal β-cell growth of β-GsαKO mice. However, mouse models of disrupted incretin receptor action lack alterations in β-cell mass establishment under homeostatic conditions [54-57], challenging the idea that defective incretin signaling underlies the defects in postnatal β-cell growth observed in β-GsαKO mice. Likewise, the absence of β-cell mass phenotypes after the genetic disruption of Gpr119 or Vip in mice challenges dominant roles of these two molecules in postnatal β-cell expansion [58,59]. Multiple other Gsα-GPCRs exist, some of which have been described in young β cells [22]. It will be interesting to study whether they are affected in β-GsαKO mice and involved in the β-cell growth defect observed in this model. An alternative view is that the β-cell defects observed in β-GsαKO mice do not derive from anomalies in individual ligands or receptors but rather are a consequence of the complete blockade of Gsα-GPCR signaling and the resulting loss of counterregulation of Giα-GPCR signaling, which is known to limit β-cell expansion during the perinatal period [22]. In eukaryotic cells, two primary intracellular mediators, the kinase PKA and the exchange factors Epac, control the cellular functions of cAMP [38]. Our results indicate that the inactivation of Gsα impairs PKA signaling, as illustrated by the reduced activation of its principal effector, the transcription factor Creb. In accordance, several genes reported to be regulated by Creb activity and involved in β-cell proliferation (i.e., Ccna2, Cdkn1a) and maturation (i.e., Pdx1 and Mafa) were downregulated in β-GsαKO islets [13-15,35,36], directly connecting Gsα/PKA/Creb signaling to postnatal β-cell development. Interestingly, Mafa disruption has been shown to reduce β-cell proliferation and alter β-cell gene expression by 3 weeks of age [60]. The involvement of the Epac factors in the abnormal β-cell development of β-GsαKO mice is less clear. Epac proteins are known to participate in exocytosis and influence the release of Ca2+ [10], which could potentially link Gsα/cAMP with the calcineurin/Nfat pathway [39], a well-known regulator of postnatal β-cell proliferation and maturation [40].
Interestingly, research demonstrates that the mitogenic effects of Glp1r activation in human islets require activation of the Nfat genes [63]. However, we found that neither Epac nor Nfat genes were modified in β-GsαKO islets. Likewise, genes found to be downregulated upon disruption of calcineurin/Nfat signaling, such as Ins2, Gck, Slc2a2, or Iapp, were unaffected, implying that Epac/Nfat proteins are unlikely to be the primary mediators of Gsα effects in early postnatal β-cell development.
Our work shows that Gsα inactivation jeopardizes insulin signaling in postnatal β cells. Though the crosstalk between the Gsα/cAMP and insulin pathways has been previously recognized in adult β cells, this study reveals that this connection is present from early postnatal life.
At p28, Gsα-depleted β cells exhibit diminished expression of Igf1r and Irs2, two previously recognized cAMP/Creb targets [20,48,49]. Intriguingly, here we uncover a new interaction at the level of the insulin receptor that may help explain the postnatal β-cell expansion defect of β-GsαKO mice. Indeed, the β-cell mass phenotype of β-GsαKO mice is reminiscent of the phenotype described in mice carrying a compound deletion of Igf1r and Insr in β cells, namely reduced postnatal β-cell mass associated with decreased phosphorylated Akt and Mafa expression [44]. Of note, reduced β-cell mass was only observed at adult stages upon inactivation of Insr alone [45], and not observed at all upon single deletion of Igf1r [52], suggesting that insulin might play a dominant role in postnatal β-cell growth. In this regard, it is significant that β-GsαKO islets exhibit severely depleted insulin content from shortly after birth. It may be argued that diminished autocrine insulin signaling (due to the combination of decreased insulin content and downregulation of insulin signaling elements) underlies the β-cell expansion defect of β-GsαKO mice. We acknowledge that the autocrine actions of insulin are still a matter of debate [64,65]. However, most of the studies addressing this question have used adult β cells, whose intrinsic features and microenvironment (i.e., proportion of other endocrine cells, islet vascularization, or innervation) differ from those of postnatal β cells.
Thus, it is plausible that autocrine insulin signaling plays distinct roles in young and adult β cells.
The molecular mechanisms that connect Gsα with the insulin receptor remain to be further elucidated. The insulin receptor protein has two isoforms (A and B) generated by alternative splicing of the Insr gene (A: exon 11 excluded; B: exon 11 included) that differ in binding affinities and activation of downstream signaling pathways [66]. Moreover, the relative proportion of these isoforms is cell-specific and can vary during development and under changing environmental conditions. Here, we show that loss of Gsα results in decreased levels of the Insr-B isoform in β cells. In agreement, gene expression of the splicing factor Srsf1, which promotes exon 11 inclusion, is reduced, whereas gene expression of the factor Celf1, which causes exon 11 skipping, is increased in β-GsαKO islets. Remarkably, insulin induces the generation of Insr-B in islets [67]. Therefore, reduced Insr-B levels could be a consequence of impaired insulin signaling activity. In support of this possibility, the splicing factor Srsf1 has also been reported to enhance Insr exon 11 inclusion upon insulin stimulation in islets [67]. To date, the role of Insr alternative splicing and the importance of the different Insr isoforms for pancreatic β-cell proliferation, survival, or function are unclear. Interestingly, the Insr-B isoform is associated with stronger insulin binding and might play a role in β-cell survival [67]. It will be interesting to address how alternative splicing of the Insr regulates postnatal β-cell proliferation and/or maturation in the future. Collectively, our study, using conditional ablation of Gsα in β cells, reveals critical functions of Gsα-dependent signaling in postnatal β-cell expansion and maturation. We also show that inactivation of Gsα has an early and broad impact on several proximal elements of the insulin signaling transduction machinery and propose that these alterations are involved in the impaired postnatal β-cell development of β-GsαKO mice. Remarkably, we identify the insulin receptor as a target of Gsα-dependent signaling in postnatal β cells. This finding encourages further work to decipher whether this interaction is conserved in adult β cells and whether it could be exploited to expand or preserve adult β-cell mass in diabetes.
AUTHOR CONTRIBUTION
BSN conducted all experiments. RFR and AG provided assistance with mouse experiments and proliferation assays. MPJ, EFR, JMC, and YE provided assistance in several experiments. JM and SD performed cAMP studies. MH and UA performed OPT studies. LSW provided floxed Gnas mice. RGo and RGa conceived the project. BSN, JV, RGo, and RGa analyzed and discussed the data. BSN and RGa wrote the manuscript. All authors read and approved the manuscript.
Quantum tori, mirror symmetry and deformation theory
We suggest compactifying the universal covering of the moduli space of complex structures by non-commutative spaces. The latter are described by certain categories of sheaves with connections which are flat along foliations. In the case of abelian varieties this approach gives quantum tori as a non-commutative boundary of the moduli space. Relations to mirror symmetry, modular forms and deformation theory are discussed.
1.1
Mathematical modeling of dualities in quantum physics is a very interesting and intriguing area of research. It became clear after the work of Kontsevich (see [Ko1]-[Ko3]) that the "right" framework for general duality theorems is the (yet non-existing) theory of moduli spaces of $A_\infty$-categories. Informally, an $A_\infty$-category (with extra conditions imposed) models a "projective non-commutative space" together with the formal moduli space of its deformations. The moduli space of such non-commutative spaces (whatever it is) consists of many "connected components". The boundary of the compactification of a component may contain "cusps". A non-commutative space can degenerate into a commutative one at a cusp. In some cases one can assign to a pair (component, cusp) a "dual" pair. The corresponding $A_\infty$-categories are equivalent. In particular, their Hochschild cohomologies (interpreted as tangent spaces to the moduli space of $A_\infty$-categories) are isomorphic. This idea of Kontsevich has received spectacular confirmation in the homological mirror symmetry program (see [Ko1], [KoSo2]).
1.2
Even if the degeneration of a non-commutative space at a cusp is not commutative, one hopes to extend the dualities to the boundary stratum. The purpose of this paper is to explain, without details, the idea of non-commutative compactification in the example of abelian varieties. It will be discussed at length elsewhere (see [So]). The paper is an extended version of a series of talks I gave at Ecole Polytechnique, MSRI, Stanford University, the Max-Planck-Institut für Mathematik in Bonn, the Oberwolfach workshop on noncommutative geometry, and the Moshe Flato Euroconference in Dijon in 1999-2000. In this paper we are going to treat many aspects informally, in order to explain the main ideas.
Let us start with an example of what can be thought of as a noncommutative compactification.
Example 1 The universal covering of the moduli space of elliptic curves admits a "non-commutative" compactification with the boundary stratum consisting of the universal covering of the moduli space of quantum (= non-commutative) tori. Moreover, the SL(2, Z)-symmetry extends from the upper half-plane to the boundary real line.
We understand a "space" as a category of certain sheaves on it. Thus the "moduli space" of "spaces" corresponds to the "moduli space" of categories. The latter "moduli space" M can be a usual manifold or orbifold (a compactification of M is often a compact manifold with corners). Let us consider a path $\gamma: [0,1] \to M$ between an interior point of the compactified moduli space and a point of the boundary. Then we have a 1-parameter family of categories $C_t$ along the path. It can happen that $C_t$, $t \in [0,1)$, is a category of sheaves on a topological space, while $C_1$ is not. We are going to treat $C_1$ as a category of sheaves on a "non-commutative" space. Such non-commutative spaces constitute a "non-commutative stratum" of the compactification $\overline{M}$.
In the example above, we think about an elliptic curve $E_q = \mathbb{C}^*/q^{\mathbb{Z}}$, $q = e^{2\pi i\tau}$, $\mathrm{Im}(\tau) > 0$, as about the bounded derived category $D^b(E_q)$ of coherent sheaves on it (we are going to discuss below the reasons why derived categories appear in the story). Let us consider the universal covering of the moduli space of complex structures. Thus we have a family of categories $D^b(E_q)$ parametrized by the upper half-plane $H = \{\tau \mid \mathrm{Im}(\tau) > 0\}$. When $\mathrm{Im}(\tau) = 0$ the elliptic curve does not exist as a "commutative" space. We will argue that the degenerate object is represented by the derived category of certain modules over the algebra of functions on a quantum torus. From this point of view elliptic curves and quantum tori belong to the same family of non-commutative spaces. Also, the SL(2, Z)-symmetry should be present for all $q \neq 0$. See Section 3 for more details.
1.3
Relevance of derived categories (more technically, triangulated $A_\infty$-categories with finite-dimensional Hochschild cohomology) can be roughly justified by the following arguments: 1) Deformation theory of an $A_\infty$-category is controlled by its Hochschild complex. Being defined properly (see [KoSo1]), such a category always appears together with the formal moduli space of its deformations.
2) Reconstruction theorems, mainly due to A. Bondal, D. Orlov and A. Polishchuk. Here are two examples.
Theorem 1 ([BO]). Let X be a smooth irreducible variety with ample canonical or anticanonical sheaf, and Y be a smooth algebraic variety. If the bounded derived categories of coherent sheaves $D^b(X)$ and $D^b(Y)$ are equivalent as triangulated categories, then X is isomorphic to Y.
If X is a Calabi-Yau manifold (for example, an elliptic curve) then the theorem is not true. Nevertheless, often one can recover from $D^b(X)$ some information about X.
Theorem 2 ([O1]). For each abelian variety X there are finitely many non-isomorphic abelian varieties which have the bounded derived category of coherent sheaves equivalent to $D^b(X)$.
1.4
Physicists discovered quantum tori in various theories (see [CDS], [SW]). Morita equivalence of quantum tori was interpreted as a new duality (see [RS], [S1]). One hopes that the concept of non-commutative compactification will help to construct and investigate mathematical models for dualities in quantum physics.
As an example let us consider the Homological Mirror Conjecture (HMC) of Kontsevich (see [Ko1]). The Homological Mirror Conjecture says that the category $D^b(E_q)$ is equivalent to the bounded derived category $D^b(F(T^2_q))$ of the Fukaya category $F(T^2_q)$ (see for example [Ko1] for the definition of the Fukaya category) of the symplectic torus $T^2_q = (T^2, \frac{1}{\mathrm{Re}(\tau)}\,dx_1 \wedge dx_2)$. Let us imagine that, as q approaches a point on the circle $|q| = 1$, both derived categories degenerate into the same $A_\infty$-category. Hence the non-commutative degenerations become manifestly equivalent. This might give an insight into the HMC. We will speculate about the category that can appear as such a degeneration. Roughly speaking, it is the (derived) category of bundles with connections which are flat along a foliation. The foliations appear as degenerate complex structures. Global sections of such bundles are modules over the algebra of functions on a quantum torus. Thus one gets a "non-commutative" description of the corresponding boundary stratum of the compactified universal covering of the moduli space of complex structures.
The foliations which appear in the story are not arbitrary. They carry affine structures on leaves. This makes the whole picture similar to the topological mirror symmetry of Strominger-Yau-Zaslow (see [SYZ]). In [SYZ] Calabi-Yau manifolds are foliated by special Lagrangian tori, hence fibers carry affine structures. On the other hand, the SYZ picture is related to the "large" complex structure limit, while our approach seems to be of a different nature. Hopefully, they both correspond to two different strata of the boundary of the compactified moduli space of N = 2 superconformal field theories. The commutative stratum was discussed in [KoSo2] in the case of abelian varieties (see also Section 5 below).
1.5
Two-dimensional quantum tori can be interpreted in many different ways. For example, they are Morita equivalent to algebras of foliations. Quantum tori can also be thought of as quantizations of tori equipped with constant symplectic structures (they appear in open string theory in this way). Hence they fit into the framework of deformation theory. Deformation theory of a Poisson manifold X gives rise to a formal family of categories $C_\hbar$ of modules over a quantized algebra $C^\infty(X)[[\hbar]]$ of smooth functions on X, where $\hbar$ is the formal parameter. The problem is to construct a "global" moduli space. The germ at $\hbar = 0$ of this moduli space contains families of $A_\infty$-categories $C_\hbar$, $\hbar \in \mathbb{R}$, such that the formal completion of the family at $\hbar = 0$ gives the formal family above. In general $\mathbb{R}$ can be replaced by some parameter space $M_X$. If the global parameter space exists, one can speak about "dualities" for the "family of theories parametrized by $M_X$". In the case of quantum tori, the dualities can correspond to Morita equivalence of quantum tori, or to the equivalence of some subcategories of the full categories of modules. The "duality group" should be SL(2, Z) in order to agree with the interpretation of quantum tori as degenerate elliptic curves.
For general Poisson manifolds one can ask the following question.
Question 1 For a given Poisson manifold X, is there a global parameter space $M_X$? If it exists, how does one describe the points corresponding to equivalent categories? What is the "duality group" (and why is it a group)?
We are going to discuss later in the paper conjectures motivated by quantum groups. Interesting ideas in this direction can be found in [Fad].
1.6
Here is the content of the paper. In Section 2 we discuss analogies between quantum tori over R and abelian varieties over C. In Section 3 we explain why quantum tori can be thought of as points of the boundary of the compactified Teichmüller space of an elliptic curve. Main idea here is to treat the Thurston boundary (given geometrically in terms of foliations) as a non-commutative space (i.e. as a category). Section 4 is devoted to mirror symmetry for quantum tori. We suggest to "compactify" the conventional homological mirror symmetry, extending it to the boundary of the universal covering of the moduli space of complex structures (Teichmüller space in the case of elliptic curves). Section 5 contains a comparison with the "commutative" picture. This material is borrowed from [KoSo2]. In Section 6 we discuss possible relations to theta functions and quasi-modular forms. We end this section with conjectures about dualities in quantum groups.
Acknowledgements. I would like to thank A. Bondal, A. Connes, P. Deligne, M. Douglas, L. Faddeev, V. Ginzburg, K. Gawedzki, M. Gromov, D. Kazhdan, Yu. Manin, W. Nahm, D. Orlov, A. Polishchuk, A. Rosenberg, A. Schwarz, and D. Zagier for useful discussions. I am grateful to Maxim Kontsevich, who shared with me many of his ideas and supplied me with illuminating examples. Some of them are used in the paper (especially in Sections 2.3, 3.1, 3.2, 5). I thank S. Barannikov for comments on the paper.
I also thank IHES for hospitality and CMI for financial support.
2 Quantum tori and their representations
Generalities
We start with the definition of an algebraic quantum torus (see for example [M1]). Let K be a complete normed field, let L be a free abelian group of finite rank, and let $\alpha: L \times L \to K^*$ be a bicharacter. The quantum torus $T(L, \alpha)$ is understood through its ring of functions.
More precisely, the coordinate ring $A(T(L,\alpha))$ of the quantum torus is a K-algebra with unit generated by elements $e(n)$, $n \in L$, subject to the relations $e(m)e(n) = \alpha(m,n)\,e(m+n)$.
The algebra of analytic functions $A^{an}(T(L,\alpha))$ is a completion of $A(T(L,\alpha))$ which consists of formal series $\sum_{n \in L} a_n e(n)$ with the coefficients $a_n$ decreasing faster than any power of $|n|$ as $|n| \to \infty$.
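To make the multiplication rule concrete, here is a small computational sketch (an illustration, not taken from the paper) of the product in $A(T(L,\alpha))$ for $L = \mathbb{Z}^2$: elements are stored as finitely supported maps $n \mapsto a_n$, and we take the sample bicharacter $\alpha(m,n) = \exp(2\pi i\,\theta(m_1 n_2 - m_2 n_1))$ with a hypothetical parameter θ.

```python
from collections import defaultdict
import cmath

THETA = 0.3  # hypothetical deformation parameter

def alpha(m, n):
    """Sample bicharacter alpha(m, n) = exp(2*pi*i * theta * (m1*n2 - m2*n1))."""
    return cmath.exp(2j * cmath.pi * THETA * (m[0] * n[1] - m[1] * n[0]))

def multiply(f, g):
    """(f * g)(k) = sum over m + n = k of alpha(m, n) * a_m * b_n."""
    h = defaultdict(complex)
    for m, a in f.items():
        for n, b in g.items():
            h[(m[0] + n[0], m[1] + n[1])] += alpha(m, n) * a * b
    return dict(h)

e1, e2 = {(1, 0): 1.0}, {(0, 1): 1.0}
print(multiply(e1, e2))  # alpha(e1, e2) * e(1, 1)
print(multiply(e2, e1))  # alpha(e2, e1) * e(1, 1): the algebra is non-commutative
```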
Assume that K carries an involution $x \mapsto \bar{x}$ such that $|x\bar{x}| = |x|^2$. Suppose that $\alpha(m,n)^{-1} = \overline{\alpha(m,n)}$ for any $m, n \in L$. Then there is a natural involution of $A(T(L,\alpha))$ given by $e(m)^* = e(-m)$. It makes $A_\alpha = A(T(L,\alpha))$ into a K-algebra with involution. Clearly, in this case $\alpha$ takes values in the subgroup $K_1 = \{x \in K : |x| = 1\}$. We will call such quantum tori unitary. We consider their coordinate rings as objects of the category of algebras with involutions.
Definition 2 Let $T(L,\alpha)$ be a unitary quantum torus. The algebra $B_\alpha := C^\infty(T(L,\alpha))$ of smooth functions on $T(L,\alpha)$ consists of series $f = \sum_{n \in L} a_n e(n)$ where the sequence $a_n$ decreases faster than any power of $|n|$ as $|n| \to \infty$. This terminology is justified by the case $K = \mathbb{C}$, where one gets the algebra of smooth functions on a real torus. One has the natural embedding of algebras with involutions $A_\alpha \subset B_\alpha$. We are going to consider unitary quantum tori over the field $\mathbb{C}$ unless we say otherwise. The corresponding classical torus $L_\mathbb{R}/L$ will be denoted by $T(L)$ (here $L_\mathbb{R} = L \otimes \mathbb{R}$). Let $\Phi: L_\mathbb{R} \to L^*_\mathbb{R}$ be a linear map such that $\Phi(x)(y) + \Phi(y)(x) = 0$. We set $\varphi(x,y) = \Phi(x)(y)$. Thus we get a skew-symmetric bilinear form $\varphi: L_\mathbb{R} \times L_\mathbb{R} \to \mathbb{R}$. Taking $\alpha(x,y) = \exp(2\pi i\varphi(x,y))$ we obtain a unitary quantum torus, which will be denoted by $T(L,\varphi)$ or $T(L,\Phi)$. The corresponding algebras of functions (algebraic and smooth) will be denoted either by $A_\varphi, B_\varphi$ or by $A_\Phi, B_\Phi$.
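For intuition over $K = \mathbb{C}$: for $L = \mathbb{Z}^2$ and a rational parameter $\theta = p/N$, the relation between the generators $U = e(1,0)$ and $V = e(0,1)$ reduces to $UV = e^{2\pi i\theta}VU$, which is realized by the familiar $N \times N$ clock and shift matrices. This finite-dimensional check is a standard illustration, not a construction from the paper.

```python
import numpy as np

p, N = 1, 5
omega = np.exp(2j * np.pi * p / N)
U = np.diag(omega ** np.arange(N))         # clock matrix
V = np.roll(np.eye(N), 1, axis=0)          # shift matrix

assert np.allclose(U @ V, omega * V @ U)       # U V = e^{2 pi i theta} V U
assert np.allclose(U @ U.conj().T, np.eye(N))  # unitary, matching e(m)* = e(-m)
assert np.allclose(V @ V.conj().T, np.eye(N))
print("clock and shift matrices satisfy the quantum torus relations")
```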
Set $V_L = L_\mathbb{R} \oplus L^*_\mathbb{R}$, and let Q be the canonical quadratic form on $V_L$ coming from the duality pairing, $Q(x \oplus \xi) = \xi(x)$. One can define the Grassmannian $Gr_0(V_L)$ of Lagrangian subspaces in $V_L$. Then the map $\Phi \mapsto \mathrm{graph}(\Phi)$ identifies quantum tori with an open subset of $Gr_0(V_L)$. Let $O(V_L, Q)$ be the group of linear automorphisms of $V_L$ preserving the form Q (the orthogonal group), and $SO(V_L, Q)$ the corresponding special orthogonal group. Then $O(V_L, Q)$ and $SO(V_L, Q)$ act transitively on $Gr_0(V_L)$. We will denote by $SO(L, L^\vee)$ the subgroup of $SO(V_L, Q)$ which preserves the lattice $L \oplus L^\vee$, where $L^\vee = \mathrm{Hom}(L, \mathbb{Z})$.
Morita equivalence
The following theorem was proved in [RS] in the framework of C*-algebras.

Theorem 3 In the notation of the previous subsection, suppose that $\mathrm{graph}(\Phi)$ and $\mathrm{graph}(\Phi')$ are conjugate by an element of the group $SO(L, L^\vee)$. Then the algebras $B_\Phi$ and $B_{\Phi'}$ are Morita equivalent.
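As a quick computational companion (an illustration, not from the paper): after a choice of basis in L, recalled next, the group $SO(L, L^\vee)$ becomes a group of integer matrices preserving a split quadratic form, so membership can be tested directly. The block-unipotent example below presumably acts on $\Phi$ by an integral shift in the standard fractional-linear action.

```python
import numpy as np

def in_SO_dd_Z(g, d):
    """Test g in SO(d, d, Z): integral, det g = 1, and g^T Q g = Q for the
    hyperbolic Gram matrix Q of the form sum_i x_i * x_{i+d}."""
    Q = np.block([[np.zeros((d, d)), np.eye(d)], [np.eye(d), np.zeros((d, d))]])
    return (np.allclose(g, np.round(g))
            and np.allclose(g.T @ Q @ g, Q)
            and np.isclose(np.linalg.det(g), 1.0))

d = 2
B = np.array([[0.0, 1.0], [-1.0, 0.0]])  # integral skew-symmetric matrix
g = np.block([[np.eye(d), B], [np.zeros((d, d)), np.eye(d)]])
print(in_SO_dd_Z(g, d))  # True; such elements shift Phi -> Phi + B
```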
Choosing a basis in L one can identify the group $SO(L, L^\vee)$ with the group $SO(d, d, \mathbb{Z})$ of linear automorphisms of the vector space $\mathbb{R}^{2d}$ preserving the form $\sum_{1 \le i \le d} x_i x_{i+d}$ and the lattice $\mathbb{Z}^{2d}$. Surprisingly, the same group appears in a different problem concerning derived categories of coherent sheaves on complex abelian varieties. We recall the following result.
Let X and Y be complex abelian varieties; X̂ and Ŷ denote the dual abelian varieties. We denote by L_X and L_Y the lattices of first homologies of X and Y.
Theorem 4 The derived categories D^b(X) and D^b(Y) are equivalent if and only if the lattices L_X ⊕ L_X^∨ and L_Y ⊕ L_Y^∨ are isomorphic as odd symplectic lattices (both lattices are equipped with the canonical symmetric forms Q_X and Q_Y as above).
General algebraic scheme
We refer the reader to [KoSo2] for the background on A ∞ -categories. We are going to discuss here a general algebraic scheme which sheds some light on the similarity between the derived category of coherent sheaves on an abelian variety and the derived category of modules over the algebra of functions on a quantum torus. We will impose some restrictions on the objects. These restrictions can be relaxed. The reader should assume that "natural" conditions are imposed, so that "everything works".
Let C be an A ∞ -category over a field k of characteristic zero, such that its Hochschild cohomology HH i (C) is finite-dimensional for all i ≥ 0. In what follows we will assume that k = C.
We assume that HH^0(C) is a 1-dimensional vector space. It can be thought of as the Lie algebra of the group Aut(Id_C) of automorphisms of the identity functor. The total cohomology space carries a graded Lie algebra structure (with the Gerstenhaber bracket on the Hochschild cohomology). Let C = D^b(X), where X is a smooth complex projective variety. Then ⊕_{i≥0} HH^i(C) = ⊕_{i,j≥0} H^i(X, Λ^j T_X), where T_X = T^{1,0}_X is the holomorphic tangent bundle.
We assume that the Lie subalgebra HH^1(C) is abelian. It admits the following interpretation. Consider the group Aut(C) of automorphisms of the A∞-category C; it is the group of isomorphism classes [F] of equivalence functors F : C → C. We assume that it carries the structure of a Lie group. The Lie algebra g_C of the connected component of the unit Aut^0(C) := G_C is isomorphic to HH^1(C). Under our assumptions, the group G_C is a finite-dimensional commutative Lie group over C.
There exists a bundle P over G_C × G_C whose fiber is a C^*-torsor. Let us describe it in detail. Let F and H be two functors such that their isomorphism classes [F], [H] belong to the group G_C. According to our assumption on HH^0(C), the set of isomorphisms Iso(F, H) is either empty or a torsor over C^* = Aut(Id_C). An easy lemma shows that these sets assemble into a C^*-torsor P = P_C over G_C × G_C with fibers P_{([F],[H])} = Iso(F, H). The abelian group L_C := H_1(G_C, Z) carries a bilinear form (, ) : L_C × L_C → Z induced by c_1(P), the first Chern class of P. We will keep the same notation for the C-linear extension of the bilinear form to L_C ⊗ C.
Since G_C is a Lie group, its fundamental group is commutative, and hence it is isomorphic to H_1(G_C, Z). Let γ be a loop based at the unit e of G_C; it gives rise to a linear map p defined on L_C ⊗ C. We then have an exact sequence of linear maps in which Λ_C is defined as the kernel of p.
Conjecture 1
The symmetric bilinear form (x, y) is non-degenerate, and the subspace Λ C is maximal isotropic (i.e. Lagrangian) with respect to it.
It is not clear how to prove this conjecture for general A ∞ -categories. In two examples considered below (abelian varieties and quantum tori) the proofs are straightforward.
Assuming the conjecture, to the A∞-category C we have canonically assigned a Lagrangian subspace Λ_C in the (odd) symplectic vector space L_C ⊗ C. It is interesting to notice that these linear algebra data seem to contain certain information about "discrete" symmetries of the A∞-category. A priori this was not obvious.
Remark 1 In the case of quantum tori we will need to consider non-Hausdorff Lie groups. Then the abelian group L_C should be defined not as the first homology, but as the kernel of the exponential map exp : g_C → G_C. Notice that g_C is defined canonically: it is the Lie algebra of infinitesimal symmetries of our A∞-category. The group G_C is not defined canonically. At the level of local Lie groups one can think of G_C as the quotient of the local Lie group corresponding to g_C by the discrete subgroup consisting of those γ ∈ g_C for which exp(γ) gives rise to an automorphism of the category C isomorphic to the identity functor.
Example: abelian varieties and quantum tori
Let X be a complex abelian variety of dimension n and C = D^b(X), so that L_C ⊗ C ≃ C^{4n}. The bundle P is basically the tensor square of the Poincaré line bundle on X × X̂ (this follows from the results of Polishchuk and Orlov, see [O1]). The odd symplectic form is the canonical symmetric form on C^{4n} = C^{2n} ⊕ (C^{2n})^*. It is easy to see that HH^*(C) ≃ Λ^*(C^{2n}). The latter can also be interpreted as the algebra of functions on the odd Lagrangian subspace in C^{4n}. Thus the formal deformation theory of C as an A∞-category is the same as the deformation of the corresponding odd Lagrangian subspace as a (non-linear) Lagrangian submanifold Λ_C ⊂ C^{4n}.
For a quantum torus T(L, Φ) of dimension n one takes as C the derived category of modules of finite rank over the algebra B_Φ(L). It is expected that automorphisms of the category come from automorphisms of the algebra. Then G_C ≃ T^n/Z^n is a non-Hausdorff Lie group (it is the group of automorphisms of the algebra B_Φ(L) modulo inner automorphisms). Following the general scheme outlined above, one obtains a Lagrangian subspace (it coincides with graph(Φ)) in the odd symplectic vector space R^{2n} = R^n ⊕ (R^n)^*. For the categories discussed in this subsection the cohomology HH^0 is 1-dimensional, and HH^1 is a commutative Lie algebra. Moreover, Conjecture 1 holds in both cases.
Foliations and degenerate complex structures
Let X be an even-dimensional smooth real manifold which carries a complex structure. The latter is given by an integrable subbundle T^{0,1}X ⊂ T_X ⊗ C of anti-holomorphic directions, where T_X is the real tangent bundle of X. Suppose that we have a family of complex structures degenerating into a real foliation of rank equal to dim_C X. We will call such degenerations maximal. In fact one should consider subsheaves of the tangent sheaf T_X because the foliation can be singular.

Such a degeneration gives rise not just to a foliation F of X, but also to an isomorphism j of the vector bundles F and T_X/F. The isomorphism satisfies certain integrability conditions, which give rise to an affine structure on the fibers of F. To be more precise, the space Hom(F, T_X/F) can be interpreted as the tangent space T_F(M) to F in the "moduli space" M of all foliations on X. When T^{0,1}X and the holomorphic subbundle T^{1,0}X get close to each other, one obtains a tangent vector in the space T_F(M). Similarly, for any fiber F_y, y ∈ X of the foliation F, the vector space (T_X/F)_y can be identified with the tangent space to the leaf F_y in the "moduli space" of all leaves of the foliation F (it does not depend on a point of the leaf). It is easy to check that the integrability condition implies the following result.
Proposition 1 The isomorphism j : F → T X /F gives rise to an isomorphism of the tangent space to a leaf F y in the space of leaves, with the space of commuting vector fields on the leaf. In particular it yields an affine structure on the leaf F y .
Informally this result can be explained in the following way. The tangent space to a "degenerating" complex manifold is invariant with respect to rotation by 90 degrees (multiplication by i = √−1). Let us move a point x along some leaf F_y of the limiting foliation F. Then the rotation by 90 degrees of vectors from T_xX/F_x transforms them into F_x. On the other hand, the bundle T_X/F carries a canonical flat connection (the Bott connection). We can identify the spaces T_{X,x}/F_x for close points x using this connection. Hence a choice of commuting vectors in the space T_{X,y}/F_y gives rise to commuting vector fields along the leaf F_y.
One can show that the space of pairs (F, j), factorized by the natural action of the multiplicative group R^* (dilations of j), is generically a subspace of real codimension 1 in the space of integrable subbundles of T_C X. The latter can be considered as a subvariety of the total space of the bundle of Grassmannians Γ(X, Gr(T_C X)). In this way one gets a compactification of the universal covering of the moduli space of complex structures on X. In the case of curves the compactification adds the Thurston boundary to the Teichmüller space.
One can try to "compactify" the category of coherent sheaves on a complex manifold. The category of sheaves equipped with flat connections along the foliation, which is the maximally degenerate complex structure, serves as a "point" of the "non-commutative" stratum of the boundary. In the case of elliptic curves the compactification is compatible with the natural SL(2, Z)-action. Indeed, in both cases (complex structure given by τ ∈ H and affine foliation on T 2 given by dt = ϕdx) if the values of parameters are SL(2, Z)-conjugate, then there is an automorphism of T 2 (as a smooth manifold) which identifies the corresponding subbundles (in T C T 2 and T R T 2 respectively).
More generally, one considers the space of complex polarizations on a given compact complex manifold X. To every polarization τ one assigns the corresponding bounded derived category of coherent sheaves. The space of polarizations admits a natural compactification by real foliations, as we discussed above. The question is: what is the category which should be assigned to a foliation F which is a point of the boundary? In the case of maximal degeneration we suggest to take the (derived) category of F -local systems (see next subsection).
Remark 2 The compactification by pairs (F, j) modulo the action of R^* is not compatible with the action of the group of diffeomorphisms Diff(X). In particular, it does not descend to a compactification of the moduli space of complex structures on X. In the 1-dimensional case the action of the mapping class group does extend to the Thurston boundary of the Teichmüller space.
Foliations and "small" modules over quantum tori
Let X be a smooth manifold of dimension 2n, and let F be a foliation of X of rank n. Let W be a sheaf of C^∞_X-modules on X, where C^∞_X is the sheaf of smooth functions on X.
Definition 3 We say that W carries an F-connection if we are given a morphism of sheaves ∇ : W → F^* ⊗_{C^∞_X} W satisfying the Leibniz rule ∇(fs) = f∇(s) + d_F f ⊗ s for f ∈ C^∞_X and s ∈ W, where (d_F f)(v) = v(f) for v ∈ F (here we identify the subbundle F with the sheaf of its sections), and F^* = Hom(F, C^∞_X).
The category Sh(X, F) of sheaves of finite rank which carry an F-connection forms a tensor category. The curvature of an F-connection is defined in the usual way. Sheaves which carry F-connections with zero curvature are called F-flat. If an F-flat sheaf is locally free (i.e. it corresponds to a smooth vector bundle on X), it is called an F-local system. We denote by D^b(X, F) the bounded derived category of the category Loc(X, F) of F-local systems on X. If X carries a symplectic form ω and F is a Lagrangian foliation, then we will sometimes add ω to the notation.

Foliations described in Proposition 1 will be of the main interest for us. Here is an example. Let F be an affine foliation of rank n on the standard torus T^{2n} = R^{2n}/Z^{2n}. This means that F is defined as (V ⊕ R^{2n})/Z^{2n} for some n-dimensional vector subspace V ⊂ R^{2n}. Let us choose a subspace S such that S ⊕ V = R^{2n} and S defines a closed n-dimensional submanifold Y in T^{2n}. There is a pull-back functor from the category Loc(T^{2n}, F) to the category of vector bundles on Y. On the other hand, if (Λ, ∇) is an F-local system on X, then the holonomy of the connection ∇ defines an action of the group Z^n on the restriction Λ_{|Y}. Since Z^n acts on Y, we obtain a structure of a Z^n ⋉ C^∞(Y)-module on the space of sections Γ(Y, Λ). We can complete the cross-product algebra, thus getting the algebra of smooth functions on a quantum torus acting on Γ(Y, Λ).
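The following is a minimal numerical sketch of this crossed-product action in the simplest case: the rank-one module C^∞(T^1), with one generator acting by the holonomy (rotation by the slope ϕ) and the other by a multiplication operator. The names shift, e1, e2, phi are illustrative, and the verified relation e_1 e_2 = e^{−2πiϕ} e_2 e_1 is the one induced by the holonomy in this normalization:

```python
import numpy as np

phi = np.sqrt(2) - 1                          # irrational slope of the foliation
N = 512
x = np.linspace(0.0, 1.0, N, endpoint=False)
f = np.exp(np.cos(2 * np.pi * x))             # a sample smooth function on T^1

def shift(g, a):
    """(shift(g, a))(x) = g(x - a), computed on the Fourier side (exact on
    trigonometric polynomials, machine-precision accurate for analytic g)."""
    k = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies
    return np.fft.ifft(np.fft.fft(g) * np.exp(-2j * np.pi * k * a))

e1 = lambda g: shift(g, phi)                  # holonomy: rotation by phi
e2 = lambda g: np.exp(2j * np.pi * x) * g     # multiplication operator

# Quantum-torus commutation relation realized on the module C^inf(T^1):
lhs = e1(e2(f))
rhs = np.exp(-2j * np.pi * phi) * e2(e1(f))
assert np.allclose(lhs, rhs)                  # e1 e2 = exp(-2 pi i phi) e2 e1
```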
Definition 4 a) Let B Φ (Y ) be the algebra of smooth functions on the quantum torus described above. We call a B Φ (Y )-module small if it is projective as a module over the commutative subalgebra C ∞ (Y ).
b) More generally, let B_Φ be an algebra of smooth functions on a quantum torus T(L, Φ) such that rk(L) = 2n and Φ defines a symplectic structure on L_R = L ⊗ R. Let L_0 ⊂ L be a Lagrangian sublattice (i.e. L_0 ⊗ R is a Lagrangian subspace of L_R). We say that a B_Φ-module M is small with respect to L_0 if it is projective over the maximal commutative subalgebra of B_Φ spanned by e(λ), λ ∈ L_0.
Clearly small modules with respect to a given Lagrangian sublattice L 0 form a category B Φ (L 0 ) − mod. Let us consider an example of 2-dimensional tori. Then rk(L) = 2, rk(L 0 ) = 1. We will identify L with Z 2 and L 0 with Z ⊕ 0 ⊂ Z 2 . Let ϕ ∈ R \ Q, and F be the affine foliation dt = ϕdx in the standard coordinates in R 2 = L ⊗ R.
Proposition 2 The category B ϕ (L 0 ) − mod is equivalent to the category of F -local systems on the torus T (L) = L R /L.
Proof.
We have already constructed a functor from F-local systems to the category of small modules. Namely, every F-local system V, being restricted to the equator of T^2, gives rise to a projective module over the algebra C^∞(T^1). Since ϕ is irrational, the foliation defines an action of the group Z on T^1 = {(x mod Z, 0)}. The holonomy of the flat connection defines the structure of a small module on Γ(T^1, V_{|T^1}).
The inverse functor is constructed as follows. Let M be a small module over B_ϕ. We denote the standard generators of B_ϕ by e_1 and e_2. Then M is a projective module over the subalgebra B_ϕ(e_2) generated by e_2; thus we have a vector bundle M̄ over the circle T^1. The space of sections we need consists of elements f such that f(x + 1, t) = f(x, t) and f(x, t + 1) = e_1(f(x − ϕ, t)) (here we write formulas in coordinates (x, t) ∈ R^2 rather than in coordinates (x mod Z, t) ∈ T^1 × R). It is easy to check that they are global sections of a vector bundle V → T^2 such that for its pull-back to R^2 the fiber V_{(x,t)} is naturally isomorphic to M_x. We define an F-connection ∇_F on V by identifying infinitesimally close fibers along the leaves of F. The action of the holonomy of ∇_F (the shift by the period t = 1) is equivalent to the action of the generator e_1 on the module M. Thus the action of Z ⋉ C^∞(T^1) on {f(x, 0)} = M given by the F-local system is the same as the structure of a B_ϕ-module on M. The Proposition is proved.
Remark 3 1) In order to identify the (derived) categories of coherent sheaves on the elliptic curves E_{τ_1} and E_{τ_2} we use a bimodule, which is the sheaf of regular functions on the graph of an automorphism f : T^2 → T^2 identifying the complex structures. If p_i : E_{τ_1} × E_{τ_2} → E_{τ_i}, i = 1, 2 are the natural projections, then the equivalence functor is given by M ↦ p_{2*}(p_1^* M ⊗ O_{graph(f)}). In the case of tori with affine foliations we use basically the same description. Notice that it does not give a Morita equivalence of the corresponding quantum tori; it gives an equivalence of the categories of small modules.
2) One should notice that vector bundles with F -connections form a tensor category, while projective (or all finite) modules over quantum tori do not.
3) Small modules are similar to holonomic D-modules. Their algebraic version was studied from this point of view in [Sab].
Suppose we have a free abelian Lie group G together with a dense embedding of G into the group Aut(T^n) of affine automorphisms of the torus T^n. Then to have a vector bundle V over T^n together with a lifting of the action of G to V is the same as to have a module over the quantum torus defined as a completed cross-product of the group algebra of G and C^∞(T^n). We used this simple observation in the case when the action of G = Z was induced by an affine foliation of T^2. In general we have a functor {affine foliations} → {small modules}. Notice that for n > 1 there is no inverse to this functor. In other words, we cannot recover a foliation from a small module over the algebra B_ϕ. Indeed, we have fewer affine foliations than quantum tori. A quantum torus is given by a skew-symmetric form; hence the dimension of the moduli space of quantum tori of rank 2n is equal to n(2n − 1). At the same time the moduli space of affine foliations has dimension n^2 (every such foliation is given by the graph of a linear morphism R^n → R^n). These two numbers coincide only if n = 1.
Coherent sheaves on elliptic curves and quantum tori
Here we recall a result from [BG] and use it to provide another link between the derived category of coherent sheaves on elliptic curves and the derived category of certain modules over quantum tori. Assume that we have fixed a non-zero complex number q such that |q| < 1. Let us consider a complex algebra Ā_q which is generated by the field of Laurent formal power series C((z)) and an invertible element ξ such that ξ f(z) = f(qz) ξ for any f ∈ C((z)).
One defines an abelian category M_q as follows. Objects of M_q are Ā_q-modules M which are C((z))-modules of finite rank. In addition, it is required that there exists a free C[[z]]-submodule M_0 ⊂ M of maximal rank such that M_0 is invariant with respect to the subalgebra C[ξ, ξ^{-1}] ⊂ Ā_q. Clearly M_q is a C-linear rigid tensor category. It is proved in [BG] that M_q is equivalent to the tensor category Vect^{ss}_0(E_q) of degree zero semistable holomorphic vector bundles on the elliptic curve E_q = C^*/q^Z. Let us recall the idea of the proof. Applying the Fourier-Mukai transform to the vector bundles in question, one gets the category of sheaves with finite support. The latter category admits a description in terms of linear algebra data (a vector space equipped with an endomorphism). The same data describe objects of M_q.
If we forget about tensor structures, then the category Vect^{ss}_0(E_q) generates a subcategory D^b_{fin}(E_q) of D^b(E_q). Here we understand the word "generate" in the following sense: one is allowed to take extensions, direct summands of objects, and all shifts of an object. We will use the notation D^b(M_q) for the derived category generated by M_q. Then the previous discussion implies the following result.
Theorem The categories D^b(M_q) and D^b_{fin}(E_q) are equivalent.
One can use this theorem as a definition of D b f in (E q ) in the case when |q| = 1.
Question 2 Let |q| = 1. Is it true that D b (M q ) is equivalent to a subcategory of the derived category of small modules over the quantum torus B q ?
Question 3 Let |q| < 1. How to describe the category D b (E q ) in terms of modules over B q (or some related algebra)?
In order to answer the last question, one can consider the algebra generated by holomorphic functions on C * and shifts z → qz. The question is whether it is possible to replace the algebra of holomorphic functions on C * by something "more algebraic". Then one would have a uniform description of the derived category of coherent sheaves on an elliptic curve and its "non-commutative" degeneration.
Hopefully, the above considerations can be generalized to the case when C((z)) is replaced by a complete normed field (for example, to the p-adic case). Then it can be used as a definition of the derived category of coherent sheaves on quantum tori defined over such fields (cf. [M2]).
4 Mirror symmetry and deformation quantization
4.1 Reminder on Homological Mirror Conjecture
The Homological Mirror Conjecture (HMC) was formulated by Kontsevich in 1993. We are not going to recall all the details here (see [Ko1], [KoSo2]). HMC is a claim about the equivalence of two triangulated A∞-categories D^b_∞(X) and F(X^∨) for given mirror dual complex Calabi-Yau manifolds X and X^∨. The category F(X^∨) is the Fukaya category of X^∨. It can be defined for any symplectic manifold (M, ω). Objects of F(M) are pairs (N, L) where N is a Lagrangian submanifold of M and L is a unitary local system on N. For two objects (N_1, L_1), (N_2, L_2) such that N_1 and N_2 intersect transversally, one defines the space of morphisms as
Hom((N_1, L_1), (N_2, L_2)) = ⊕_{x ∈ N_1 ∩ N_2} Hom(L_{1x}, L_{2x}),
where L_{ix}, i = 1, 2 are the fibers of the local systems.
The structure of A ∞ -category on F (M) is given in terms of the following data: 1) A structure of complex on all spaces Hom((N 1 , L 1 ), (N 2 , L 2 )). In particular they are Z-graded complex vector spaces with the grading defined by means of the Maslov index.
2) Higher compositions m_k : Hom(X_1, X_2) ⊗ ⋯ ⊗ Hom(X_k, X_{k+1}) → Hom(X_1, X_{k+1}), which are morphisms of complexes for given objects X_i = (N_i, L_i) ∈ F(M). The composition m_k is defined in terms of the moduli space of holomorphic maps of (k + 1)-gons C_{k+1} to M such that the i-th side of C_{k+1} belongs to N_i. Thus m_1 is basically the differential in the Floer complex associated with the pair of Lagrangian submanifolds N_1 and N_2. The formulas for m_k, k ≥ 2, also involve the monodromies of the flat connections along the sides of the polygons as well as the areas of the polygons computed with respect to the symplectic form on M. There are compatibility conditions for the morphisms m_k.
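For reference, the compatibility conditions are the standard A∞-relations; in the grading convention where m_k has degree 2 − k they read (up to the usual sign conventions, which vary between sources)

\[
\sum_{r+s+t=k} (-1)^{r+st}\, m_{r+1+t}\bigl(\mathrm{id}^{\otimes r} \otimes m_s \otimes \mathrm{id}^{\otimes t}\bigr) \;=\; 0, \qquad k \ge 1.
\]

In particular m_1^2 = 0, so m_1 is a differential, and m_2 is associative up to the homotopy given by m_3.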
The category D b ∞ (X) is an A ∞ -version of the derived category of coherent sheaves on X. Its objects are bounded complexes of holomorphic vector bundles. Essentially the same A ∞ -category is given by the dg-category of dg-modules over the dg-algebra of Dolbeault forms Ω 0, * (X).
The case of elliptic curves
In the case of elliptic curves the HMC was proved in [PZ] (see also [AP]). One starts with a symplectic 2-dimensional torus (T^2, ω), where ω is a constant symplectic form. In fact one should also fix a real number B (more precisely, the cohomology class [B dx ∧ dy] ∈ H^2(T^2, R)/H^2(T^2, Z) ≃ R/Z). Then one defines the Fukaya category F(T^2) using the complexified symplectic form B dx ∧ dy + iω. All the definitions are standard, only the areas of polygons become complex numbers (see details in [PZ]). The symplectic form ω determines a unique flat metric A(dx^2 + dy^2) on T^2 such that A = ∫_{T^2} ω.
This gives a unique complex structure on T^2 (because the size of the fundamental domain of the lattice Z^2 is fixed). In this way we obtain an elliptic curve (i.e. a 1-dimensional Calabi-Yau manifold). Let X = E_τ = C^*/q^Z be this curve, q = exp(2πiτ), Im(τ) > 0. Then the dual Calabi-Yau manifold is the elliptic curve E_ρ = C^*/e^{2πiρZ}, ρ = B + iA.
The mirror symmetry functor can be described explicitly. On the symplectic side of HMC one has closed 1-dimensional submanifolds in T^2 which carry unitary (quasi-unitary in the version of HMC considered in [PZ]) local systems. On the complex side of HMC one has complexes of holomorphic vector bundles (they generate the derived category of coherent sheaves). The dictionary between the symplectic and complex sides translates standard (m, n) geodesics equipped with trivial 1-dimensional local systems into holomorphic vector bundles of rank n with first Chern class m. For general abelian varieties a version of HMC was proved in [KoSo2] using methods of Morse theory and non-archimedean analysis.
"Compactified" Homological Mirror Symmetry
Homological mirror conjecture implies that formal moduli spaces of deformations of D b ∞ (X) and F (X ∨ ) for dual Calabi-Yau manifolds are isomorphic. The tangent space to the moduli space of deformations of an A ∞ -category C is ⊕ k≥0 Ext k (Id, Id). Here one takes Ext groups in the category of functors C → C, and Id is the identity functor. In other words, the tangent space is given by the Hochschild cohomology of the category. The Yoneda product of Ext groups gives rise to the product on the tangent space.
In the case of D^b_∞(X) the tangent space is ⊕_{p,q≥0} H^p(X, Λ^q T_X). In the case of F(X^∨) it is H^*(X^∨). The product in the latter case is the small quantum cohomology product defined in terms of Gromov-Witten invariants (see [Ko1] for details). The formal moduli spaces are bases of semi-infinite variations of Hodge structures (see [B2]). The mirror symmetry functor should identify the corresponding Hodge filtrations for D^b_∞(X) and F(X^∨) (or, better, the semi-infinite variations of Hodge structures, as suggested by Barannikov, see [B3]). We mention here that a non-commutative analog of the variations of Hodge structures, with applications to mirror symmetry, was introduced in [B2]-[B3].
In the case of elliptic curves (more generally, abelian varieties) there are no Gromov-Witten invariants, but the rest of the picture is present. Then one can ask the following question.
Question 4 What happens with both sides of HMC as Im(τ ) → 0?
We have argued that the derived category of coherent sheaves on the elliptic curve E_τ degenerates into the derived category of F-local systems, where F is a foliation on T^2 (a degenerate complex structure). Equivalently, it is the derived category of the category of small modules over the algebra of functions on the corresponding quantum torus. Another way to treat this degeneration is to consider the derived category of "coherent sheaves on the non-commutative upper-half plane". This means that we start with an algebra B over C[q, q^{-1}] consisting of series f = Σ_{m,n} a_{m,n} x^m y^n, where xy = qyx, the coefficients a_{m,n} are rapidly decreasing, m, n ∈ Z, and n runs through a finite set. If |q| ≠ 1, modules of finite rank over this algebra are the same as Z-equivariant sheaves of finite rank on C^*, i.e. the same as coherent sheaves on E_q = C^*/q^Z, q = e^{2πiτ}. If |q| = 1 then every such module admits a structure of a module over the algebra B_τ (i.e. we allow infinite sums in n). Then the derived category of coherent sheaves over B can be treated as a family of derived categories which has as fibers elliptic curves and quantum tori for different values of q.

Now the SL(2, Z)-invariance of the categories is manifest. In fact one has two copies of SL(2, Z) acting on the family of categories. One copy acts by fractional linear transformations of τ. It gives rise to an isomorphism of elliptic curves if Im(τ) > 0, and to an equivalence of the categories of small modules if Im(τ) = 0. The equivalence in this case is given by a bimodule H_{τ,g(τ)}, where g ∈ SL(2, Z). One can check that H_{τ,g_1g_2(τ)} ≃ H_{g_2(τ),g_1g_2(τ)} ⊗ H_{τ,g_2(τ)}, and H_{τ,τ} corresponds to the identity functor.
Another copy of SL(2, Z) acts by automorphisms of the algebra B_q for a fixed q which is not a root of 1. Namely, x → x^a y^b, y → x^c y^d is an algebra automorphism iff ad − bc = 1. Taking a = d = 0, b = −c = 1 one gets an analog of the Fourier-Mukai transform in the case of quantum tori.
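A quick check of the automorphism condition (a standard computation; we ignore the ordering normalization of monomials, which only rescales the images by powers of q): using yx = q^{-1}xy to move all y's to the right, for X = x^a y^b and Y = x^c y^d one finds

\[
XY = x^a y^b\, x^c y^d = q^{-bc}\, x^{a+c} y^{b+d}, \qquad
YX = x^c y^d\, x^a y^b = q^{-ad}\, x^{a+c} y^{b+d},
\]

so XY = q^{ad−bc} YX, and the defining relation XY = qYX is preserved exactly when ad − bc = 1.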
We treat this picture as a non-commutative compactification of the universal covering of the moduli space of elliptic curves.
It is natural to expect that there is a similar compactification of the "symplectic" side of HMC. In the next section we are going to present some arguments in favor of the idea that the degenerate Fukaya category of the symplectic torus (T^2, (1/τ)ω) is again the category of small modules over the algebra B_ϕ, where ϕ = Im(τ), and B_ϕ is interpreted as a quantized algebra of functions on the torus.
The "compactified" HMC should be a statement about an equivalence of families of A ∞ -categories, both parametrized by τ ∈ H = {z ∈ C|Im(z) ≥ 0}. For τ ∈ H one has A ∞ -categories from the conventional HMC, and for τ ∈ R one has two equivalent categories of modules for the same quantum torus, but described in two different ways: a) as the category of F -local systems on T 2 ; b) as the category of modules over a quantized symplectic torus, which correspond to Lagrangian submanifolds.
Although the two descriptions a) and b) give rise to equivalent categories, the mirror symmetry functor produces a non-trivial identification of the Hodge filtrations on the cohomology of X = T^2. The Hodge filtrations are filtrations on the periodic cyclic homology of the A∞-categories under consideration. The periodic cyclic homology of either A∞-category is isomorphic to the total cohomology of the underlying space. The mirror symmetry functor should interchange the Hodge filtrations on periodic cyclic homology.
In the case of the 2-dimensional quantum torus T^2(Z^2, ϕ) one has two filtrations on the periodic cyclic homology (the latter is isomorphic to the cohomology of the usual torus T^2). The first filtration arises from the interpretation of the quantum torus as a quantized symplectic manifold. It has one non-trivial term, corresponding to the line spanned by the vector (1, [ω]), where 1 ∈ H^0(T^2) is the unit in cohomology and [ω] is the cohomology class of the symplectic form ω = (1/ϕ) dx ∧ dt. The second filtration arises from the interpretation of the quantum torus in terms of the foliation. The only non-trivial term of this filtration corresponds to the straight line spanned by the class of the foliation 1-form dx − ϕ dt. Thus we have filtrations in even and odd cohomology respectively. The mirror symmetry functor interchanges odd and even cohomology, interchanging also the Hodge filtrations. This picture is a "limiting" one for the homological mirror symmetry in the case of elliptic curves. Therefore the mirror symmetry functor in the case of quantum tori identifies the (derived) category of modules over the algebra B_ϕ of smooth functions on T(L, ϕ) with itself. One can say that the quantum torus is mirror dual to itself. The mirror functor acts non-trivially, interchanging odd and even cohomologies and identifying the Hodge filtrations described above (cf. the case of complex abelian varieties considered in [GLO]).
Modules over quantized algebras and Fukaya category
In order to define the Fukaya category one needs a symplectic manifold. There is a simpler abelian category depending on the symplectic structure. Let (M, ω) be a symplectic manifold. Then it admits a deformation quantization (i.e. a formal family of associative products with the prescribed first jet). The quantized algebra C^∞(M)_ℏ is not defined canonically. On the other hand, the abelian category C_M of C^∞(M)_ℏ-modules is defined canonically. Let Λ ⊂ M be a Lagrangian submanifold. Identifying a neighborhood of Λ with a neighborhood of some manifold X in T^*X, one can construct a C^∞(M)_ℏ-module W_Λ, which is canonically defined as an object of C_M. For the modules W_Λ one expects the following theorem (cf. [Gi]).
Theorem 5 Let Λ_i, i = 1, 2 be two transversal Lagrangian submanifolds in a symplectic manifold (M, ω), dim M = 2n (i.e. they are transversal at all intersection points). Then Ext^i(W_{Λ_1}, W_{Λ_2}) vanishes for i ≠ n, while Ext^n(W_{Λ_1}, W_{Λ_2}) has dimension equal to the number of intersection points of Λ_1 and Λ_2.

Let us consider the simplest case of 1-dimensional Lagrangian subspaces in the standard symplectic R^2. We will be considering an algebraic quantization, not a smooth one. Let p, q be coordinates in R^2 such that {p, q} = 1, and let A be the Weyl algebra (the standard quantization of this symplectic manifold, so that [p, q] = ℏ · 1). Let Λ_1 be the line q = 0 and Λ_2 the line p = 0. Then W_{Λ_1} = A/Aq and W_{Λ_2} = A/Ap. Clearly we have the free resolution of W_{Λ_1}
0 → A → A → W_{Λ_1} → 0,
where the first map is multiplication by q and the second map is the natural projection. Then Hom(E^•, W_{Λ_2}) gives the complex W_{Λ_2} → W_{Λ_2}, where the only map is given by multiplication by q. It is clear that there is an isomorphism of C[q]-modules W_{Λ_2} ≃ C[q]. Therefore the complex W_{Λ_2} → W_{Λ_2} has trivial cohomology H^0, and H^1 is isomorphic to C[q]/qC[q] ≃ C. Hence the only non-trivial Ext-group is Ext^1(W_{Λ_1}, W_{Λ_2}) = C. This proves the theorem in the simplest case.
Similarly one can check that for a given Λ there is an isomorphism Ext^•(W_Λ, W_Λ) ≃ H^•(Λ, C). More generally, let W_{Λ_i,L_i}, i = 1, 2 be small modules corresponding to closed Lagrangian submanifolds Λ_i which carry local systems L_i. Assume that the Lagrangian submanifolds intersect transversally. Then Ext^n(W_{Λ_1,L_1}, W_{Λ_2,L_2}) = ⊕_{x ∈ Λ_1 ∩ Λ_2} Hom((L_1)_x, (L_2)_x), and all other Ext-groups are trivial.
In the case of 2-dimensional tori there are two different interpretations of the category of modules over the corresponding quantum torus: a) as the category of small modules; b) as the category of modules over a quantized algebra.
To a Lagrangian submanifold with a local system on it, one assigns (in both cases a) and b)) an object of the corresponding category of modules. It is natural to expect that these objects correspond to each other under the equivalence of categories a) and b).
Although these observations show a certain similarity between the Fukaya category F(M) and the category C_M of modules over the quantized function algebra, there is a difference in their structures. Namely, the Maslov index is not visible in C_M, and there is no non-trivial A∞-structure on the latter category. On the other hand, the Maslov index appears in the definition of the A∞-structure on F(M).
Question 5 Is there an A ∞ -extension of the category C M which involves "graded" objects (similarly to graded Lagrangian manifolds in the construction of F (M))?
This question is interesting even in the linear case (i.e. when M is the standard symplectic R^{2n}). If the answer to the question is affirmative, then for any three Lagrangian submanifolds intersecting transversally there is a generalized Yoneda composition Ext^•(W_{Λ_1}, W_{Λ_2}) ⊗ Ext^•(W_{Λ_2}, W_{Λ_3}) → Ext^•(W_{Λ_1}, W_{Λ_3}) which involves "instanton corrections", i.e. the counting of holomorphic polygons with edges belonging to Λ_i, i = 1, 2, 3.
Fukaya category for Lagrangian foliations and Moyal formula
Fukaya suggested in [Fu1] to construct a version of the category F(T^{2n}) for Lagrangian foliations. To a foliation he assigned the algebra of the foliation; to a pair of "good" foliations he assigned a bimodule over the corresponding algebras. This bimodule is a kind of Floer complex; it is not projective over either of the algebras. Fukaya proposed an A∞-structure on the category of Lagrangian foliations. Objects of the category are Lagrangian foliations together with transversal measures. The spaces Hom(F_1, F_2) are analogs of Floer complexes constructed for "good" foliations F_i, i = 1, 2. The structure of an A∞-category is given by the "higher compositions" m_k, k ≥ 1. The composition map m_2 is defined by a formula similar to that for the Moyal *-product on a symplectic torus (see for example [We]): that formula contains a summation of exponents of symplectic areas of triangles, which makes it similar to the formula for m_2; the latter contains the exponents of the symplectic areas of holomorphic polygons. Let us recall Fukaya's formula in the case when the Lagrangian foliations are obtained from real vector subspaces in the standard symplectic vector space R^{2n}.
If one has three affine Lagrangian foliations F_i, i = 1, 2, 3, such that the Maslov index of this triple is zero, then the formula reads, schematically,
(f ∗ g)(x, c, z) = Σ_{a,b} ∫ exp(2πi Q(a, b, c, ω)) f(x, a, y) g(y, b, z) dτ_2(F_2).
The notation in the formula is explained in [Fu1]. Roughly speaking, τ_2 is a transversal measure for F_2, so it determines a section of |Λ^{top} T(T^{2n})/TF_2|. The functions f and g live on the holonomy groupoids of the pairs of foliations. The most interesting datum is the function Q, defined as the symplectic area Q(a, b, c, ω) = ∫_{Δ_{a,b,c}} ω, where Δ_{a,b,c} is the geodesic triangle with vertices a, b, c ∈ C^n. This leads to multi-dimensional theta-functions (see [Fu1]).
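For comparison, recall the Moyal product on R^{2n} in its standard asymptotic form (a textbook formula; conventions for signs and the normalization of ℏ vary between sources):

\[
(f \star g)(x) \;=\; f(x)\,\exp\!\Bigl(\tfrac{i\hbar}{2}\,\overleftarrow{\partial}_i\,\omega^{ij}\,\overrightarrow{\partial}_j\Bigr)\, g(x)
\;=\; fg \;+\; \tfrac{i\hbar}{2}\{f,g\} \;+\; O(\hbar^2).
\]

Its integral form has, up to normalization constants, a kernel which is the exponential of (a multiple of i/ℏ times) the symplectic area of the triangle with vertices x, y, z; this is the shape that Fukaya's formula above mimics.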
Question 6 Can one explain the similarity of these two formulas from the point of view of degeneration of the Fukaya category?
Apparently, the Fukaya category and the category of certain modules over a quantized algebra of functions belong to the same connected component of the "moduli space of A ∞ -categories".
5 The "large complex structure" limit The content of this subsection is borrowed from [KoSo2].
Non-commutativity can appear as a result of degeneration of the complex structure into a foliation. The conventional approach to mirror symmetry suggests considering the "large complex structure" limit, so that conjecturally Calabi-Yau manifolds become foliated by special Lagrangian tori (see [SYZ]). This gives rise to a stratum in the compactified "moduli space of conformal field theories". It is described in terms of "classical" theories. The formal neighborhood of a classical theory in the moduli space can be reconstructed from classical data by means of certain formal rules.
In this subsection we will discuss an example of this kind. For more details and applications to mirror symmetry see [KoSo2]. In [GW] a conjecture similar to ours was formulated independently; it was verified there in the case of K3 surfaces.
Here is the idea. Suppose that we study degenerations of a family of n-dimensional Calabi-Yau manifolds X_ε as ε → 0. Every X_ε carries a Calabi-Yau metric g(ε) = g_{ij}(ε), 1 ≤ i, j ≤ N. Let us rescale g(ε) in such a way that the diameter of X_ε is O(1) as ε → 0. We consider the limiting space X_0, where the limit is taken in the Gromov-Hausdorff metric.
The following conjectural description of X_0 was suggested in [KoSo2]:
1) X_0 is a metric space. It contains an n-dimensional Riemannian manifold X^{sm}_0. The dimension of X^{sing}_0 = X_0 \ X^{sm}_0 is less than or equal to two.
2) The manifold X^{sm}_0 carries an affine structure (i.e. a flat torsion-free connection on the tangent bundle T X^{sm}_0).
3) There is a covariantly flat lattice Γ ⊂ T X^{sm}_0. In affine local coordinates it is given by Z^n ⊂ R^n.
4) Let us identify X^{sm}_0 locally with R^n. Then the Riemannian metric g on X^{sm}_0 is Kähler-Einstein. This means that g = (∂^2H/∂x_i∂x_j) for some function H such that det(g) = const (the Monge-Ampère equation).
These data give rise to a fibration by flat tori over X^{sm}_0. This conjectural picture is related to mirror symmetry in the following way. Consider (locally) the graph of dH in V = R^n ⊕ R^{*n}. The latter space is a symplectic manifold equipped with two Lagrangian foliations arising from the coordinate subspaces. The graph of dH is a Lagrangian submanifold in it. Since H is defined up to the addition of an affine function, the graph itself is defined up to translations. Translated submanifolds are still Lagrangian in V, so we will not pay much attention to this ambiguity. Let p_i, i = 1, 2 be the canonical projections of V to the coordinate subspaces. Then the Monge-Ampère equation corresponds to the condition p_1^*(vol_{R^n}) = p_2^*(vol_{R^{*n}}), where vol denotes the standard volume form.
Now we see that the whole picture is symmetric, so we can interchange the dual affine structures. Then H gets replaced by its Legendre transform. This does not change X^{sm}_0 or the limiting metric g. It interchanges the dual affine structures and the dual lattices; hence it interchanges the corresponding dual fibrations by flat tori. This duality is the geometric mirror symmetry. Conjecturally, degenerations of families of dual Calabi-Yau manifolds in the limit of "large complex structure" lead to the two dual fibrations by flat tori over the same base X^{sm}_0. From the point of view of N = 2 superconformal field theory, the whole picture is classical, not quantum. It is shown in [KoSo2] how this simplifies the counting: it turns out that the counting of pseudo-holomorphic discs can be reduced to the counting of certain binary trees in X^{sm}_0. It is also explained in [KoSo2] that the geometric degeneration is compatible with certain degenerations of N = 2 superconformal field theories.
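A quick check that the Legendre transform is consistent with this picture (a standard computation; H is assumed smooth and strictly convex on the local chart):

\[
H^\vee(y) \;=\; \langle x, y \rangle - H(x)\Big|_{y = dH(x)}, \qquad
dH^\vee(y) = x, \qquad
\operatorname{Hess}_y H^\vee \;=\; \bigl(\operatorname{Hess}_x H\bigr)^{-1}.
\]

Hence det Hess H = const implies det Hess H^∨ = const: the dual potential again satisfies the Monge-Ampère equation, and the limiting metric g, rewritten in the dual affine coordinates y = dH(x), is unchanged.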
Example 2 The moduli space of conformal field theories with central charge c = 2 is a product of two modular curves. These theories can be described as sigma-models with a target space which is a 2-dimensional torus. Two separate "infinite" limits, one for each modular curve, give rise to the two moduli spaces. Maximal degenerations give rise to bundles over a circle whose fibers are themselves circles. The mirror symmetry relates two dual circle bundles over the same base.
Non-commutative stratum
As we have seen above, degenerations of the complex structure can be described in terms of either commutative or non-commutative geometry. It would be interesting to understand what kind of non-commutative theories can be obtained in this way.
Let us consider the case of K3 surfaces elliptically fibered over P^1. We can deform the complex structure on such a surface in such a way that it becomes a foliation on each non-degenerate fiber, and a singular foliation on the degenerate fibers and on the base (which is a 2-sphere S^2). It is expected that to such a geometric picture one can assign a family of "generically non-commutative" theories on quantum tori (the foliated fibers).
Question 7 How to describe these theories?
One also expects that similar non-commutative degenerations exist for arbitrary Calabi-Yau manifolds. Loosely speaking, one can make all the special Lagrangian tori in the [SYZ] picture non-commutative, thus adding a new non-commutative stratum to the moduli space of N = 2 superconformal theories.
6 Other related topics
6.1 Quantum theta functions
Let T(L, exp(2πiϕ)) be a quantum torus, ϕ ∈ R. We fix a quadratic form Q : L × L → C with negative imaginary part and a linear functional l : L → R. Let us also fix a symmetric bilinear form (,) on L_R. We write Q(x) = (Ωx, x) for some matrix Ω from the Siegel upper-half space. In particular, Ω is symmetric with respect to the bilinear form.
For any t ∈ Hom(L, C^*) we define an automorphism of T(L, exp(2πiϕ)) by the formula t_*(e(a)) = t(a)e(a).
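One natural candidate for the theta-series attached to these data, in the spirit of Manin's quantum theta functions (cf. [M2]), is the following expression; we state it only as an illustrative guess, not as the precise definition intended here:

\[
\Theta_{Q,l} \;=\; \sum_{n \in L} \exp\bigl(-\pi i\, Q(n) + 2\pi i\, l(n)\bigr)\, e(n).
\]

Since l is real and Im Q(n) < 0 grows quadratically in |n|, the coefficients satisfy |exp(−πi Q(n))| = exp(π Im Q(n)) → 0 faster than any power of |n|, so the series converges in the smooth (indeed analytic) completion of the coordinate ring.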
Proposition 4 For an arbitrary ξ as above we have:
Question 8 We plan to return to this question in the next paper. Notice that standard proofs of the functional equation for theta-functions, which use the Poisson summation formula, do not work in the quantum case.
Quasi-modular forms
There is an approach to mirror symmetry for elliptic curves due to Dijkgraaf (see [Di1]). It is the "elliptic" version of the counting of higher genus curves in a Calabi-Yau manifold. The main result of [Di1] is the statement that the generating function counting certain coverings of the elliptic curve is a quasi-modular form in the sense of Kaneko and Zagier (see [KZ]). Dijkgraaf pointed out that for a fixed modular parameter τ and fixed Kähler class t the related quantum field theory is based on a four-dimensional lattice of signature (2, 2) in C^2 (see [DVV]). Thus one has the group O(2, 2, Z) as the duality group of the quantum theory. This group contains Z_2 as a subgroup; the latter is responsible for the mirror symmetry. The theory depends on a flat metric on a 2-dimensional torus and a skew-symmetric bilinear form on the lattice defining the torus.
The partition function of the theory is quasi-modular. It can be modified, so that it enjoys modular properties, but becomes non-holomorphic.
More precisely, for any g > 1 Dijkgraaf defines a formal power series
F_g(q) = Σ_{d≥1} N_{g,d} q^d,
where q = exp(2πiρ) and N_{g,d} counts the virtual number of ramified coverings of a complex elliptic curve E by smooth complex curves of genus g. There are 2g − 2 ramification points of index 2, and the coverings have degree d. One can interpret ρ as the complexified area of the underlying torus.
The following result can be found in the cited paper by Dijkgraaf; it was rigorously proved in [KZ].
Proposition The function F_g is a quasi-modular form: a polynomial in the Eisenstein series E_2, E_4, E_6 (of weight 6g − 6).
Since E_2 is not a modular form, the same is true for F_g. But E_2 is quasi-modular in the sense of [KZ], and so is F_g.
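For reference, the failure of modularity of E_2 is the classical anomaly (a standard fact, in the normalization E_2(τ) = 1 − 24 Σ_{n≥1} σ_1(n) q^n):

\[
E_2\!\left(\frac{a\tau+b}{c\tau+d}\right) \;=\; (c\tau+d)^2 E_2(\tau) \;-\; \frac{6ic}{\pi}\,(c\tau+d),
\qquad \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL(2,\mathbb{Z}).
\]

Replacing E_2(τ) by E_2(τ) − 3/(π Im τ) restores weight-2 modularity at the price of holomorphy; this is the modification alluded to above.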
It is natural to ask about a non-commutative version of this Proposition. If it exists, then the corresponding counting function should be a kind of non-commutative limit of F_g. It is interesting to understand whether quantum tori can be related to certain degenerations of quasi-modular forms.
Informally, "boundary modular forms" should correspond to the limits of quasi-modular forms as modular parameter approaches to a real number. It is natural to ask how this limit should be understood. For example, one can consider limiting functions as distributions or hyperfunctions.
The question about such a limit might be related to a different question, about "boundary" modular forms in the sense of Zagier. In Zagier's work, rational points of the boundary line of the upper-half plane play a special role. More precisely, he constructs embeddings of various spaces of "honest" modular (or, hopefully, quasi-modular) forms into the space of "smooth functions" on Q, modulo "rational functions". The image of this embedding consists of functions having modular properties.
Then one can give a precise meaning to the following statement: the function s(x) is modular as an element of the set of functions on P 1 (Q), modulo "smooth" rational functions.
Question 9 What is the image of Dijkgraaf's partition function under the "boundary embedding"?
From the point of view of the classical theory of Eichler, Manin and Shimura, the boundary modular forms might be related to integrals of modular forms over geodesics between cusps in the Lobachevsky plane. It is natural to ask about the meaning of other geodesics. One can expect that geodesics between arbitrary boundary points should be treated within the framework of "non-commutative" geometry. Indeed, for non-cuspidal points the geodesics are dense in the corresponding modular curve.
On the Morita equivalence and deformation quantization
Hopefully the Morita equivalence of quantum tori can be generalized to other quantized function algebras. It should be a statement that quantized algebras corresponding to different Poisson structures and different values of the quantization parameter produce Morita equivalent algebras. Notice, however, that the conventional deformation theory deals with formal series in a parameter ℏ. From this point of view the transformation ℏ → −1/ℏ does not make sense. In order to allow such transformations (and we obviously need them in order to treat Morita equivalent quantum tori) we need a "global" moduli space of quantizations, not just a formal scheme over k[[ℏ]]. The problem cannot be resolved at the level of Poisson structures; it is a "quantum" problem. One can have a discrete group of symmetries of the quantum problem which does not exist at the level of Poisson structures.
Question 10
Are there examples of deformation quantization problems which model this phenomenon?
In quantum groups one often has series convergent in ℏ. Let q = exp(2πiℏ). It is interesting to look at quantized homogeneous spaces (for example quantum groups themselves) in the case |q| = 1.
For example, let us consider the categories of projective modules over the quantized coordinate rings C[G]_q, where q = e^{2πi/l}, l is a positive integer, and G is a simple complex Lie group. One can see that the quantized coordinate ring becomes a projective module of finite rank N = l^{dim(G)} over a subalgebra of the center, which is isomorphic to the algebra C[G] of functions on the Poisson-Lie group G.
Question 11 Is it true that for all primitive roots of 1 of the same order the quantized coordinate rings are Morita equivalent?
We would like to know the answer to a more general question: what is the "duality group" acting on the quantization parameter, such that if parameters q and q′ belong to the same orbit, then the corresponding algebras of functions on the quantized homogeneous G-spaces C[X]_q and C[X]_{q′} are Morita equivalent? More generally, the duality group should act on Poisson structures as well.
In the case of quantized coordinate rings of simple Lie groups at roots of unity the centers are non-isomorphic, so one cannot expect the group SL(2, Z) to be the "duality group" of the theory. If the answer to the above question is positive, then the "duality group" in question is the Galois group Gal(Q^{ab}/Q), where Q^{ab} is the maximal abelian extension of the field of rational numbers. This is a toy model for the global dualities in quantum physics. Indeed, we have local quantizations given by perturbative series, and we search for a global non-perturbative theory.
Another interesting question is to find an analog of the mirror symmetry for the deformation quantization. In the case of quantum tori we discussed it in Section 4.
Retrieval of Consolidated Spatial Memory in the Water Maze Is Correlated with Expression of pCREB and Egr1 in the Hippocampus of Aged Mice
Objective To study the relationship of the expression of phosphorylated cyclic AMP response element-binding protein (pCREB) and early growth response protein 1 (Egr1) in the hippocampus of aged mice with retrieval of consolidated spatial memory in a water maze. Methods Twenty-four aged mice were allocated into no training or probe test (naïve), no training but exposed to the same probe test (NTPRT), received training and probe test (PRT), and received training but no probe test (NPRT) groups. Twelve mice were trained in a water maze over 14 days. After the final probe trial on day 15, all mice were anesthetized and the brains were removed. pCREB immunoreactivity (pCREB-ir) and Egr1 immunoreactivity (Egr1-ir) in the hippocampal CA1 and CA3 areas were examined. Results pCREB-ir and Egr1-ir in the CA1 and CA3 areas of the NPRT and PRT groups were significantly higher than those of the naïve and NTPRT groups, and those in the PRT group were significantly higher than in the NPRT group. In all groups, pCREB-ir was significantly higher in the CA3 area compared to the CA1 area, while Egr1-ir was significantly higher in the CA1 area compared to the CA3 area. Conclusion Retrieval, as well as formation, of consolidated spatial memory in the water maze is correlated with expression of pCREB and Egr1 in the hippocampus of aged mice.
Introduction
According to their temporal stability, memories can be divided into at least 2 distinct forms: short-term memory (STM) and long-term memory (LTM). Formation of LTM is dependent on gene regulation and de novo protein synthesis that are required for synaptic enhancement [1]. The cyclic AMP response element-binding protein (CREB) is a key transcription factor that has been implicated in LTM formation across different species [2][3][4][5]. Phosphorylation/activation of the transcription factor CREB (pCREB) on Ser133 by cyclic AMP- or Ca2+-dependent protein kinase is critical for LTM consolidation [6][7][8]. Numerous studies have shown that spatial memory formation is associated with increased pCREB within the hippocampus [9][10][11], while decreased levels of CREB or disruption of CREB-dependent transcription in the dorsal hippocampus interferes with spatial memory without affecting STM [5,[12][13][14]. The early growth response protein 1 (Egr1), also known as zif268, Krox24, TZs8, Zenk, or NGFI-A [15], is an immediate early gene encoding a zinc finger transcription factor of the Egr family and is regulated by CREB activity [16].
Previous work has shown that CREB is crucial for the consolidation of long-term conditioned fear memories, but not for the encoding, storage, or retrieval of these memories [17]. The present study aimed to investigate the expression of pCREB and Egr1 in aged mice during a spatial probe test in order to evaluate retrieval of consolidated spatial memory in the Morris water maze.
Animals and Methods
This study was approved by the Committee on the Care and Use of Laboratory Animals, Zhongshan Hospital, Fudan University, Shanghai, China.
Animals
A total of 24 male mice of the C57BL/6 strain (8 months old at arrival; Shanghai Laboratory Animal Center, Chinese Academy of Sciences) were studied. Mice were housed in groups of 6 per cage. They were maintained on a 12-hour light-dark artificial cycle (lights on at 7:00 a.m.) in a temperature- (22 °C) and humidity-controlled (60%) colony room and had ad libitum access to food and water. The experiment started when the mice were 16 months old and weighed approximately 25-35 g. They were tested during the light phase between 7:00 a.m. and 7:00 p.m. C57BL/6 mice were chosen because they are one of the background strains commonly used to construct transgenic mouse models with the goal of identifying molecular mechanisms critical for learning and memory function. In the strategy preference test, C57BL/6 mice preferred a place strategy and expressed higher levels of pCREB in the hippocampus after place training [18].
Apparatus
The water maze consisted of a swimming pool based on that described by Morris [19] and adapted for mice. It was a circular tank (120 cm in diameter, 50 cm in height) filled to a depth of 30 cm with water maintained at 25 ± 1 °C. We chose a white maze to improve the image quality and to avoid the addition of white nontoxic paint or milk, which might have had an impact on the results. The pool was located in a room uniformly illuminated by a halogen lamp and equipped with various distal cues. Located inside the pool was a removable circular platform (10 cm in diameter) made of transparent Plexiglas, positioned such that its top surface was 1.0 cm below the water surface. The platform, which served as a refuge from the water, was located in the center of an arbitrarily defined quadrant of the maze.
The quadrant containing the platform was defined as the target quadrant. Data were collected using a video camera fixed to the ceiling of the room and connected to a video recorder and a video tracking system (Videotrack; Viewpoint).
Behavioral Procedures
Pretraining
Three days prior to the acquisition phase, each mouse received a pretraining session that consisted of (1) placing the mouse on the platform, where it had to stay for at least 15 s, (2) a 60-second swimming period, and (3) several trials of climbing onto the platform until each animal was able to climb without help. This nonspatial procedure was required to avoid confusion between procedural aspects of the task and subsequent spatial performance [20].
General Training Procedure
During the place navigation test, which lasted for 14 days, mice were subjected to a daily 4-trial session. Each trial consisted of releasing the mouse into the water facing the outer edge of the pool at one of the quadrants and letting the animal escape to the submerged platform before 60 s had elapsed. A trial terminated when the animal reached the platform, where it remained for 15 s. Mice that failed to find the platform within this time limit were placed onto the platform by the experimenter and had to stay there for 15 s before being removed and placed back in their home cage for an equal intertrial interval. The releasing point differed in each trial, and different sequences of releasing points were used from day to day. On the 4th, 7th, 11th, and 14th day, the animals underwent probe trials at 2 different releasing points, replacing the 4-trial session; a probe trial consisted of letting the mouse swim in the pool for a fixed duration (60 s) while the platform was removed. Animal movements were recorded using the video tracking system, and the computer data were processed using Excel. This processing allowed us to calculate the escape latency (time required to find the platform, in seconds) and the percentage of time spent in the target quadrant; a percentage above 30% was taken to indicate acquisition of spatial reference memory. On day 14, all 12 aged mice were familiar with the maze, and they were randomized into the no probe test (NPRT) group and the probe test (PRT) group. On day 15, the PRT group was subjected to a probe test while the NPRT group was not. Mice of the NTPRT group (no training but exposed to the same probe trial experience; n = 6) were submitted to a probe test after a 3-day pretraining session and sacrificed together with the PRT and NPRT groups. The undisturbed mice of the naïve group (n = 6) were handled similarly to the other animals prior to the acquisition phase and taken out of their home cages to be sacrificed together with the PRT, NPRT and NTPRT groups.
All behavioral measures up to day 14 were analyzed by repeated measures analysis of variance (ANOVA), and measures of the NPRT and PRT groups were compared using the independent samples t test (SPSS Statistics 17.0), with p < 0.05 considered statistically significant.
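For readers without SPSS, here is a minimal sketch of the day-14 group comparison in Python; scipy is a stand-in for the SPSS procedure, and the latency array is illustrative random data, not the measured values:

```python
import numpy as np
from scipy import stats

# Illustrative escape-latency data (seconds): rows = 12 trained mice, columns = 14 days.
rng = np.random.default_rng(0)
latency = rng.normal(loc=30.0, scale=5.0, size=(12, 14))

# Independent-samples t test comparing NPRT (first 6 mice) vs. PRT (last 6) on day 14.
nprt_day14, prt_day14 = latency[:6, 13], latency[6:, 13]
t, p = stats.ttest_ind(nprt_day14, prt_day14)
print(f"day-14 NPRT vs PRT: t = {t:.3f}, p = {p:.3f}")   # significant if p < 0.05
```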
Immunohistochemical Procedures
On day 15, mice were deeply anesthetized at 15 min after the probe trial and perfused transcardially with ice-cold phosphate-buffered solution (PBS) and 4% paraformaldehyde. The brains were removed, postfixed overnight in 4% paraformaldehyde/PBS, and placed in 30% sucrose until they sank. Coronal sections (30 μm) were cut on a microtome and collected in PBS.
Immunofluorescence
Blocking of nonspecific epitopes was performed with 3% serum and 0.1% bovine serum albumin in PBS with 0.3% Triton X for 30 min at room temperature, followed by incubation in the primary antibody (phospho-CREB rabbit mAb, 1:200, or Egr1 rabbit mAb, 1:200; Cell Signaling). Sections were incubated for 48 h at 4 °C, then washed in PBS and incubated for 2 h with a 1:200 dilution of Cy3 goat anti-rabbit IgG antisera (Jackson). Nuclear counterstaining was performed with 4′,6-diamidino-2-phenylindole (DAPI; Sigma) for 30 min at room temperature before 3 washes with dH2O for 1 min each. Afterwards the sections were coverslipped.
Quantification and Analysis of Immunohistochemical Data
Quantitative analysis was performed using an imaging analysis system (Leica QWin). The experimenter was blind to the experimental groups of the examined mice. For all analyses of regions, counts were taken bilaterally with a 20× objective from at least 5 consecutive sections per animal, and the grey value of positive nuclei per square micrometer was averaged to produce a mean per animal. Quantification of pCREB- and Egr1-immunoreactive neurons was performed in the CA1 and CA3 areas of the dorsal hippocampus. Immunohistochemical data were expressed as mean ± SEM and analyzed by ANOVA (4 groups) and the paired-samples t test (CA1 vs. CA3; SPSS Statistics 17.0). Significance of results was accepted at p < 0.05.
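The quantification pipeline described above can be summarized in a short sketch; the function names and input structure are assumptions, not the Leica QWin workflow itself.

import numpy as np
from scipy import stats

def per_animal_mean(section_densities):
    """Average density (grey value of positive nuclei per square
    micrometer) over the >= 5 consecutive sections of one animal."""
    return float(np.mean(section_densities))

def group_summary(animal_means):
    """Group mean +/- SEM, the form in which the data are reported."""
    a = np.asarray(animal_means, dtype=float)
    return a.mean(), stats.sem(a)

def ca1_vs_ca3(ca1_means, ca3_means):
    """Paired t test comparing CA1 and CA3 within the same animals;
    significance accepted at p < 0.05."""
    return stats.ttest_rel(ca1_means, ca3_means)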
Behavioral Results
The training protocol led to a well-consolidated spatial memory of the platform location. Figure 1 shows the performance of the mice from day 1 to day 14. The mice progressively learned the task, as indicated by decreasing latencies from day 1 to day 13 and increasing percentages from day 4 to day 14. This was confirmed by analyses of variance performed on data from the whole training period (day 1 to day 14), which yielded a main effect of time (F(9, 126) = 22.08, p < 0.0001; fig. 1a) and a main effect on day 12 (F = 152.41, p < 0.0001). When performance from day 4 to day 14 was analyzed, statistical analyses showed a significant effect of time (F(3, 39) = 11.32, p < 0.0001; fig. 1b), preceding the asymptotic level of performance reached at day 11 (data not shown). Memory of the platform location was assessed with the probe trials on days 4, 7, 11, and 14. The percentages of time spent in the target quadrant relative to the opposite quadrant (t = 229.448, p < 0.0001; fig. 1c) revealed strong retention of the platform location. There was no difference in day-14 performance between the NPRT and PRT groups (t = 0.075, p = 0.789; fig. 1d). There was no thigmotaxis or floating behavior during training, and statistical analyses of the mean swim speed during training did not reveal any significant group effect (F < 1, p > 0.3).
Immunohistochemical Results
Can the probe test trigger the expression of pCREB and Egr1 in the hippocampus of mice? To address this question, we compared the levels of pCREB and Egr1 in the naïve, NTPRT, NPRT and PRT groups. All mice were sacrificed 15 min after the final probe trial on day 15 (fig. 2). Activation of the ERK/CREB pathway in the CA1 area has been reported at 0-15 min after paired conditioning, but at 0-1 and 9-12 h after unpaired conditioning, as revealed by immunocytochemistry and Western blotting. Egr1 protein induction reaches its maximum levels approximately 1-2 h after various types of stimulation [21,22], and a biphasic pattern of CREB phosphorylation (at 15 min and 9 h) was found in the hippocampal CA1 area, whereas a monophasic pattern (15 min) was detected in the CA3 area after the final probe trial on day 9 [23]. We therefore selected the time point of 15 min after the final probe test on day 15 to sacrifice the mice. The results showed that the levels of pCREB immunoreactivity (pCREB-ir; fig. 2a, c) and Egr1 immunoreactivity (Egr1-ir; fig. 2b, c) in the CA1 and CA3 areas of the NPRT group were significantly lower than those of the PRT group (all p values < 0.05). The pCREB-ir and Egr1-ir levels in both the CA1 and CA3 areas of the NPRT and PRT groups were significantly higher than those of the naïve and NTPRT controls (all p values < 0.05), and there was no difference between the naïve and NTPRT groups. The level of pCREB-ir (fig. 2a, c) was significantly higher in the CA3 area compared to the CA1 area (all p values < 0.05), while the level of Egr1-ir (fig. 2b, c) was significantly higher in the CA1 area compared to the CA3 area (all p values < 0.05).
Discussion
The present study demonstrated an increased expression of pCREB and Egr1 in the dorsal hippocampus during a spatial probe test used to evaluate retrieval of consolidated spatial memory. In this study, all mice in the PRT and NPRT groups acquired spatial memory and did not differ in the acquisition of the platform location in the Morris water maze by the end of training. Therefore, we can conclude that in the PRT group, the increase of pCREB and Egr1 in the hippocampus is due to the probe test rather than enhanced learning. However, expression of pCREB and Egr1 in the NTPRT group is significantly lower than in the PRT group. Our findings are supported by a previous study, which found that strong CA1 CREB phosphorylation was observed immediately after training irrespective of acquisition of the behavior; in contrast, at 15 min after training, the changes in the CA1 CREB phosphorylation state were specifically related to the individual behavioral performance [23].

Fig. 1. Acquisition of the spatial reference memory task. a Mice acquired the reference memory task, as expressed by the mean escape latency across training sessions. b, c Memory of the platform location was assessed by probe tests on days 4, 7, 11, and 14; the percentages of time spent in the target quadrant reveal strong retention of the platform location. d No difference was observed in the percentages of time spent in the target quadrant between the PRT and NPRT groups. Data are mean ± SEM; n = 6 for the NPRT and PRT groups. Behavioral data of both groups on day 14 were analyzed using the independent-samples t test (p = 0.982); other data were analyzed using a repeated-measures ANOVA (* p < 0.0001).
Numerous studies have reported that acquisition of spatial learning is associated with a progressive increase in pCREB in the hippocampus [11, 23-25], and place learning is facilitated by expression of CREB in the dorsal hippocampus [7]. Overexpression of dCREB2-α, the Drosophila equivalent of mammalian CREB, enhanced the formation of long-term olfactory learning [26]. Blockade of ERK1/2 activation disrupted retention of unpaired and paired conditioning but did not affect STM [8]. Since CREB is a downstream target of ERK1/2, inhibition of ERK1/2 activation led to the blockade of CREB activation [27,28]. In addition, overexpression of dominant-negative CREB in the dorsal hippocampus eliminated spatial bias in the water maze [12], and overexpression of mutant CREB blocked LTM, but not STM, for a socially transmitted food preference [29]. Costa-Mattioli et al. [30] examined the effects of forskolin or 4 trains at 100 Hz [both induce late long-term potentiation (LTP) and stimulate CREB-mediated gene expression] in wild-type and GCN2−/− slices. Both protocols showed that ATF4 was increased, whereas CREB activity and expression of the CREB-regulated immediate early gene Egr-1 were decreased in the GCN2−/− mice, which was associated with a specific impairment of hippocampus-dependent learning and memory, including contextual fear conditioning and spatial memory tested in the Morris water maze after more intense training. CREB inactivation at approximately 4 h after training blocked memory storage of different tasks [5,12] and also LTP late maintenance (i.e. persistence beyond 2-4 h) [31,32]. The study of Guzowski and McGaugh [5] showed that pretraining intrahippocampal infusion of CREB antisense oligodeoxynucleotides disrupts CREB protein levels, and this disruption impairs retention in rats trained in the Morris water maze, suggesting the importance of CREB in the consolidation of memory processes. The activation of Egr1 is necessary for the generation of late LTP or for the formation of LTM [33]. Genetic studies in mice have also supported the conclusion that Egr1 is critical for memory formation: Egr1 knockout mice show impaired LTM but intact STM, including spatial learning [15].
We used immunohistochemistry to investigate whether probe trials induce expression of pCREB and Egr1 in the dorsal hippocampus (both in the CA1 and CA3 areas). We chose the hippocampus because it has been demonstrated that the CA3 network can support a recall mechanism named pattern completion [34] , which could be involved in some instances of contextual fear recall [35] . Our results show that expression of Egr1 is significantly higher in the CA1 area compared to the CA3 area, while pCREB is significantly higher in the CA3 area. An in situ hybridization study [36] demonstrated that basal expression of Egr1, without intentional neuronal stimulation, is strongest in the CA1 area but very low in the dentate gyrus. Our findings are consistent with the view that has emerged from a previous study in contextual fear memory, which reports that contextual information is rapidly processed in the autoassociative CA3 network and sent to the CA1 network to be stored ultimately in the neocortex [37] . Our results are also strengthened by previous work showing that CA3 NMDA receptors are crucial for rapid hippocampal encoding of unique events [38] . The levels and activities of CREB in the neuron might differ dramatically during acquisition and retrieval. The current results show that the expression of pCREB and Egr1 increases in the hippocampus during probe test after retrieval of consolidated spatial memory. Several studies have shown that the expression of Egr1 increases in several corticolimbic brain structures after retrieval of consolidated fear memories [39,40] . Nevertheless, lesion studies in contextual fear memory did not show any disruption of contextual fear retrieval following inactivation of the dorsal hippocampal CA1 or CA3 subregions during a contextual memory test [37] .
The molecular events underlying learning and memory have increasingly become an area of intense interest. The present findings provide evidence for spatial probe test-induced changes in pCREB and Egr1 in the hippocampus of aged mice. The findings of this and other studies [41] support the hypothesis that expression of pCREB and Egr1 in the hippocampus is a crucial step for spatial memory reactivation. Hippocampus-dependent memory impairments are prevalent in both aged humans and rodents [42]. Based on these findings, we are currently carrying out further experiments to examine whether retrieval is impaired when pCREB and Egr1 expression is pharmacologically abolished, and to determine whether there is a causal relationship between pCREB and Egr1 expression and retrieval of spatial memory.
Azapentalenes. XLIV. 1H and 13C-NMR study of mesoionic pyrazolo[1,2-a]pyrazoles*
The 1H and 13C chemical shifts as well as the 1H-1H and 1H-13C coupling constants of sixteen pyrazolo[1,2-a]pyrazole derivatives have been measured. The most relevant features are discussed using resonance forms and simple additive models. AM1 semi-empirical calculations have been carried out to provide a rationale for the NMR results.
These compounds were discovered simultaneously by Solomons [6-9] and by Trofimenko [10,11], who described their preparation, chemical reactivity and some spectroscopic properties. Relevant for the present study are two of their conclusions: i) Positive and negative charges are in general delocalized in the aromatic system 3; however, when one of the two pyrazole rings bears substituents able to delocalize the negative charge, such as COMe, COPh or CN, the predominant resonance form is best represented by formula 4 ('charge-fixed' structure [11]). ii) In 1H-NMR, for non-'charge-fixed' compounds, the 3J 'ortho' coupling constant has a value of 2.6 Hz [8-11], larger than in neutral pyrazoles [12] but similar to that found in pyrazolium cations [13]. Moreover, there is a 6J coupling constant of 1.1 Hz between H2 and H5 (see formula 5, Fig. 2) [10,11].
Although derivatives of this ring system continue to be studied (mainly for their chemical and biological properties) [14,15], no further NMR studies have appeared since 1966 [9,11]. We report in this paper the 1H and 13C-NMR spectroscopy of nine 3a,6a-diazapentalenes 6-14 and two precursors in their syntheses, 15 and 16 (Table 1).
Experimental
All the compounds discussed here have already been described [10,11]. 1H and 13C-NMR spectra were obtained using a Bruker AC-200 instrument. The chemical shifts are accurate to 0.01 and 0.1 ppm for 1H and 13C-NMR, respectively. Coupling constants are accurate to 0.2 Hz for 1H-NMR and 0.5 Hz for 13C-NMR. A series of 2D experiments [16] were used to assign the 1H and 13C signals of compound 14.
NMR results and discussion
The spectroscopic data are reported in Tables 2 (1H) and 3 (13C). The assignment was straightforward; only in the case of compound 14 were 2D experiments necessary to confirm the assignment.
The most interesting results of Table 2 are the 3J and 6J coupling constants. The greater accuracy of modern measurements does not change the old results: since all compounds 6-13 contain COMe, COPh or CN groups ('charge-fixed' structures), 3J = 2.8-2.9 Hz coupling constants were measured. In compound 14, 3J(1,2) = 4.1 Hz (this compound had not been studied previously, although it was also classified as 'charge fixed' [11]).
We have collected in Fig. 3 the information available on 3J in pyrazoles: the value for NH-pyrazoles is an average between 3J(3,4) and 3J(4,5) due to prototropy; it is clear, nevertheless, that a positive charge results in an important increase in the value of the coupling constant. In mesoionic compounds, the value is only 2.6 Hz, and this value increases to about 2.85 Hz when the positive charge is localized in one moiety. Compound 14 has a very large 3J coupling constant, which corresponds to a non-aromatic derivative: in Δ3-pyrazolines an equivalent 3J coupling constant of 4.0 Hz has been measured [17].
Derivatives 15 and 16 are classical 1,2-disubstituted pyrazolium compounds: the 1H-NMR of compounds in which the pyrazolium ring nitrogens are linked by a trimethylene chain has been described [11,13]. The analysis of the multiplets corresponding to the AA'BB'C system of the diazacyclopentene ring, in first-order approximation, yields the following values: a geminal coupling constant JAB = −12.7 to −13.9 Hz and two vicinal coupling constants, JBC = 4.5-5.6 Hz and JAC = 0-2.0 Hz. Molecular mechanics modelling and a Karplus-type relationship indicate an envelope conformation with a pseudo-axial position of the bromine substituent (the JBC coupling corresponds to a dihedral angle of 30° and the JAC coupling to a dihedral angle of 95°).
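For reference, a Karplus-type relationship expresses a vicinal coupling constant as a function of the H-C-C-H dihedral angle θ; the generic form is

3J(θ) = A·cos²θ + B·cosθ + C,

where A, B and C are empirical, parameterization-dependent coefficients (no specific values are implied here). Since cos²(30°) = 0.75 while cos²(95°) ≈ 0.008, any such parameterization predicts a much larger coupling for the 30° dihedral than for the 95° one, consistent with JBC = 4.5-5.6 Hz versus JAC = 0-2.0 Hz.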
Since there is no previous report on 13C-NMR spectroscopy of these compounds, the results of Table 3 deserve some comments. The CH, =C(CN)2 and (CN)2 carbon atoms of the [CH=C(CN)2]− group in compound 14 appear at 126.7, 41.8 and 115.8/117.0 ppm; in dicyanomethylylides they appear at 148, 46 and 116/117 ppm, respectively [18]. The double-bond character of the central bond is responsible for the anisochrony of the CN carbon atoms in both cases.
The chemical shifts of compounds 6-13 can be discussed assuming the additivity of substituent effects and considering compound 6 as an internal reference. A multiregression treatment leads to the following conclusions: on carbons C1 and C3, the only important effect is produced by the replacement of 1,3-( ... ). Concerning the substituent effects on the same pyrazole ring, they are similar to those observed for neutral pyrazoles; for instance, the ipso-bromine effect of −12.4 ppm on C2 corresponds to the same effect on the C4-pyrazole atom (−12.5 ppm) [19]. On the other hand, the substituent chemical shifts (SCS) on the other moiety are characteristic of this family of pyrazolo[1,2-a]pyrazoles, showing the sensitivity of the whole mesoionic structure to electron redistribution and, at the same time, verifying the consistency of the assignments (the effects have been calculated by multiregression with r² = 0.998).
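A sketch of the additive substituent-effect (SCS) analysis follows: each observed 13C shift is modeled as the shift of the internal reference plus one additive increment per substituent, fitted by least squares. All compounds, labels and shift values below are placeholders, not the paper's data.

import numpy as np

substituents = ["COMe", "COPh", "CN", "Br"]

# rows: compounds; columns: 1 if the substituent is present, else 0
X = np.array([
    [0, 0, 0, 0],   # internal reference compound
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
], dtype=float)
delta = np.array([150.0, 148.1, 147.4, 146.2, 135.7, 133.5])  # ppm, placeholders

A = np.column_stack([np.ones(len(X)), X])      # intercept = reference shift
coef, *_ = np.linalg.lstsq(A, delta, rcond=None)
pred = A @ coef

r2 = 1 - np.sum((delta - pred) ** 2) / np.sum((delta - delta.mean()) ** 2)
for name, scs in zip(substituents, coef[1:]):
    print(f"SCS({name}) = {scs:+.1f} ppm")
print(f"reference shift = {coef[0]:.1f} ppm, r^2 = {r2:.3f}")  # paper: r^2 = 0.998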
AM1 semiempirical calculations
To obtain a better description of the systems discussed in this paper, we have carried out AM1 calculations [20] on two representative compounds, 12 and 14, and on three model compounds, 17-19 (Fig. 4). All the structures have been optimized using the original AM1 parametrization as implemented in the MOPAC 6.0 package [21].
Since there is no X-ray structure of any member of this family of azapentalenes, we report in Fig. 5 the AM1-optimized geometries (assuming planarity; distances in Å, angles in degrees) of the parent compound 17 and the dicyano derivative 12. The introduction of two cyano groups significantly alters the geometry of both rings: the C3-N3a (N6a-C1) bond length increases while the N3a-C4 (C6-N6a) bond length decreases.
For symmetry reasons there cannot be charge fixation in compounds 17 and 18; on the other hand, compound 19 was selected as a model of the pyrazolium ring (the substituents present in compounds 15 and 16 are not necessary for modelling purposes). Comparison of the total charges in compounds 17 and 12 shows the perturbation of the electronic distribution (Fig. 6).
When the tetracyano and dicyano derivatives 18 and 12 are compared, only the latter has a dipole moment (3.24 D), directed from the positive lower part to the negative upper part along the C5-C2 axis. If the sum of the total charges for a pyrazole ring (17: −0.5184; 12: −0.3647; 19: −0.0876 electrons) is correlated with the 3J(1H-1H) coupling constants reported in Fig. 3, an equation is obtained. This equation corresponds to the fact that the more positive charge the five-membered ring bears, the larger 3J(1H-1H) is. δ13C and 1J(1H-13C) values are also dependent on the charge distribution.
According to AM1 calculations, compound 14 is planar with an E conformation (the =C(CN)2 group directed towards H2; the Z conformer lies 4.0 kcal mol−1 higher). In this case, the very large 3J(1H-1H) = 4.1 Hz is due to the olefinic character of the C1-C2 bond (bond order = 1.57, to be compared with 1.41-1.42 for compounds 17 and 19). The resonance form that contributes most to the system is represented in Fig. 7, with a double bond between C7 and C8 (1.364 Å, bond order 1.63), in agreement with the 13C-NMR results. The large calculated dipole moment, 8.63 D, for compound 14 reflects its 'charge-fixed' structure.
We have calculated, within the same AM1 approximation, compound 20 (which, like cyclooctatetraene, is a tub-shaped structure) [22]. The resonance energies are −222.60 and −213.12 eV, respectively, i.e. 219 kcal mol−1 more favorable for the aromatic structure 17 than for the antiaromatic one 20.
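As a consistency check on the quoted figure, the unit conversion (1 eV ≈ 23.06 kcal mol−1) gives ΔE = 222.60 − 213.12 = 9.48 eV, and 9.48 × 23.06 ≈ 218.6 ≈ 219 kcal mol−1, in agreement with the value stated above.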
Table 1. Structure of the different compounds studied
Calcium Signaling through the β2-Cytoplasmic Domain of LFA-1 Requires Intracellular Elements of the T Cell Receptor Complex*
The β2 integrin LFA-1 is an important cell-cell adhesion receptor of the immune system. Evidence suggests that the molecule also participates in signaling and co-stimulatory function. We show here that clustering of the intracellular domain of the β2 chain but not of the αL- or β1-cytoplasmic domains, respectively, triggers intracellular Ca2+ mobilization in Jurkat cells. A β2-specific NPXF motif, located in the C-terminal portion of the β2 tail, is required for Ca2+ signaling, and we show that this motif is important for the induction of allo-specific target cell lysis by cytotoxic T cells in vitro. Significantly, the Ca2+-signaling capacity of the β2 integrin is abrogated in T cells that do not express the T cell receptor but may be reconstituted by co-expression of the T cell receptor-ζ chain. Our data suggest a specific function of the cytoplasmic domain of the β2 integrin chain in T cell signaling.
Activation of a T cell by cognate antigen is a complex series of events of which specificity and efficiency are critical for the development and maintenance of adaptive vertebrate immunity. The final outcome of T cell activation, i.e. triggering of effector functions or induction of cytokine gene expression, is dependent on the precise orchestration of plasma membrane proximal events, both at the level of the involved receptors and the subsequent cytoplasmic signal transduction cascades.
The specificity of T cell activation relies on the interaction of an idiotypic TCR with antigenic ligand on an antigen-presenting cell (APC). The activated TCR in turn couples to an intracellular signal transduction apparatus, which is predominantly based on specific tyrosine phosphorylation events (1,2). After phosphorylation of so-called immunoreceptor tyrosine-based activation motifs (ITAMs), present in the intracellular elements of the TCR-associated ζ chains or the CD3 complex (3,4), a cross-talk of non-receptor tyrosine kinases of the Src and Syk/ZAP-70 families is initiated (5,6) that relays signals between the TCR and distal functions (i.e. cytokine promoter activation) (1,7). Specific hematopoietic adaptor proteins play important roles in this information flow.
Signal transduction from the TCR, however, is not sufficient to fully activate T cells. It has become evident that so-called accessory or co-stimulatory receptors, e.g. CD28, expressed at the surface of the T cell, are important determinants of this process. Co-stimulatory interactions between other T cell surface proteins and their self-ligands on APC have been hypothesized to deliver a qualitatively different "signal 2" (as to distinguish it from "signal 1" triggered by the TCR) (8,9) and to influence cell-cell tethering (10).
LFA-1 (αLβ2, CD11a/CD18) belongs to a family of heterodimeric cell surface proteins termed β2 integrins, which have primarily been shown to play important roles in T cell adhesion to both endothelial cells and APC (11,12). Recently, LFA-1 has also been implicated in signal transduction (13-19). In one study it was shown that the interaction of LFA-1 with its APC ligand ICAM-1 was required for potentiation of Ca2+ signaling by the TCR but was dispensable for T cell adhesion and spreading to major histocompatibility complex-antigen complexes embedded in lipid membranes (15). It was, therefore, suggested that the signaling function of LFA-1 might even be more important for T cell activation than its adhesive properties. On the other hand, Zuckerman et al. (16) demonstrated that although LFA-1 cooperated initially with the TCR to induce proliferation of naive T cells, this signal led to apoptotic cell death after prolonged incubation periods. Finally, recent studies indicate that after stimulation with specific antigen, both the TCR and LFA-1 undergo specific reorientation and reorganization processes at the plasma membrane, forming so-called supramolecular activation clusters (20). A different study documented similar structures, which were referred to as the immunological synapse (21,22). Supramolecular activation clusters also include other co-stimulatory receptors as well as intracellular signaling molecules, such as protein kinase C (20).
All these observations are consistent with the view that LFA-1 may not deliver a second signal but rather could aid in amplifying the TCR-dependent signal 1. However, its mode of action in this function is obscure. Here we use a chimeric receptor approach to investigate the formal requirements of LFA-1 cytoplasmic domain elements in T cell signal transduction.
EXPERIMENTAL PROCEDURES
Cell Lines and Antibodies-TAg-Jurkat cells and Jurkat J.RT3-T3.5 T cells deficient in surface expression of the TCR were maintained in RPMI 1640 supplemented with 10% fetal calf serum and 10 μg/ml gentamicin sulfate. The following antibodies were used in this study: antigen affinity-purified goat anti-human IgG, Fc-γ fragment-specific polyclonal antibody (Jackson ImmunoResearch, West Grove, PA), monoclonal anti-CD4 antibody MT-151 (kindly provided by Peter Rieber, University of Dresden), anti-phosphotyrosine antibody 4G10 (Upstate Biotechnology, Inc., Lake Placid, NY), and anti-LFA-1 antibody MEM-95 (kindly provided by Vaclav Horejsi, University of Prague, Czech Republic). All secondary anti-IgG reagents were purchased from Jackson ImmunoResearch.
Western Blot Analysis-To verify protein levels of the respective sIg chimeras, transfected TAg-Jurkat cells were lysed by adding SDS to a final concentration of 1%. After adding 3× loading buffer, samples were boiled for 5 min, separated by SDS-polyacrylamide gel electrophoresis and transferred onto nitrocellulose membranes. Immunodetection was performed using horseradish peroxidase-conjugated secondary antibodies and chemiluminescence detection (PerkinElmer Life Sciences). Alternatively, analysis of protein tyrosine phosphorylation was performed by incubating transfected J.RT3-T3.5 T cells with anti-human IgG antibody at 2 μg of antibody/6 × 10⁶ cells for 5 min in Hanks' buffered saline solution at 37 °C before lysis with radioimmune precipitation buffer containing 1 mM Na3VO4. Tyrosine-phosphorylated proteins were visualized by Western blot analysis using the 4G10 monoclonal antibody.
Intracellular Calcium Measurement-Calcium mobilization analysis was performed as described before (23). Briefly, TAg-Jurkat and J.RT3- [...]

Cell-mediated Lympholysis-CTL clone 234 was infected with recombinant vaccinia viruses and incubated for 4 h. Cells were harvested and subsequently incubated with target cells (BW-LCL, HLA-A24+ or DS-LCL, HLA-A24−). Cell-mediated lysis was quantified with the help of a standard 4-h chromium-51 release assay as described (25). Spontaneous release was determined by incubating target cells alone in complete medium. Total release was determined by directly counting an aliquot of labeled cells. The percent cytotoxicity was calculated according to the formula: % lysis = (experimental cpm − spontaneous cpm)/(total cpm − spontaneous cpm) × 100. Duplicate measurements of three to six step titrations of effector cells were used for all experiments. To evaluate the influence of LFA-1 on cytotoxicity, CD18-specific monoclonal antibody was incubated with clone 234 at room temperature for 30 min before the addition of target cells. The anti-CD18 monoclonal antibody MEM-95 (ascites) was used at a dilution of 1:100. After preincubation with the antibody, the standard 4-h chromium release assay was performed.
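The cytotoxicity formula above is simple enough to express directly; the numeric values in the example are illustrative only, not data from the paper.

def percent_lysis(experimental_cpm, spontaneous_cpm, total_cpm):
    """Specific lysis from a standard 51Cr-release assay:
    % lysis = (experimental - spontaneous) / (total - spontaneous) x 100."""
    return (experimental_cpm - spontaneous_cpm) / (total_cpm - spontaneous_cpm) * 100.0

print(f"{percent_lysis(1800.0, 400.0, 4400.0):.1f} % specific lysis")  # -> 35.0 %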
Interleukin-2 Promoter Reporter Assays-Procedures for transient transfection of TAg-Jurkat cells and measurement of luciferase activity were described before (23). Briefly, 10 μg of pIL2-GL2 reporter plasmid and 25 μg of the respective sIg constructs were cotransfected by electroporation. After 20 h, the respective samples were stimulated for 8 h as indicated with calcium ionophore A23187 (0.5 μg/ml) or phorbol 12-myristate 13-acetate (50 ng/ml), or by the addition of cross-linking anti-human Ig antibody (5 μg/ml), or were left untreated. This was followed by the addition of reporter lysis buffer (Promega) and scintillation counting.
RESULTS
Integrin adhesion receptors do not bind constitutively to their ligands. A large body of data indicates that the propensity of integrin molecules to interact with cell surface receptors or extracellular matrix ligands is regulated by intracellular signaling events that in turn are triggered by "activating" receptor/ligand interactions (integrin avidity regulation, "inside-out" signal transduction) (11, 12, 26 -29). One example is the aforementioned TCR, the activation of which triggers enhanced binding of LFA-1 to its ligand ICAM-1 on an APC (30). However, this property of the molecule complicates the study of LFA-1-dependent signaling functions because the integrin-mediated signal will be perturbed by the activation stimulus required for the engagement of LFA-1 with its ligand. We hypothesized that clustering of chimeric single-chain receptors bearing isolated integrin cytoplasmic domains may be sufficient to induce signaling events that would, under physiological conditions, emanate from these elements in the context of the much more complex T cell activation scenario. First, fusion proteins have been employed very successfully in delineating signal transduction events triggered by the complex T cell or B cell antigen receptors (4,5,31,32). Second, strong evidence supports the notion that integrin-dependent signal transduction normally follows receptor aggregation or clustering. This not only holds true for T cells, as evidenced by the observed distribution of LFA-1 in supramolecular activation clusters (20), but also is valid for non-hematopoietic cells adhering to extracellular matrix components in which integrins signal through the formation of higher order protein complexes (33, 34).
FIG. 3. Characterization of sIg-CD18-mediated signal transduction in TAg-Jurkat cells.
Calcium mobilization by clustered sIg chimeras was measured as described above but was specifically tested in the presence of Src kinase inhibitor PP2 (a and b) or in the absence of extracellular calcium (c). d, interleukin-2 (IL-2) promoter-dependent luciferase induction by sIg-CD18 chimera was measured as described under "Experimental Procedures." Luciferase activity of individual samples was normalized against the maximal induction obtained by simultaneous stimulation of cells with phorbol ester phorbol 12-myristate 13-acetate (PMA) and calcium ionophore. wt, wild type.
FIG. 4. Aggregation of sIg-CD18 fails to induce calcium mobilization in T cells lacking TCR cell surface expression.
The TCR-negative mutant Jurkat cell line J.RT3-T3.5 was infected with recombinant vaccinia virus expressing sIg chimeras and analyzed as described before. Cross-linking of the CD18 fusion protein did not result in a detectable rise of intracellular calcium concentration in this cell type. Conversely, calcium signaling mediated by sIg-TCR-ζ or sIg-CD28 appears to be at least partially independent of TCR surface expression.
The structure of the single-chain receptors used in this study is shown in Fig. 1a, and the principal design of these fusion proteins was described earlier (4,5,23). The cytoplasmic domains of CD18 (β2 chain), CD11a (αL chain), or the cytoplasmic domain of the related β1 integrin CD29 were genetically fused to the transmembrane domain of the CD7 antigen and extended extracellularly by the CH2 and CH3 domains of human immunoglobulin G1. Secondary reagents directed against human IgG Fc fragments are used to efficiently cluster the constant immunoglobulin domains. Furthermore, the endoplasmic reticulum import signal sequence of CD5 (35) was used to mediate transport of the tripartite fusion proteins to the cell surface of TAg-Jurkat cells (Fig. 1, b and c).
We first investigated whether the chimeras induced signaling events in leukemic TAg-Jurkat T cells (36). We chose intracellular calcium mobilization, a generally accepted and important parameter of receptor proximal signaling events in T cells. Moreover, LFA-1 has been implicated in this function (13,15). To this end, the fusion proteins were expressed in TAg-Jurkat cells by recombinant vaccinia viruses as described earlier (5). Intracellular calcium mobilization was monitored by flow cytometry using the fluorescent calcium chelator Fluo-3 (37). Fig. 2 shows that base-line calcium levels are similar for all chimeric constructs used. However, after clustering with antibody, specific induction of calcium mobilization was observed.
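A minimal sketch of how such a flow-cytometric calcium read-out can be reduced to a kinetic curve is given below; the event format and all names are assumptions, not the authors' acquisition software.

import numpy as np

def calcium_kinetics(times_s, fluo3_intensity, stim_time_s, bin_s=5.0):
    """Bin Fluo-3 events by acquisition time and return the median
    fluorescence per bin, normalized to the pre-stimulation baseline."""
    times_s = np.asarray(times_s, dtype=float)
    fluo3 = np.asarray(fluo3_intensity, dtype=float)

    edges = np.arange(times_s.min(), times_s.max() + bin_s, bin_s)
    centers, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (times_s >= lo) & (times_s < hi)
        if sel.any():
            centers.append((lo + hi) / 2)
            medians.append(np.median(fluo3[sel]))
    centers, medians = np.array(centers), np.array(medians)

    baseline = medians[centers < stim_time_s].mean()  # pre-stimulation level
    return centers, medians / baseline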
A strong and persistent Ca2+ mobilization was induced by a control fusion protein bearing the full-length cytoplasmic domain of the TCR-associated ζ chain (Fig. 2a), as described previously (4,31). Significantly, however, an increase in cytoplasmic calcium concentration was also detected when the sIg-CD18 chimera was clustered, but not when the control protein sIg bearing no intracellular domain or sIg-CD29 was employed. The CD18-dependent Ca2+ flux appeared different from the sIg-TCR-ζ-induced signal both in onset and amplitude. We conclude from these data that the clustered cytoplasmic domain of CD18 is sufficient to induce calcium signaling in TAg-Jurkat cells. It was subsequently analyzed whether the intracellular portion of the αL chain (CD11a) bore a similar capacity. Fig. 2b shows that this is not the case. Furthermore, co-clustering of CD18 and CD11a did not enhance the calcium signal induced by sIg-CD18 alone (not shown), which led us to conclude that the aggregation-dependent signaling elements of LFA-1 that lead to cytoplasmic calcium mobilization are located exclusively within the CD18 cytoplasmic domain.
We attempted a preliminary characterization of intracellular signaling events induced by sIg-CD18 clustering and of the routes of cytoplasmic calcium mobilization. To this end, the Src kinase inhibitor PP2 was employed in calcium mobilization assays. As shown in Fig. 3, a and b, the addition of 10 μM PP2 to the medium abrogated both the sIg-ζ- and sIg-CD18-mediated signals completely, suggesting that CD18-mediated calcium signaling depends on Src kinase activity.
Is the CD18-dependent calcium signal dependent on intracellular calcium stores? To answer this question, calcium was omitted from the medium, and the Ca2+-selective chelator EGTA was added before flux measurement. Fig. 3c shows that sIg-ζ induces a transient calcium flux in the absence of extracellular calcium ions, consistent with the known TCR-dependent calcium mobilization from intracellular stores. However, sIg-CD18 was almost completely incapable of inducing calcium transients in the presence of EGTA. This result hints at differential requirements for the two receptors at a point further downstream in the signaling cascade.
Reporter assays were then employed to study T cell signaling events that are located far downstream and which are known to be dependent on intracellular calcium mobilization. sIg-CD18 or a control construct was co-transfected with a luciferase reporter construct driven by the intact promoter/enhancer region of the human interleukin-2 promoter (23). Fig. 3d shows that cross-linking of sIg-CD18 resulted in a 5-fold, specific induction of the interleukin-2 promoter under these conditions. Moreover, co-transfection of a dominant-negative, kinase-deficient Lck construct, but not of intact Lck, abrogated CD18-dependent interleukin-2 promoter stimulation completely. These data are fully consistent with the loss of CD18-dependent calcium induction in the presence of the Src kinase inhibitor PP2 and confirm a dependence of CD18 signal transduction on an Src kinase, which is likely Lck (Fig. 3a).

FIG. 5. Reconstitution of sIg-CD18-mediated signal transduction in TCR-negative Jurkat cells by co-expression of a chimeric TCR-ζ chain. a, schematic diagram of the CD4-TCR-ζ fusion protein employed to reconstitute TCR-negative Jurkat cells. Jurkat J.RT3-T3.5 T cells were co-infected with recombinant vaccinia viruses expressing sIg fusion proteins in combination with either native, full-length CD4 (CD4-control) (b) or alternatively with the CD4-TCR-ζ chimera (d). Cell surface expression of chimeras was analyzed by flow cytometry using anti-human IgG or anti-CD4 antibodies. c, co-expression of the sIg chimeras with the CD4-control protein in TCR-negative Jurkat cells revealed similar results as shown in Fig. 3. e, calcium signaling after clustering of the sIg-CD18 fusion protein is rescued by co-expression of the CD4-TCR-ζ chimera. Both types of chimeric proteins could be stimulated independently, as demonstrated by the failure of the sIg-control fusion protein to induce intracellular calcium influx after anti-human IgG cross-linking. Additionally, sIg-CD28-triggered calcium signaling is boosted by the co-expression of CD4-TCR-ζ. f, tyrosine phosphorylation of cellular proteins is induced after aggregation of sIg-CD18 in Jurkat J.RT3-T3.5 T cells reconstituted with CD4-TCR-ζ (fourth lane) but not in cells expressing full-length CD4 (CD4-control, second lane). g, left panel, tyrosine phosphorylation of a 38-kDa cellular protein is induced by aggregation of sIg-CD18 or sIg-ζ but not by sIg-CD29 or the control chimera (sIg). The phosphoprotein likely corresponds to the adaptor protein LAT (right panel). α-P-Tyr, phosphotyrosine.
We were interested in determining whether the expression of the TCR was important for sIg-CD18-mediated signal transduction. For these analyses, Jurkat J.RT3-T3.5 cells that do not express a TCR on the cell surface were employed (38). Expression of all constructs was highly comparable in TCR+ TAg-Jurkat cells and in J.RT3-T3.5 cells (Fig. 5 and data not shown). Fig. 4 shows that Ca2+ mobilization was induced in J.RT3-T3.5 cells by the sIg-TCR-ζ chimera as predicted, since this construct is thought to function as a surrogate TCR. The sIg-CD18 fusion protein, however, did not induce cytoplasmic Ca2+ influx in the absence of the TCR. We were interested in analyzing whether this deficiency was because of a global inability of co-stimulatory molecules to function properly in the absence of the TCR or the signaling components it might assemble. Therefore, a different fusion protein was employed that bore the intact cytoplasmic domain of CD28, which has previously been implicated in Ca2+ signaling. The corresponding data are also shown in Fig. 4. It was observed that the CD28 fusion protein was functional both in TAg-Jurkat cells and in J.RT3-T3.5 cells (Fig. 4 and data not shown), although the amplitude of the Ca2+ flux in J.RT3-T3.5 cells was on average lower than in TAg-Jurkat cells (data not shown). These data indicate that the CD28 fusion protein was capable of delivering signals that were independent of the TCR to a significant extent, whereas the signal induced by the CD18 chimera strictly required TCR cell surface expression.
These findings prompted us to analyze which components of the TCR were needed for CD18-dependent functions. Therefore, we adapted our system to the simultaneous use of two fusion proteins that could be independently clustered on the surface of the same cell. Fig. 5a shows the design of the additional constructs. In this system, the cytoplasmic and transmembrane portions of TCR-ζ were fused to the extracellular domain of CD4, or alternatively, full-length CD4 was used as a control.
Experiments were performed to determine whether the sIg or CD4 derivatives could be co-expressed and independently manipulated on the surface of J.RT3-T3.5 cells. Fig. 5 shows that this is the case. Co-expression was monitored by flow cytometric analysis using anti-Ig antibodies and the anti-CD4 antibody MT151 (Fig. 5, b and d). Clustering of the CD4-TCR-ζ fusion protein resulted in Ca2+ mobilization as expected (not shown). Moreover, sIg-CD18-dependent signal transduction was not reconstituted by co-expression of CD4 (Fig. 5c), and aggregation of an sIg control protein did not result in Ca2+ mobilization even when CD4-TCR-ζ was present on the same cell surface (Fig. 5e). These data indicate that the antibodies employed targeted the surface chimeras in a highly specific fashion. Therefore, inadvertent antibody-mediated co-aggregation of these molecules could be excluded. It was consequently determined whether the expression of CD4-TCR-ζ was sufficient to rescue the sIg-CD18-mediated Ca2+ flux. Fig. 5e shows that this was indeed the case. We conclude that the TCR-associated ζ chain suffices to promote CD18-dependent Ca2+ signaling.
Tyrosine phosphorylation of cytoplasmic components is an important event in receptor-mediated T cell activation. Therefore, experiments were performed to determine whether sIg-CD18 was capable of inducing cytoplasmic tyrosine phosphorylation events in J.RT3-T3.5 cells. The results of this experiment are shown in Fig. 5f. The left lane shows that in the absence of a co-expressed CD4-TCR-ζ fusion protein, only the other TCR-ζ chimera (sIg-TCR-ζ), but not sIg-CD18, was capable of inducing tyrosine phosphorylation of a number of protein bands. This was different, however, when sIg-CD18 and CD4-TCR-ζ were co-expressed on the surface of J.RT3-T3.5 cells. After clustering of the sIg-CD18 construct, we observed a tyrosine phosphorylation pattern that was qualitatively similar to that induced by sIg-TCR-ζ. We conclude that signal transduction by the CD18 cytoplasmic domain progresses intracellularly through tyrosine phosphorylation events and that the TCR-ζ chain plays an important role in this process. To further corroborate this evidence, Fig. 5g shows protein tyrosine phosphorylation in TAg-Jurkat cells with or without specific surface-chimera aggregation. Both antibody-aggregated sIg-CD18 and sIg-ζ strongly induce phosphorylation of a 38-kDa band, which corresponds well to the T cell receptor-dependent phosphorylation target LAT, the predominant T cell activation-induced phosphoprotein of the respective molecular weight range. The 38-kDa phosphoprotein was not detectable in total lysates when sIg-CD29 or sIg alone were employed (Fig. 5g). All these observations are compatible with the notion that CD18 couples to a signaling apparatus that shares important components with the TCR-associated machinery, at least with respect to Ca2+ signaling.
In the following, we dissected the requirements of CD18 cytoplasmic domain elements for Ca2+ signal transduction. To this end, C-terminal deletion mutants were generated (Fig. 6, a and b). Fig. 6c shows that deletion of the C-terminal seven amino acids (sIg-CD18-762*) abrogated the signal completely. Further deletion (sIg-CD18-747*) had no effect, confirming that the C-terminal residues were required for the observed function. This C-terminal element bears an NPXF motif, and similar motifs have been implicated in receptor internalization pathways. Interestingly, the cytoplasmic domains of CD18 and CD29 (β1 integrin) display significant differences in this region (Fig. 8a). We, therefore, produced a series of point mutants to test whether these structural differences were responsible for the observed functional specificity. For this purpose, phenylalanine 766 of CD18 was mutated into either alanine or tyrosine by standard molecular biology techniques, and the resulting mutants (Fig. 7, a and b) were tested for their respective abilities to induce cytoplasmic Ca2+ mobilization after clustering. Fig. 7c shows that Ca2+ signaling was completely abrogated when the F766A mutant was employed. A strong reduction of the measurable signal was also observed for the F766Y mutant, leading to an unstable flux.
It was consequently analyzed whether the CD29 fusion protein could be induced to couple to the Ca2+ pathway by exchanging tyrosine 795 (of the corresponding β1 NPXY motif) for phenylalanine (Fig. 8, a and b). Indeed, it was observed that the sIg-CD29 chimeras became partially functional through this manipulation (Fig. 8c). On the other hand, replacement of the C-terminal eight residues of CD29 with those of CD18 (CD29-cyt/ex) resulted in an inactive construct (Fig. 8c). Taken together, these data indicate that phenylalanine 766 of the CD18 cytoplasmic domain is an important determinant of LFA-1-dependent Ca2+ signaling. However, our data also indicate that the C-terminal amino acid environments of CD18 and CD29 influence the capacity of the homologous Phe or Tyr residues, respectively, to actively engage with the downstream machinery.

FIG. 7. Analysis of intracellular calcium influx after cross-linking of CD18 point mutants within the C-terminal NPXF motif. a, sequence alignment of the CD18 point mutants. b, Jurkat T cells were transfected with recombinant vaccinia viruses expressing sIg fusion proteins of CD18 or CD18 point mutants and analyzed by flow cytometry. c, the CD18-F766A mutation completely abrogated the calcium-signaling capacity of the CD18 cytoplasmic tail, whereas cross-linking of the CD18-F766Y chimera resulted in a significant reduction of intracellular calcium influx as compared with CD18 wild-type signaling in Jurkat T cells.
We finally attempted to demonstrate that our findings bear significance for more complex T cell activation events. Cytotoxic T cell function was investigated because the requirement for LFA-1 has been very well documented for both cytotoxic T lymphocytes (CTL) and natural killer cells (39). Allo-recognition-dependent target cell killing by the HLA-A24-restricted cytotoxic T cell clone 234 was employed as an experimental system (40). It was first determined whether 234-mediated killing of BW-LCL, i.e. the target cells that express the correct haplotype HLA-A24 alloantigen, was LFA-1-dependent. To this end, anti-LFA-1 antibody MEM-95 was utilized to abrogate LFA-1 binding to ICAM-1 (41). As expected, Fig. 9b shows that 234-dependent killing of BW-LCL was strongly dependent on the LFA-1/ICAM-1 interaction, since MEM-95 specifically inhibited cell lysis.
sIg fusion proteins were then expressed in 234 cells by recombinant vaccinia viruses, and the infected cells were employed in killing assays (Fig. 9c). The rationale underlying this experiment was as follows. It has been well documented that integrin function may be inhibited by isolated overexpression of β-chain cytoplasmic domains or cytoplasmic domain fusion proteins similar to those employed in our study (42-44). This observation was interpreted as a dominant block that the β-cytoplasmic domains exert on the endogenous integrins by titrating important functional, cellular components of the membrane or the cytoplasm. Moreover, this approach has been developed into a functional complementation system in which overexpression of a cDNA library was utilized to overcome the dominant block exhibited by the β-chain construct, thus leading to the identification of novel components of the integrin "inside-out" signaling pathway (45). We reasoned that if the sIg-CD18 fusion protein acted in an inhibitory fashion on the cytotoxic potential of 234, this should suffice to document the importance of the cytoplasmic domain of CD18 for the allo-recognition-dependent activation of cytotoxic T cells. Fig. 9 shows that this was indeed the case. sIg-CD18, but not an sIg-control construct, significantly inhibited 234-mediated lysis of BW-LCL. Moreover, and importantly, this inhibition was released when the sIg-CD18-F766Y mutant was employed (Fig. 9c). This observation is consistent with the notion that residue Phe-766 of CD18 is involved in signaling events important for CTL activation. Furthermore, these findings are in full concordance with the Jurkat experiments on calcium signaling described above.

DISCUSSION

We describe here signal transduction events that are specifically initiated by the cytoplasmic domain of the β2 integrin CD18. Clustering of single-chain fusion proteins was employed to induce changes in intracellular calcium levels. By exploring this system, it was found that aggregation of the intact β2-cytoplasmic domain was sufficient for triggering a calcium signal in TAg-Jurkat cells. Significantly, neither the αL-cytoplasmic domain nor the cytoplasmic tail of the β1 integrin CD29 bore this capacity. These results suggest a previously unknown differential ability of specific integrin cytoplasmic domains to stimulate signal transduction events in T cells. The system was chosen because β2 integrins require intracellular activation to facilitate ligand binding. This adhesion-dependent signaling will normally be difficult to discern from the processes initiated by the activation stimulus. Our system circumvents such an activation requirement and, furthermore, operates independently of adhesion and spreading, which in turn might trigger complex signaling systems (e.g. through cytoskeletal reorganization). Although changes in cell shape and morphology might contribute to cellular activation in important ways, it is necessary to be able to discriminate among these parameters to determine the minimal requirements for signaling.
The functional elements of integrin cytoplasmic domains have been analyzed in great detail in terms of their relative contribution to cell adhesion and spreading, but their potential roles in signal transduction have not been explored. For the β2-cytoplasmic domain, these regions can be grouped into membrane-proximal and distal functional elements. The binding sites for actin-cytoskeletal linker proteins, such as α-actinin or filamin (46,47), are located in the N-terminal portion of the β2 tail, in addition to the response element for the regulatory protein cytohesin-1, as recently described (41). More distal elements include the 758-60 TTT motif, which was shown to be required for constitutive adhesion of LFA-1 to ICAM-1 in COS cells (48) and for spreading of β2/β3 chimeras in an ectopic expression system (49). Our results show that the deletion of the C-terminal seven amino acids abrogates the β2-dependent calcium signal. Therefore, none of the above-mentioned elements appears to be sufficient for calcium mobilization because they are all present in the non-functional construct CD18-762*. However, the C-terminal one of two so-called NPXF motifs is deleted in this mutant. Moreover, mutation of the NPXF phenylalanine into either a non-conserved alanine or a conserved tyrosine residue abrogated or strongly impeded the abilities of the resulting chimeras to induce a calcium flux. These data strongly suggest that the calcium signaling capacity of the β2-cytoplasmic domain is largely dependent on the C-terminal NPXF motif. Interestingly, the β1-cytoplasmic domain of integrin CD29 bears an NPXY motif at the homologous position. This finding prompted us to analyze whether the reversal of a single amino acid in β1 (Y759F) would rescue the calcium signaling capacity of the β1 fusion protein. Fig. 7c shows that this is partially the case. We conclude from these data that phenylalanine 766 of the distal NPXF motif is a specific determinant of β2-dependent intracellular calcium mobilization. However, the functionality of this motif appears to be somewhat dependent on the neighboring amino acids. This might be because of specific conformations that the different β tails assume; in fact, the contribution of integrin cytoplasmic tail conformation to function has recently been suggested (50). In an earlier study, F766 was shown to be an important determinant of LFA-1-dependent, constitutive adhesion of COS7 cells to ICAM-1. However, mutation of this residue into alanine had a strong inhibitory effect on adhesion, whereas exchange of phenylalanine into tyrosine yielded no phenotypic changes (48). The involvement of this region in signal transduction apparently is a different one. Firstly, the F766Y mutant bears little capacity to flux calcium and, secondly, cytotoxic T cell function was strongly inhibited by the sIg-CD18 fusion protein but not by the F766Y mutant, suggesting that this mutant could not exert a dominant-negative block on the activation of specific cytolysis. Taken together, our data suggest that the distal NPXF motif has a different function in T cell signal transduction, as compared with COS cell adhesion to ICAM-1 mediated by ectopic LFA-1 expression. It cannot fully be ruled out, however, that some of the observed differences may be because of the cell types employed. The relative contribution of the C-terminal NPXF motif to signal transduction or T cell adhesion, respectively, would thus have to be analyzed in the future.
NPX(F/Y) motifs have also been implicated in receptor internalization. Specifically, the cytoplasmic tail requirements for endocytosis of LFA-1 have recently been analyzed. Based on this study, the determinant for the internalization of the β2 integrin lies further N-terminal in the β2-cytoplasmic domain and, thus, does not overlap with the C-terminal NPXF motif (51).
Recent evidence suggests that platelet function in vivo is dependent on β3-integrin signaling through NPXY and NXXY motifs. Interestingly, in the case of the αIIbβ3 receptor this loss of function correlates with abrogation of receptor tyrosine phosphorylation (52). Thus, in different contexts both NPXF and NPXY motifs may contribute to specific signaling events.
We observed that the ability of the CD18 fusion protein to promote calcium mobilization is dependent on the expression of either an intact TCR or the ζ chain fusion protein. These results suggest that LFA-1 acts on elements that are utilized or organized by TCR-ζ, or that it acts through the ζ chain itself. It may be possible that co-clustering of the receptors is mediated through links between their cytoplasmic portions. However, initial experiments (not shown) on TCR or ζ chain co-aggregation after clustering of the integrin chimera do not support this idea. Intriguingly, one study has shown that the ζ-associated tyrosine kinase ZAP-70 functions in an LFA-1 to LFA-1 adhesion regulation pathway important for cell invasiveness, but a direct link between the molecules has not been established (53). In light of our observations, it is possible that the T cell receptor complex and LFA-1 coordinately relay information important for both cell activation and migratory functions. Several groups have recently shown that integrin-mediated matrix adhesion and growth factor receptors coordinately regulate cell proliferation (54-56). The underlying mechanisms are poorly understood, but it was suggested that extracellular matrix enhances PDGF-dependent responses by increasing the association of SHP-2 with the platelet-derived growth factor receptor (57). In light of our results, one is tempted to hypothesize that β2 integrin signaling may also provide a means of modulating ITAM (immunoreceptor tyrosine-based activation motif)-dependent immunoreceptor function. It should further be noted that signaling of other receptors in T cells (CD2, CD4) had been shown to display similar requirements for the presence of TCR functional elements (58,59).
It is currently not known at which level the NPXF motif of the β2-cytoplasmic domain and the TCR-associated ζ chain functionally interact. Recently, a transcription factor termed JAB1 was found to interact with the β2-cytoplasmic domain; furthermore, this protein was translocated to the nucleus upon clustering of LFA-1 (60). It is presently not known, however, whether this interaction is important for T cell activation and, if so, whether it affects calcium signaling. Moreover, the precise binding site for JAB1 within the β2-cytoplasmic domain has not yet been determined.
Substantial evidence supports the notion that Src family kinases are important downstream effectors of integrin signaling (61,62). Furthermore, the Lck kinase plays a critical role in T cells with respect to ITAM phosphorylation, and its subsequent downstream interaction with ZAP-70 is critical for phospholipase Cγ activation and calcium signaling (63). However, convincing molecular links between Src kinases and integrin cytoplasmic domains have not yet been determined. Our results support the contention that a functional interaction occurs between the C-terminal NPXF motif of the β2 tail and Src kinases in T cells. Moreover, there is evidence for a functional interaction of LFA-1 with the cytotoxic T cell surface receptor DNAM-1 (64). DNAM-1 has been shown to be phosphorylated on tyrosine residues after aggregation of LFA-1, and this process appears to involve the Fyn kinase. Interestingly, we found that the sIg-CD18 chimera did not signal in the absence of extracellular calcium (Fig. 3c), nor did it induce phospholipase Cγ phosphorylation. On the other hand, induction of a 38-kDa phosphotyrosine protein was observed, consistent with the activation of LAT, a major phosphorylation target of ZAP-70 in T cells (65). We were unfortunately not capable of proving this point directly, since immunoprecipitation of LAT from TAg-Jurkat cells failed under the conditions used. These findings point to distinct similarities but also to differences between β2 integrin- and T cell receptor-induced signaling pathways. Significantly, Takata et al. (66) describe differential signaling routes to calcium mobilization in DT40 B cells; these authors concluded that the Src kinase Lyn might regulate Ca2+ mobilization through a process independent of inositol 1,4,5-trisphosphate generation. It is possible that CD18 couples to a similar pathway in T cells. Our data provide a specific and testable hypothesis on the role of β2 integrin-mediated signaling events in T cell activation. This may now be verified in more complex systems, which allow the analysis of heterodimeric molecules. In the course of this study we attempted to reconstitute wild-type and mutant β2 chain expression in β2-negative allo-specific leukocyte adhesion deficiency T cell clones (not shown). However, this model system did not in principle allow visualization of calcium signaling induced by the allotype and was therefore not useful for our purposes. Therefore, reconstitution of wild-type and mutant β2 chains in β2 knock-out animals and subsequent analysis of their T cell function in vivo appears to be a plausible direction for future work.
Taken together, our findings suggest that β2 integrins specifically contribute to Ca2+ signaling in T cells through an NPXF motif of the β2 cytoplasmic domain. These data further support and extend an emerging general theme of signal integration in cell growth regulation operating through functional interactions of integrins with growth factor receptors.
MERRF Mutation A8344G in a Four-Generation Family without Central Nervous System Involvement: Clinical and Molecular Characterization
A 53-year-old man approached our Neuromuscular Unit following an incidental finding of hyperCKemia. Similar to his mother, who had died at the age of 77 years, he was diabetic and had a few lipomas. The patient's two sisters, aged 60 and 50 years, did not have any neurological symptoms. The proband's skeletal muscle biopsy showed several COX-negative fibers, many of which were "ragged red". Genetic analysis revealed the presence of the A8344G mtDNA mutation, which is most commonly associated with a maternally inherited multisystem mitochondrial disorder known as MERRF (myoclonus epilepsy with ragged-red fibers). The two sisters also carry the mutation. Family members on the maternal side were reported healthy. Although atypical phenotypes have been reported in association with the A8344G mutation, central nervous system (CNS) manifestations other than myoclonic epilepsy are always reported in the family tree. In our four-generation family, manifestations, where present, are late-onset and do not affect the CNS. This could be explained by the fact that the mutational load remains low and therefore prevents tissues/organs from reaching the pathologic threshold. The fact that this occurs throughout generations and that the CNS, which has the highest energetic demand, is clinically spared, suggests that regulatory genes and/or pathways affect mitochondrial segregation and replication, and protect organs from progressive dysfunction.
Introduction
The A-to-G transition at nucleotide 8344 (m.8344A > G) of mtDNA is the prevalent mutation found in a multisystem disorder known by the acronym MERRF (myoclonus epilepsy with ragged-red fibers). It is characterized by myoclonus, generalized epilepsy, ataxia, weakness, and dementia, as well as signs of multisystem involvement [1][2][3]. The histopathological study of skeletal muscle tissue typically shows ragged-red fibers (RRFs) with the modified Gomori trichrome (MGT) stain and hyperactive fibers with the succinate dehydrogenase (SDH) stain. The histochemical reaction for cytochrome c oxidase (COX) shows lack of activity in RRFs and in some non-RRFs [4][5][6]. Occasionally, RRFs may not be observed [7]. The presence of lipomas has often been reported in patients affected with MERRF and/or in their maternally related family members [8][9][10].
The pathologic mutation affects all tissues; however, since mtDNA mutations are heteroplasmic, mutated mtDNA is usually variably distributed among the tissues of the same individual. The m.8344A > G variant is usually heteroplasmic in tissues collected from classical MERRF patients, and the level of mutational load required to display a biochemical phenotype (biochemical threshold) is in the range between 60 and 90%, suggesting a moderately detrimental behavior for this nucleotide change. Both heteroplasmy and the selective tissue vulnerability to impaired oxidative metabolism (skeletal/cardiac muscle and brain have a higher energetic demand) are important factors in determining the clinical expression of mtDNA mutations. These aspects, along with the different regional levels of mutant DNA, the compensatory increase in global mtDNA content, and the presence of nuclear modifiers, hamper the establishment of a clear correlation between genotype and clinical phenotype [21,22].
Although there may be high clinical variability among members of the same family, central nervous system (CNS) manifestations have always been reported throughout family generations. In this study, we describe a patient carrying the m.8344A > G mutation in mitochondrial DNA, presenting with late-onset myopathy, multiple lipomas, and diabetes.
Neither the proband nor the other affected family members show signs of CNS involvement.
Case Report
A 53-year-old man approached our Neuromuscular Unit due to an incidental finding of hyperCKemia (between 300 and 400 U/L, n.v. < 180 U/L). He was diabetic and had a few lipomas, but was otherwise asymptomatic. Almost 2 years later, he developed fatigue, which progressively worsened. CK levels had moderately increased to 970 U/L. The neurological examination was normal and his past medical history was unremarkable. The patient's mother, who had died at 77 years of age, was diabetic, cardiopathic, and displayed multiple lipomas. The proband's sisters, aged 60 and 50 years, presented normal serum CK levels. The elder sister has mild bilateral eyelid ptosis and had suffered from thyroiditis, while the younger sister had undergone surgery for colon cancer at the age of 48 years, with a negative follow-up.
The first sister has three children, two males and one female, aged 33, 23, and 32 years, respectively. The 33-year-old son has two children, one male and one female. The 32-year-old daughter has a son. The second sister has a 13-year-old daughter and a son aged 11 years. They are all reported asymptomatic ( Figure 1A). Descendants of two maternal aunts are reported healthy and did not undergo any genetic evaluation.
After an EMG examination, which showed a myopathic pattern in proximal four-limb muscles, the proband underwent a left biceps skeletal muscle biopsy. Plain brain CT scan (Figure 1D) and EEG were both normal.
Moreover, we evaluated both sisters whose neurological examination was normal except for the mild eyelid ptosis in the elder one. Blood and urinary samples were taken from both subjects for DNA extraction.
Materials and Methods
After the patient had signed a written informed consent on 28 February 2005, a skeletal muscle specimen (biopsy code number: 96974) from his left biceps brachii muscle was obtained by an open biopsy, according to a protocol approved by the Institutional Review Board of the "IRCCS Ca' Granda Foundation Ospedale Maggiore Policlinico, Italy".
Immunohistochemistry for sarcolemmal proteins (dystrophin, alpha-, and gammasarcoglycan) and for an evaluation of possible inflammatory signs were performed [24].
Furthermore, after obtaining written consent, genomic DNA was extracted from peripheral blood, urine, and muscle from both the proband (muscle, blood, urine) and his sisters (blood, urine). The extracted mtDNA was PCR-amplified using the MitoSEQ Resequencing System (Applied Biosystems, Foster City, CA, USA) and sequenced on an ABI PRISM 3100 Genetic Analyzer (Applied Biosystems).
Mutational loads in the patient's tissues were assessed by PCR-RFLP performed using a modified primer that creates a BglI restriction site in mutant molecules. Aliquots of PCR products were digested and electrophoresed on a 4% agarose gel. The proportion of mutant mtDNA was evaluated by densitometry using the NIH ImageJ 2 software.
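To illustrate the arithmetic behind this quantification, a minimal sketch follows (an illustration added here, not the authors' code; the function name and intensity values are hypothetical). In this assay the mutant molecules carry the engineered BglI site and are therefore cut, while wild-type molecules remain uncut:

# Hypothetical illustration: mutant fraction from PCR-RFLP densitometry.
# Mutant molecules are digested by BglI; wild-type remains uncut.

def heteroplasmy(uncut_intensity, cut_intensities):
    """Return the mutant fraction (0-1) from band intensities."""
    mutant = sum(cut_intensities)
    total = mutant + uncut_intensity
    if total == 0:
        raise ValueError("no signal in any band")
    return mutant / total

# Made-up intensities: one uncut (wild-type) band, two digestion products.
print(f"{heteroplasmy(740.0, [130.0, 120.0]) * 100:.1f}% mutant mtDNA")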
Results
Sections stained with MGT showed quite a few ragged-red fibers. Histochemical reactions for mitochondrial enzymatic activity showed several COX-negative fibers, many of which were intensely stained with SDH (RRFs) ( Figure 1C). Glycogen content was normal and no lipid storage was present.
Enzymatic activities for PFK, PYGM, and MAD were normal. PAS staining showed a slightly increased subsarcolemmal signal in a few fibers. Immunohistochemistry for sarcolemmal proteins was normal (data not shown).
Genetic analysis of the entire mtDNA sequence in the proband's muscle revealed the presence of the A8344G mtDNA mutation. The same mutation was detected in the two sisters.
The degree of heteroplasmy of this mutation was analyzed in the proband's skeletal muscle and in blood leukocyte and urinary sediment samples from both the proband and his sisters. Densitometric analysis revealed that the A8344G mutation accounted for 25.2% of the total mtDNA in the patient's muscle, 60% in blood, and 61% in urine (Figure 1B). The younger sister presented a mutational load of 35% in blood and 18% in urine, while the elder sister had 12.1% mtDNA mutation in blood, but no mutation was detected in urine. A detailed quantification of the mutational load in the family members is presented in Figure 1B.
Our patient presented RRFs and lipomas, the latter having also been diagnosed in his mother. These findings, along with other features indicating multisystem involvement in both the patient and his family (diabetes, cardiopathy, endocrine dysfunction), as well as the lack of lipid storage at muscle biopsy, prompted a diagnosis of atypical MERRF syndrome.
Our patient showed an atypical clinical presentation, with isolated hyperCKemia at onset followed by the development of myopathy in his fifties. The associated presence of maternally inherited diabetes and lipomas suggested a diagnosis of a mitochondrial disorder, which was confirmed by both morphological and biomolecular findings.
Lipomas have often been reported in patients bearing the A8344G mutation in association with MERRF syndrome or other central nervous system involvement [8][9][10][31,32]. Indeed, the presence of maternally inherited lipomas associated with the involvement of other organs/systems is an almost unequivocal indication of the presence of mutations in tRNA-Lys. It is not known how impaired mitochondrial function due to mutations in tRNA-Lys causes this effect on adipose tissue; however, there is evidence that mitochondrial function is important for the normal development of adipose tissue in humans [33].
The A8344G mutation has been considered a relatively "benign" mutation, since a high degree of mutational load is required to produce clinical manifestations. In skeletal muscle tissue, the threshold level beyond which the pathological phenotype becomes evident is estimated to be higher than 60% mutational load [34]. Indeed, it has been suggested that approximately 15% of residual wildtype mtDNA is sufficient to restore translation and COX activity to near-normal levels, thus "rescuing" the clinical phenotype. Interestingly, our patient had a low mutational load in muscle (25%), and this can explain the late onset of symptoms.
Furthermore, the A8344G mutation is usually present in high proportion in DNA from urine and blood, the mutational load being ordinarily higher in urine than in blood [35]. In accordance with the data reported in the literature, our patient has a mutational load of 61% in urine and 60% in blood; however, both his sisters have a higher mutant load in blood than in urine. We were unable to establish the mutational load in the lipomas since the patient refused to undergo lipoma biopsy.
In our family, we observed a positive correlation between the severity of clinical signs and the instrumental evidence. Indeed, only the proband, who, unlike his sisters, has increased serum CK levels, is symptomatic. We could not make any correlations in terms of skeletal muscle mutational load, since the two sisters did not undergo skeletal muscle biopsy [31][32][33].
Muscle biopsy showed typical histopathological features of MERRF [5,6,36], in particular the presence of RRFs at MGT, confirmed by increased SDH activity, the absence of COX activity in most RRFs, and a number of COX-negative/deficient non-RRFs.
The absence of central nervous system involvement is a peculiar feature in our family since no other MERRF families without any central nervous system involvement have been reported to date. A possible explanation is that the mutational load remains low, especially in CNS, and prevents tissues/organs from reaching the pathologic threshold. The fact that this occurs throughout generations and that the tissue with the highest energetic demand is clinically spared, suggests that regulatory genes and/or pathways affect mitochondrial segregation and replication, and protect organs from progressive dysfunction. As suggested by Letrit et al., a lack of correlation between the degree of mtDNA heteroplasmy and clinical symptoms related to a particular organ can indicate the presence of tissue-specific nuclear factors that modify the phenotypic expression of the A8344G mutation, or, perhaps rather than a specific nuclear factor, there are merely tissue differences in the requirements for the particular subunit of the respiratory chain involved [21].
Conclusions
In conclusion, our report highlights the broad clinical spectrum of MERRF syndrome, which can also present as a pure myopathy. Given the large number of atypical cases, we would like to emphasize that it is easy to underestimate progressive and even potentially disabling diseases.
The combination of a high index of clinical suspicion with histological, molecular genetic, and biochemical investigations remains essential for the diagnosis of MERRF.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Ethics Committee of the IRCCS Ca' Granda Foundation, Ospedale Maggiore Policlinico, Milano, Italy.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data for this article are not publicly available to ensure patient anonymity. Requests to access the data should be directed to the corresponding author.
Counterexamples in Cake-Cutting
This article contains counterexamples to theorems and claims in Brams, Jones and Klamler's article "Better Ways to Cut a Cake" in the December 2006 Notices of the American Mathematical Society.
Moving-Knife Procedure
. "A knife is slowly moved at constant speed parallel to itself over the top of the cake. At each instant the knife is poised so that it could cut a unique slice of the cake. At time goes by the potential slice increases monotonely from nothing until it becomes the entire cake. The first person to indicate satisfaction with the slice then determined by the position of the knife receives that slice and is eliminated from further distribution of the cake. (If two or more participants simultaneously indicate satisfaction with the slice, it is given to any of them.) The process is repeated with the other n-1 participants and with what remains of the cake." The only implicit assumptions here are that the values are nonnegative, they are additive, and, in the direction the knife is moving, they are continuous. The knife need not even be perfectly straight and need not be moved perfectly parallel to itself, and the cake need not be simply connected (as traditional angel-food cakes, with a hole through the center, are not), nor even connected (the cake could have been dropped on the table and landed in pieces).
In the first section of [BJK], the authors state (p 1314) that in problems of fair division of a divisible good, "the well-known 2-person, 1-cut cake-cutting procedure 'I cut, you choose'" is Pareto-optimal, that is, "There is no other allocation that is better for one person and at least as good for the other." Cut-and-choose is not even Pareto optimal among 1-cut procedures, a weaker form of Pareto optimality, as the following example shows.
Counterexample 1. The "cake" is the unit square, and player 1 values only the top half of the cake and player 2 only the bottom half (and on those portions, the values are uniformly distributed). If player 1 is the cutter, and cuts vertically, his uniquely optimal cut-and-choose solution is to bisect the cake exactly, in which case each player receives a portion he values exactly ½. Or if player 1 cuts horizontally, his uniquely optimal risk-averse cut-and-choose point is the line y = ¾, in which case he receives a portion he values at ½ the cake, and player 2 chooses the bottom portion and receives a portion he values at 100% of the cake. But an allocation of the top half of the cake to player 1, and the bottom half to player 2, is at least as good for player 2 in both cases, and is strictly better for player 1, so cut-and-choose is not Pareto optimal in either direction.
In subsequent sections of [BJK], the authors assume that the cake is the unit interval and the value measures are absolutely continuous [BJK p 1315]. But if the cake is indeed the unit interval (which is not the case in classical fair division settings such as or [DS], and certainly not the case for the Talmudic scholars who discussed cut-and-choose), the statement [BJK, p 1315] that "We assume that only parallel, vertical cuts, perpendicular to the horizontal x-axis, are made" does not make sense. Even under the hypotheses that the cake is the unit interval (or, equivalently, the unit square with only vertical cuts allowed) and the values are absolutely continuous, the statement in [BJK, footnote 3 p 1318] that "an envy-free allocation that uses n - 1 parallel cuts is always efficient [i.e., Pareto optimal]", and the corresponding Proposition 7.1 of [BT, p 150], are not true, as the next example shows.
Counterexample 2. The cake is the unit interval; player 1 values it uniformly, and player 2 values only the left- and right-most quarters of the interval, and values them equally and uniformly. (In other words, the probability density function (pdf) representing player 1's value is a.s. constant 1 on [0,1], and that of player 2 is a.s. constant 2 on [0, ¼] and on [3/4, 1], and zero otherwise.) If player 1 is the cutter, his unique cut point is at x = ½, and each player will receive a portion he values at exactly ½. The allocation of the interval [0, ¼] to player 2 and the rest to player 1, however, gives player 1 a portion he values ¾, and player 2 a portion he values ½ again, so cut-and-choose (which is an envy-free allocation for 2 players) is not Pareto optimal.
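The claimed values in Counterexample 2 are easy to check numerically; the following sketch (an illustration added here, not part of the original note) integrates each player's pdf over the two allocations:

# Illustrative check of Counterexample 2 (not from the original article).
# Player 1's pdf is 1 on [0,1]; player 2's pdf is 2 on [0,1/4] and [3/4,1].

def value(pdf, a, b, n=100000):
    """Midpoint-rule integral of pdf over [a, b]."""
    h = (b - a) / n
    return sum(pdf(a + (i + 0.5) * h) for i in range(n)) * h

p1 = lambda x: 1.0
p2 = lambda x: 2.0 if (x <= 0.25 or x >= 0.75) else 0.0

# Cut-and-choose with the cut at x = 1/2: each player gets exactly 1/2.
print(value(p1, 0.0, 0.5), value(p2, 0.5, 1.0))    # ~0.5, ~0.5
# Alternative allocation: [0, 1/4] to player 2, the rest to player 1.
print(value(p1, 0.25, 1.0), value(p2, 0.0, 0.25))  # ~0.75, ~0.5 (Pareto-better)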
The two new fair cake-cutting procedures described in [BJK], Surplus Procedure and Equitability Procedure, are not well defined. If a player's value measure does not have a unique median (which absolute continuity does not imply; see Counterexample 2 above), then Surplus Procedure is not well defined. Part (2) of the definition of Equitability Procedure [BJK, p 1318] assumes the existence of "cutpoints that equalize the common value that all players receive for each of the n! possible assignments of pieces to the players from left to right." As the next example shows, such cutpoints may not exist, so Equitability Procedure, too, is not well defined.
Counterexample 3. The cake is the unit interval. Player 1 values the cake uniformly, player 2's value is uniform on (0,1/3) (i.e. his pdf is a.s. constant 3 on (0,1/3) and zero elsewhere), and player 3's value is uniform on (2/3,1). Then for the ordering 1-3-2 (from left to right), there do not exist two cutpoints that equalize the values. If the second cutpoint is in (0, 2/3] then player 3 receives 0 but player 1 receives a positive amount. If the second (and hence both cutpoints) are at 0, players 1 and 3 receive 0, and player 2 receives 1. If the second cutpoint is in (2/3, 1), then player 3 receives a positive amount, but player 2 receives 0. If the second cutpoint is at 1, and the first cutpoint is 0, then players 1 and 2 get 0, but player 3 gets 1. If the second cutpoint is at 1 and the first is in (0,1), then player 2 gets 0 but player 1 gets a positive amount. Finally, if both the first and the second cutpoints are at 1, then player 1 gets 1, and both players 2 and 3 get zero.
Even if one imposes extra conditions that guarantee that the system of equations has a solution, step (2) of Equitability Procedure in [BJK] requires the referee to solve n! (possibly highly nonlinear) systems of n-1 integral equations in n-1 unknowns. But finding an exact solution, even of one equation in one unknown, is not always possible in closed form. For example, if player 1's value is uniform on (0,1) and player 2's value is the standard normal distribution (conditioned to have values in (0,1)), there is no known closed form for the solution of the corresponding integral equation that determines the cutpoint. And without an exact solution, fairness (and equitability and Pareto optimality, etc.) may be lost.
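The cutpoint in that example can, of course, be approximated numerically; the following sketch (an illustration added here, not part of the original note) finds, by bisection, the c at which player 1's value of [0, c] equals player 2's value of [c, 1]:

# Illustrative numeric solution of the two-player cutpoint equation:
# player 1 uniform on (0,1), player 2 standard normal conditioned to (0,1).

from statistics import NormalDist

phi = NormalDist().cdf
F2 = lambda c: (phi(c) - phi(0.0)) / (phi(1.0) - phi(0.0))  # player 2's cdf

# Solve c = 1 - F2(c); the left side minus the right side is increasing in c.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if mid - (1.0 - F2(mid)) < 0:
        lo = mid
    else:
        hi = mid
print(f"equalizing cutpoint ~ {lo:.6f}")  # numeric only; no closed form known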
In [BJK, p 1316] the authors define a cake-cutting procedure to be strategy-vulnerable if by misrepresenting his value function sent to the referee, a player may "assuredly do better, whatever the value function of the other player", and otherwise the procedure is said to be strategy-proof. The article [BJK] contains exactly three new theorems, namely: Theorem 1. Surplus Procedure is strategy-proof, whereas any procedure that makes e the cut-point is strategy-vulnerable.
Theorem 2. Equitability Procedure is strategy-proof.
Theorem 3. If a player is truthful under Equitability Procedure, it will receive at least 1/n of the cake regardless of whether or not the other players are truthful; otherwise it may not.
The following trivial Theorem A shows that the second part of Theorem 1 is false (using Equitability Procedure, which is fair), and that both the first part of Theorem 1 and Theorem 2 are trivial, since both Surplus Procedure and Equitability Procedure are fair.
An allocation procedure is fair if each player can guarantee receiving a portion he values at least 1/n. (An extreme example of an unfair procedure is one that always gives everything to player 1. A procedure which gives the entire cake to each player with probability 1/n does give each an expected value of 1/n, but does not guarantee any player a portion worth 1/n, so it, too, is not fair in this sense.) It is easy to see that cut-and-choose (for 2 players) and moving-knife are fair procedures.
Theorem A. Every fair procedure is strategy-proof.
Proof. Fix an arbitrary cake-cutting procedure with two players, suppose both players have identical values v, and that both also misrepresent their values identically as a different measure u. Then the procedure allocates a subset S of the cake to one player and allocates its complement ~S to the other. Then at least one player receives a portion he values no more than ½, so that person has not done "assuredly better" than the fair share of ½. Q.E.D.
Even if only one of the players is allowed to misrepresent his strategy, and in case of ties, the pieces are randomly assigned [BJK, (3) p 1315], then every fair procedure is strategy-proof: if player 1 misrepresents his value as u, and u happens to be the true (and declared) value for each of the other players, player 1 will with positive probability receive a portion he values at most 1/n, so again he does not do "assuredly better" than fair.
The argument for Theorem 3 is fallacious. The fifth sentence in the proof says that "By moving all players' marks rightward … one can give each player an equal amount greater than 1/n", and the following example shows this is not correct.
Counterexample 4. There are three players, the cake is the unit interval [0,1], and all players value it uniformly (i.e., their pdf's are a.s. constant 1 on [0,1]). Then the unique moving-knife marks are at x = 1/3 and x = 2/3, and moving the cuts to the right of those marks will allocate one of the players less than 1/3. The rest of the argument in [BJK] for Theorem 3 is also incomplete, since the first part only proves a claim about the moving-knife procedure, whereas the desired conclusion concerns Equitability Procedure.
On [BJK, p 1318], the authors claim that their new Equitability Procedure is Pareto optimal (efficient), and on [BJK, p 1320], that their Surplus Procedure is Pareto optimal. Both those claims are false, even when all the value measures are strictly positive everywhere. The underlying reason is that both EP and SP allocate contiguous portions to each player, and as noted in [BT,p 149], "satisfying contiguity may be inconsistent with satisfying efficiency". This is illustrated in the next two examples, which show that EP and SP, respectively, are not in general Pareto optimal.
Counterexample 6. The cake is the unit interval [0,1], and there are two players A and B. A's value function is 1.6 on (0, 1/4) and on (1/2, 3/4) and is .4 elsewhere; and B's is 1.6 on (1/4, 1/2) and (3/4, 1) and .4 elsewhere. Then SP cuts the cake at 1/2, and each player receives a portion worth exactly .5. But allocating (0, 1/4) and (1/2, 3/4) to A, and the rest to B, gives each player a portion he values .8, which is strictly better for each player than the SP allocation, so SP is not Pareto optimal.
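Again the arithmetic is easy to verify numerically; a short sketch (an illustration added here, not part of the original note):

# Illustrative check of Counterexample 6 (not from the original article).

A = lambda x: 1.6 if (x < 0.25 or 0.5 < x < 0.75) else 0.4
B = lambda x: 1.6 if (0.25 < x < 0.5 or x > 0.75) else 0.4

def value(pdf, intervals, n=100000):
    """Sum of midpoint-rule integrals of pdf over a list of intervals."""
    total = 0.0
    for a, b in intervals:
        h = (b - a) / n
        total += sum(pdf(a + (i + 0.5) * h) for i in range(n)) * h
    return total

# SP cut at 1/2: each player values his piece at exactly .5.
print(value(A, [(0.0, 0.5)]), value(B, [(0.5, 1.0)]))  # ~0.5, ~0.5
# Non-contiguous allocation: each player values his portion at .8.
print(value(A, [(0.0, 0.25), (0.5, 0.75)]),
      value(B, [(0.25, 0.5), (0.75, 1.0)]))            # ~0.8, ~0.8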
Other arguments in [BJK] are also erroneous or confused. On [BJK, p 1315] the authors "postulate that the players have continuous value functions…and their measures are finitely additive". Does "finitely additive" mean that the Radon-Nikodym theorem may not hold? The authors then state "we assume that the measures of the players are absolutely continuous, so no portion of cake is of positive measure for one player and zero measure for another player". Absolute continuity, of course, even implies countable additivity, but the statement that absolute continuity implies that a set of positive measure for one player is also positive for all other players is false, as is easily seen in Counterexample 2 above.
As mentioned above, some of the ideas in the [BJK] article may perhaps be salvaged by making two simple but fundamental modifications: changing the hypotheses to require that all value measures (probability density functions on the unit interval) are almost surely strictly positive, and changing the basic definition of "strategy-proof" from a strong Pareto-optimality condition to a weaker one.
The difference between strong and weak properties is crucial in many fields (strong/weak topologies, convergence, etc.), and it is in fair division as well, as can be seen here, where basic theorems are true using one definition and false using the other. Perhaps replacing the requirement that a strategy of Player A be "assuredly better against all strategies of B" by the weaker requirement that it only be "at least as good against every strategy of B, and strictly better against at least one strategy of B" may lead to interesting, correct, and nontrivial analogs of Theorems 1 and 2, but even that is not clear. Are the players allowed to bluff only with absolutely continuous measures?
Requiring that the absolutely continuous value measures are also mutually absolutely continuous, or mutually absolutely continuous with respect to Lebesgue measure, may seem like an innocent nonnegativity-versus-positivity technicality, but in fair division such differences often imply important philosophical changes in the problem. For example, suppose that the cake is an inhomogeneous mixture of various ingredients including chocolate. Then absolute continuity simply means that any piece of zero volume is worth zero to every player, a reasonable hypothesis, but not one that is standard in classical or modern fair division theory (e.g., [B,DS,EHK,Hil12,J,K,R,RW,Ste14,Str]). But further requiring the values to be mutually absolutely continuous means that if one player likes chocolate, all the other players must like chocolate as well. And requiring that the measures are absolutely continuous with respect to Lebesgue measure (i.e., the corresponding pdf's are almost surely strictly positive) means that every player must like chocolate, period (and must like every other part of the cake). It is no great surprise that making such unnatural assumptions may lead to "strong" results.
Perhaps some of the logical errors in [BJK] illuminated above can be corrected by adding additional assumptions, but if the goal is to find "better ways to cut a cake", then imposing extra conditions (such as absolute continuity and mutual absolute continuity and strict positivity of the measures and connectivity and 1-dimensionality of the cake and use of an outside referee), conditions not required by classical procedures like moving-knife or cut-and-choose, can hardly be called an improvement.
The [BJK] article does perhaps contain several new ideas that may serve as a challenge and opportunity for students and mathematicians to develop and make rigorous. As can be seen in the moving-knife procedure above, it is often possible to express practical, yet clean, clear, and beautiful logical conclusions without using highly technical language.
Logic Synthesis for Emerging Technologies
— Emerging technologies are being considered to replace the conventional CMOS-based design, which seems to be arriving at its end of life due to the limits of MOS transistor shrinking. However, since those novel devices are not necessarily switch-based ones, the traditional AND/OR logic synthesis process in the digital integrated circuit design flow tends to become inefficient, whereas the threshold logic paradigm seems to be more appropriate for them. In this context, different methods for threshold logic synthesis, suitable for emerging technologies, are reviewed in this paper. Majority logic based design is also discussed herein, since it represents a subset of the threshold logic domain, and many new technologies have presented the 3-input majority Boolean function as the most basic logic gate. Experimental data, presented in previous works, are used to illustrate and compare the performance of the related state-of-the-art logic synthesis methods.
I. INTRODUCTION
Continuous scaling of MOSFETs has been the principal strategy for improving integrated circuit performance. However, as MOS transistor dimensions shrink, manufacturing imperfections and quantum effects become more critical and threaten to stop the CMOS scaling [1]. As a result, much effort has been done in order to develop new devices that may allow further progress in computation capability. Among these emerging technologies are carbon nanotubes and related graphene structures [2], single electron transistors (SET) [3], nanowire transistors [4], quantum-dot cellular automata (QCA) [5], nanomagnetic logic (NML) [6], resonant tunneling diodes (RTD) [7], spintronic-based devices [8], memristors or memristive devices [9], multiple independent-gate field effect transistors (MIGFETs) [10], and many other possibilities.
For these emerging technologies to be successful, they have to present characteristics that represent advancement and improvement over traditional MOS transistors [11]. These can be related to higher frequency operation, lower power consumption, smaller area, and so on. Additionally, it is welcome that the knowledge used in designing and fabricating CMOS circuits can be somehow adapted to the new technology to make the conversion process easier. In this sense, the logic synthesis process plays a crucial role in digital circuit design, taking into account that many of those new technologies do not provide switch-based devices, rendering the conventional switch-based AND/OR synthesis inefficient or even incompatible. Particular design synthesis efforts have been presented for specific technologies, as observed for MIGFETs [12], spin-diode [13] and memristive IMPLY [14] logic. On the other hand, many of these emerging devices have been demonstrated to be more appropriate for the threshold logic design paradigm, like RTD, SET, QCA, NML and memristors. As a consequence, algorithms addressing threshold logic synthesis have been presented in the literature [15,16,17,18,19,20,21]. Moreover, some of the emerging technologies are better suited for the majority (MAJ) logic paradigm, which can be considered a subset of the threshold logic domain [11] [22]. Therefore, new EDA algorithms have been proposed in order to improve digital circuit design by exploiting MAJ gates [22,23,24,25,26,27]. Such a change from the AND/OR logic paradigm to the threshold logic one, in particular majority-based logic, impacts the circuit synthesis process significantly.
This paper presents an extended survey of threshold logic and MAJ-based logic synthesis, pointing out the main contributions of each work and the challenges for future improvement. Some experimental results already published in other papers are shown and discussed herein in order to provide the reader with a better comprehension of the performance obtained in related work. In the organization of this paper, the more general threshold logic synthesis is reviewed first, followed by a discussion of the more specific MAJ-based synthesis.
II. THRESHOLD LOGIC DESIGN
This section presents the fundamentals of the threshold logic field for a better understanding of this survey.
A. Terms and Definitions
A threshold logic function (TLF) is a Boolean function satisfying the following condition [28]:

    f(x_1, ..., x_n) = 1 if and only if ∑_{i=1}^{n} w_i x_i ≥ T,    (1)

where x_i represents each Boolean input value {0, 1}, w_i is the weight of each input, and T is the function threshold value. Therefore, each input has a specific weight and the function has a threshold value. If the sum of the active input weights (i.e., of the inputs equal to 1) is greater than or equal to the threshold value, then the function evaluates to 1. Otherwise, the function evaluates to 0.
A TLF is completely represented by a compact vector [w_1, w_2, ..., w_n; T], where w_1, w_2, ..., w_n are the input weights and T is the function threshold value. A TLF is also called a 'linearly separable' function.
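As a concrete illustration of Equation (1) and of the compact vector notation, a minimal sketch follows (added here for illustration; it is not part of any surveyed tool):

# Minimal sketch: evaluating a TLF given by its compact vector [w1,...,wn; T].

def tlf_eval(weights, threshold, inputs):
    """Equation (1): output 1 iff the weighted sum of the inputs reaches T."""
    assert len(weights) == len(inputs)
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# AND2 is the TLF [1, 1; 2]; OR2 is the TLF [1, 1; 1].
print(tlf_eval([1, 1], 2, [1, 1]))  # 1
print(tlf_eval([1, 1], 2, [1, 0]))  # 0
print(tlf_eval([1, 1], 1, [1, 0]))  # 1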
Although some complex functions are TLFs, there exist some simple functions that are not. For instance, the function h = x_1 x_2 + x_3 x_4 cannot be represented in terms of input weights and a threshold value. The threshold logic identification process verifies whether a given Boolean function is a TLF (or not) and computes the input weights and the corresponding threshold value. Many TLF identification algorithms are based on integer linear programming (ILP) [15,16,17,18,29]. In [30], on the other hand, a complete system of inequalities is built using a strategy similar to ILP inequality generation algorithms. However, unlike ILP-based approaches, the inequality system is not solved. Instead, the algorithm speeds up the process by selecting some of the inequalities as constraints on the associated variables and computing the variable (input) weights in a bottom-up strategy. After this assignment, the consistency of the entire system is verified in order to check whether the weights have been correctly computed. Such a strategy has resulted in fast runtime and good quality of results (QoR). In a more recent work, in [31], the authors propose a new necessary condition and the corresponding speedup strategies for the threshold function identification problem, claiming to obtain TLFs not found by the method presented in [30].
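To make the separation-constraint formulation behind such identification methods concrete, here is a small feasibility check (a sketch added for illustration; it is not any of the cited algorithms). It decides thresholdness by linear programming over the 2^n constraints, using scipy; since the constraint system is scale-invariant, a margin of 1 can replace the strict inequality on the off-set:

# Sketch: is a Boolean function (given as a truth-table oracle) a TLF?
# Feasibility of real weights w and threshold T over all 2^n input vectors.

from itertools import product
from scipy.optimize import linprog

def identify_tlf(f, n):
    """Return (weights, T) if f is a TLF on n inputs, else None."""
    A_ub, b_ub = [], []
    for x in product([0, 1], repeat=n):
        if f(x):   # on-set:  sum(w_i x_i) >= T  ->  -sum(w_i x_i) + T <= 0
            A_ub.append([-float(xi) for xi in x] + [1.0]); b_ub.append(0.0)
        else:      # off-set: sum(w_i x_i) <= T - 1
            A_ub.append([float(xi) for xi in x] + [-1.0]); b_ub.append(-1.0)
    res = linprog(c=[0.0] * (n + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1))
    return (res.x[:n], res.x[n]) if res.status == 0 else None

print(identify_tlf(lambda x: sum(x) >= 2, 3) is not None)         # True (MAJ-3)
print(identify_tlf(lambda x: x[0] and x[1] or x[2] and x[3], 4))  # None: h is not a TLF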
In the context of technology mapping, more specifically of structural cuts, a threshold cut is defined as a cut whose corresponding Boolean function is a TLF.
A threshold logic gate (TLG), in turn, is a single primitive or a non-decomposable circuit that physically embodies the behavior expressed in Equation (1). TLGs can represent the implementation of complex functions; for instance, the TLG defined by [4, 3, 3, 1, 1; 7] realizes a complex five-input function in a single gate. It is worth noticing that using larger threshold functions has the potential benefit of reducing the total number of gates needed to implement digital circuits. Several topologies of TLGs have been proposed for CMOS and emerging nanotechnologies. A survey with more than 50 TLG circuitries is presented by Beiu et al. in [32]. Moreover, TLG designs based on memristor [33], spintronic device [34] and RTD [15] technologies have also been proposed, as illustrated in Fig. 1.
A threshold logic network (TLN), on the other hand, is a netlist of TLFs and interconnections. A TLN can be implemented using TLGs, since each TLF can be directly implemented through a single TLG. The total area of a TLN corresponds to the sum of the TLG areas. Notice that the TLG area depends directly on the technology adopted to build such a gate. However, since no threshold-oriented technology currently available is mature enough to fabricate reliable TLGs on a large scale, synthesis tools usually consider that each threshold gate has the same area. Therefore, the total circuit area is commonly estimated through the overall number of TLGs instantiated in the mapped circuit.
Although it is hard to define a general TLG area estimation, some estimations are more suitable to particular technologies. For instance, when designing a TLG using RTDs, each input weight and the function threshold value determine the diode physical area. Therefore, the gate area is directly related to the sum of the weights and the threshold value, as follows [15]:

    A_TLG = A_u · (∑_{i=1}^{k} w_i + T),    (2)

where k is the number of TLG inputs (fanin), w_i is the weight of input i, A_u is the unit area of an RTD with w = 1, and T is the threshold of the gate.
In other technologies, such as memristors [9][33] and spintronic-based devices [34] [35], the input weight is set by applying a voltage over the device for a certain time, and thus it does not impact the device's physical dimensions. In these cases, each input is associated with a single device (of the same area) and, as a consequence, the most appropriate gate area estimation metric is the number of gate inputs.
B. Threshold Logic Synthesis
The goal of threshold logic synthesis is to generate a TLN where each TLF can be directly implemented through a single TLG. The design flow adopted by many works starts by performing a LUT-based technology mapping [15,16,17,18,36]. This first mapping task results in a netlist of Boolean functions with restricted fanin. Afterwards, these methods identify which Boolean functions are TLFs and then include them in the final solution. For each non-TLF, they generate a sub-network. The main differences among these approaches are: (i) the procedure for threshold logic identification, and (ii) the generation of the sub-TLN from a non-threshold function.
Zhang et al., in [15], and Subirats et al., in [16], use ILP to perform the TLF identification. For non-TLFs, Zhang's method decomposes the function into AND/OR sub-functions, which are always TLFs, and heuristically selects nodes to combine, checking whether the resulting function is a TLF. Subirats' approach, on the other hand, is based on the function truth table description. It recursively selects a variable and performs Shannon decomposition until TLF sub-functions are found. Subirats' method improves Zhang's results in terms of the number of gates and circuit logic depth. However, Subirats' approach produces only two-level TLNs without fanin restrictions, being more suitable for neural network design.
In [17], Gowda et al. propose a heuristic approach to identify TLFs. They adopt both binary decision diagrams (BDD) and a factorized tree structure (called max literal factor tree, MLFT) in order to generate a TLN. The method recursively breaks the initial expression tree into sub-expressions, identifying sub-trees that represent TLFs and assigning input weights. The method proposed by Palaniswamy et al., in [18], improves Gowda's approach [17]. It looks for circuit outputs that can be implemented as a single TLF. Both Gowda's and Palaniswamy's methods suffer from execution time as the main bottleneck, and the solutions depend strongly on the initial structure (BDD or MLFT), in particular on the ordering of tree nodes.
In [36], a method is proposed based on a TLG association process through the principle called functional composition, which is based on dynamic programming [37]. The algorithm associates simpler sub-solutions with known design costs, e.g. the number of gates, in order to produce the final solution with minimum cost. In order to identify whether the Boolean function created from such an association is a TLF, the method adopts the heuristic proposed in [38]. This approach presents improved results in terms of TLG count when compared to previous approaches. However, the design optimization with respect to circuit logic depth is not as significant. Moreover, the execution time is also a limitation, and the approach does not scale for TLFs with more than six variables (inputs).
The threshold logic synthesis methods mentioned above generate a TLF network from a general Boolean function netlist. Another set of approaches focuses on optimization starting from a TLN. Methods for TLG-based circuit rewiring are presented by Kuo et al., in [39], and by Lin et al., in [20]. Kuo's approach focuses only on circuit restructuring to satisfy a new fanin constraint and does not take into account the area and logic depth minimization issue. Lin's approach, on the other hand, represents a heuristic for rewiring the circuit by minimizing the summation of input weights and threshold value. In [21], Chen et al. propose an analytical approach based on collapsing two threshold gates in order to minimize the total number of TLGs. Annampedu's method, in [40], receives as input a given (already identified) single-output threshold function. Therefore, this method is not able to treat a general logic circuit with multiple outputs corresponding to Boolean functions not necessarily identified as threshold ones. The main goal of Annampedu's method is to restrict the fanin of a given threshold function implementation. Furthermore, Kulkarni et al., in [41], propose TLG-based approaches to reduce circuit area and power consumption without loss of performance. However, the technology mapping is performed by a commercial tool using a conventional (i.e., non-threshold-based) standard cell library. This method replaces some standard flip-flops by threshold logic sequential cells [41]. As a consequence, a hybrid netlist comprising both TLGs and conventional logic gates is provided.
Notice that all of the mentioned threshold logic synthesis approaches focus basically on synthesizing single-output non-TLFs. The first step of previous methods relies on a complete synthesis process which disregards the threshold logic domain. Therefore, they do not explore the entire circuit, where they could find, for instance, TLFs spanning different functions in the netlist. Such a bottleneck has been overcome in [42] [19], where Neutzling et al. propose a logic synthesis flow that identifies TLFs before the circuit covering task. It is based on a three-stage procedure, as depicted in Fig. 2(b): (i) a complete cut enumeration, storing the Boolean functions of the cuts in the design; (ii) the identification of TLFs related to this set of computed cuts; and (iii) the technology mapping considering the thresholdness of the pre-computed functions. By doing so, this approach is able to discard non-TLF cuts and provides the corresponding threshold network from the first covering action. Such a strategy allows the exploitation of multi-objective technology mapping algorithms.
Finally, in [43], the authors propose an optimization method for TLNs based on observability-don't-care-based node merging. To reduce the gate count in a TLN, it iteratively merges two gates that are functionally equivalent or whose differences are never observed at the primary outputs. Furthermore, it is able to identify redundant wires and replace wires in order to remove more gates. Basically, the proposed method is primarily adapted from an ATPG-based node-merging approach which works for conventional Boolean logic networks. To extend the approach to TLNs, a method for computing mandatory assignments of a stuck-at fault test on a threshold gate and another one for conducting logic implication in a TLN have been developed. Additionally, to achieve better optimization quality, the proposed method has been integrated with other optimization methods. The same authors, in [44], have recently improved this work by proposing don't-care-based node minimization.
C. Comparing Threshold Logic Synthesis Approaches
Some previous threshold logic synthesis approaches have been compared through three experiments presented in [19]. The first one compares the number of TLGs and the circuit logic depth to the results obtained from both strategies presented by Chen et al., in [21], and from a commercial tool. In the second one, Neutzling's approach is compared to Gowda's method, in [17], and Palaniswamy's method, in [18], in terms of the number of TLGs. This has been done because Chen's work does not compare itself to those approaches. In the third experiment, the circuit area results obtained by Neutzling's method are compared to the ones presented in [15] and in [20] in terms of the sum of input weights and threshold value.
Since Chen's method already provides an improvement of 28% in TLG count and 14% in logic depth when compared to Zhang's approach [15], it has been considered in the comparison to Neutzling's approach, in [19]. The comparison was carried out taking into account the IWLS 2005 benchmark suite [45]. Table I shows the obtained results in terms of TLG count and circuit logic depth. When limiting the TLGs to six inputs, Neutzling's method reduced the TLG count in 94% of the circuits, with reductions of up to 39%, being 20% on average. The logic depth has been reduced in all applied benchmark circuits, with reductions of up to 64%, being 53% on average. The runtime is less than one second per circuit synthesis.
In order to exploit the gate-level scalability, circuits have been synthesized taking into account TLGs with up to 15 inputs. In this case, the TLG count has been reduced in all circuits, with reductions of up to 47%, being 25% on average. The reduction in terms of logic depth has been up to 67%, being 57% on average. In the same experiment, circuits have also been synthesized with a commercial tool. To do that, the tool has been provided with a cell library composed of all NPN threshold functions with up to six variables. Notice that, although the commercial tool improves Chen's results in terms of TLG count, Neutzling's approach has been able to improve these results even more: on average, the commercial tool improves 7% whereas Neutzling's approach improves 15%. Moreover, TLG count and circuit logic depth are simultaneously reduced by the proposed flow, whereas the commercial tool increases Chen's results by around 38% in terms of logic depth.
In [18], the authors present two different improvements, named BDD decomposition method (BDM) and ZDD decomposition method (ZDM), to the max literal factor tree (MLFT) method proposed by Gowda et al., in [17]. The results shown in Fig. 3 present the TLG count obtained by these methods and by Neutzling's one. The ISCAS'85 set of benchmarks has been applied for this evaluation. When compared to the MLFT approach, the BDM and ZDM methods provide an average TLG count reduction of 12% and 17%, respectively. The average reduction obtained by Neutzling's method is about 65% and 48% when compared to MLFT and ZDM, respectively.
Finally, in [19], Neutzling's approach has been compared to the work presented by Lin et al., in [20], in terms of the summation of input weights and threshold value. Lin's method starts from a TLG netlist (i.e., a given TLN) generated by Zhang's method, in [15], and performs a rewiring procedure, optimizing the TLG area cost function. Table ?? shows the results from this experiment. In [20], Lin's method improves Zhang's results for all benchmarks, obtaining a reduction of 4% on average. Neutzling's approach does not depend on a preliminary threshold synthesis and optimizes the cost function by performing a threshold logic technology mapping directly over the original circuit description. Therefore, Zhang's results have been improved for all benchmarks, reducing the circuit area by up to 46%, being 31% on average. Neutzling's threshold logic synthesis flow explores the LUT-based technology mapping strategy, which allows for near-optimum circuit logic depth covering [46]. From such a covering, three different mapping goals can be targeted in circuit area optimization. The first one chooses a cut that decreases the area even when increasing the logic depth (area oriented). The second one never replaces a cut if the logic depth is increased (delay oriented). Finally, an intermediate strategy chooses a cut that decreases the area if the increase in logic depth is less than a given predefined percentage (relaxed delay). In the experiments, the delay has been allowed to increase by up to 30%. A comprehensive set of experimental results is presented in [19], addressing the proposed mapper to the aforementioned goals and targeting different threshold-based circuit area estimations.
Notice that two three-valued parameters have been varied in this experiment: the mapping goal, which can be area oriented, delay oriented or relaxed; and the area estimation, which can be the number of TLGs, the summation of input weights and threshold values, or the overall gate fanin. This yields nine different solutions for each synthesized circuit.
In [19], the results obtained by Neutzling's approach are presented in terms of circuit area and logic depth taking into account different benchmark suites, such as the EPFL more-than-million (MTM), arithmetic and random-control [47], ISCAS'85 [45] and the opencore circuits [48]. The circuit-level scalability has been verified by synthesizing benchmarks comprising more than 20 million AIG nodes.

Table I: Comparison of TLG count between Chen's and Neutzling's approaches, and with a commercial tool, using Chen's results as reference [19].
III. MAJORITY-BASED LOGIC DESIGN
Some emerging nanotechnologies are well suited for digital integrated circuit design based on majority logic [11] [22].
The most prominent case is the quantum-dot cellular automata (QCA) technology. The basic gate in QCA technology is the MAJ-3 one, as illustrated in Fig. 4(a). Nevertheless, majority gates with fanin larger than three have also been proposed [49,50]. In QCA circuitry, larger majority gates can be obtained by using a staircase pattern, as depicted in Fig. 4(b). The maximum size of QCA-based majority gates using such a structure is still unclear. Spin-based devices can also potentially implement high-fanin majority gates [51] [8]. However, the area overhead when high-fanin majority gates are considered must be carefully evaluated.
Moreover, it is known that the class of functions that can be implemented by a majority gate with unbounded fanin is equivalent to the class of threshold logic functions [28]. This equivalence allows extending the threshold logic synthesis flow to perform majority logic synthesis while taking into account the impact of fanin on the gate area.
A. Terms and Definitions
An n-input majority function (MAJ-n), where n is odd, can be seen as a special case of a TLF where each input weight is equal to 1 and the threshold value T is given by [28]:

    T = (n + 1) / 2.    (3)

For instance, the MAJ-3 and MAJ-5 functions are equivalent to the TLFs [1, 1, 1; 2] and [1, 1, 1, 1, 1; 3], respectively.
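As a small illustration of Equation (3) (a sketch added here, not part of any surveyed tool), a MAJ-n gate is just the TLF with unit weights and T = (n + 1)/2:

# Sketch: MAJ-n as the TLF [1, ..., 1; (n+1)/2], n odd.

def maj(inputs):
    """n-input majority: 1 iff at least (n+1)/2 of the inputs are 1."""
    n = len(inputs)
    assert n % 2 == 1, "majority is defined here for odd fanin"
    return 1 if sum(inputs) >= (n + 1) // 2 else 0

print(maj([1, 1, 0]))        # MAJ-3 -> 1
print(maj([1, 0, 0, 1, 0]))  # MAJ-5 -> 0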
B. Majority Logic Synthesis
Logic synthesis methods that focus on majority logic aim to obtain an optimized network of majority gates. The most adopted design flow strategy can be summarized into two steps [22,23,24,25,26]: (i) to perform an FPGA-based technology mapping over the look-up table (LUT) structure, and (ii) to decompose each LUT into a network of majority gates. Notice that during the second step a single LUT may require several gates to be implemented. Therefore, minimizing the number of LUTs does not necessarily lead to the minimal number of majority gates. Alternatively, in [52] and in [53], the authors propose the majority-inverter graph (MIG) structure, which is based on MAJ-3 logic, as an efficient way to improve the digital circuit design. However, it is not clear how majority gates with more than three inputs can be effectively exploited in such a MIG-based synthesis.
On the other hand, since there are many efforts to build majority gates with more than three inputs [49,50,51,8], the development of logic synthesis methods for MAJ-n logic becomes necessary [26]. In [19], an effective technology mapping for MAJ-n logic is proposed. In such an approach, after an FPGA-based technology mapping, each LUT is converted to a majority gate such that no further decomposition is required.
C. Comparison of MAJ Synthesis Approaches
A set of experiments presented in [27] compares the method proposed in that reference to the approaches proposed by Wang et al., in [25], by Amarù et al., in [52], and by Soeken et al., in [53], for MAJ-3 synthesis.
Moreover, the use of MAJ-5, not addressed by previous works, has also been explored. The MAJ-3 and MAJ-5 gates are assumed to have the same area. The main idea of this analysis is simply to illustrate possible gains obtained through majority gates with more inputs. However, such improvements are only possible when the gate area does not increase too rapidly with the number of inputs.
In Table IV, columns 2 and 3 present, respectively, the MAJ-3 gate count and the circuit logic depth obtained by Wang's method [25]. Columns 4 and 5 show the results of Neutzling's method [27] when restricting the synthesis to MAJ-3 logic. The improvements with respect to the number of gates and logic depth have been up to 36% (being 16% on average) and up to 32% (being 13% on average), respectively. In turn, columns 6 and 7 present the results when also taking MAJ-5 logic into account in the synthesis. In this case, Neutzling's MAJ synthesis approach has reduced the gate count by up to 55% (being 46% on average) and the logic depth by up to 53% (being 26% on average).
The work presented by Amarù et al., in [52], introduces the MIG data structure. Although such a structure does not originally target majority-based integrated circuits, it can be directly translated into a MAJ-3 logic network. Table V shows the comparison between both approaches, restricted to MAJ-3 and also considering MAJ-5. When only MAJ-3 logic is addressed, Neutzling's MAJ method yields a lower gate count at the cost of increased logic depth. On the other hand, when considering MAJ-5, the average number of gates has been improved by more than 40% while also improving the average circuit logic depth by around 17%.
Finally, the work presented by Soeken et al. in [53] proposes algorithms for exact synthesis of Boolean logic networks using satisfiability modulo theories (SMT) solvers over MIGs. Table VI shows an average reduction both in the number of gates (14%) and in logic depth (35%) when using MAJ gates with up to 3 inputs. When MAJ-5 gates are also allowed, the average improvements are 39% in terms of the total number of gates and 56% in terms of logic depth.
Another set of experiments presented in [27] compares the Neutzling method to the work described in [26]. Since the previous work can only handle MAJ-3 and MAJ-5 gates, the same restriction has been set for the Neutzling MAJ synthesis approach. In this experiment, both MAJ-3 and MAJ-5 gates are assumed to have the same physical area; thus, the number of gates becomes the metric for area. Notice that there are cases where such an assumption is valid, for instance in the USE methodology in [54]. It is also important to notice that the synthesis process described in [26] does not take gate area into account; therefore, the Neutzling method should be more effective when different area values are considered for different gates. Table VII summarizes the results. Overall, there is an average reduction of 10% in the number of gates and 14% in the number of levels. Experiments also demonstrate that the obtained reductions can be even higher when majority gates with more than 5 inputs are also allowed in the mapping. The use of majority gates with large fanin has been evaluated in [27]. In this case, it is important to observe that the gate area might increase with the number of inputs; therefore, reducing the number of gates does not necessarily reduce the final circuit area.
Since such an analysis does not target a specific technology, it is important to take into account different relationships between the majority gate area and the number of inputs, since this relationship can vary from one technology to another. Therefore, the gate area has been estimated according to the fanin n and a parameter α that establishes such an area relationship, as follows:

A(n) = (n/3)^α    (4)

Through this equation, when α = 0, the same circuit area is assumed for all MAJ-n gates, regardless of the number of inputs. On the other hand, when α = 1, the gate area becomes directly related to the gate fanin. Notice that the area of the MAJ-3 gate has been adopted as the reference, being equal to 1.
Notice that Equation (4) is intended to be used as a first order estimation for the gate area. This equation can be particularly useful when searching for new designs of MAJ-n gates. In fact, more accurate estimations can be used when available. For instance, two possible area estimations for a MAJ-n gate in QCA technology could be the number of cells in the gate as well as the area of the minimum enclosing rectangle.
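As a quick illustration of this area model (using the A(n) = (n/3)^α form reconstructed above, which reproduces the α values quoted later in the text), a small Python sketch:

def maj_gate_area(n, alpha):
    # First-order area estimate for a MAJ-n gate, normalized to MAJ-3 = 1.
    return (n / 3) ** alpha

assert maj_gate_area(9, 0.0) == 1.0              # alpha = 0: all gates cost the same
assert abs(maj_gate_area(9, 1.0) - 3.0) < 1e-12  # alpha = 1: area tracks fanin
# A(5) = 2 (twice the MAJ-3 area) corresponds to alpha = ln 2 / ln(5/3) = 1.36.
print(round(maj_gate_area(5, 1.36), 2))          # about 2.0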
The same benchmark circuits shown in Table IV have been adopted in these experiments. In Fig. 5, each curve represents the total number of MAJ-3, MAJ-5, MAJ-7 and MAJ-9 instantiated into the mapped circuits when varying α from 0 to 1.4. For α greater than 1.4, the results are kept unchanged.
In terms of logic behavior, the most compact design of MAJ-5 comprises four MAJ-3 gates [55]. Therefore, the area of MAJ-5 could be as much as four times larger than the corresponding area of MAJ-3, leading to a maximum value of α = 2.7. However, it has been observed that no MAJ-5 has been instantiated when α ≥ 1.4. The main reason is the fact that most MAJ-5 instances have been used to implement the OA21 function f = x0·(x1 + x2) and the AO21 function f = x0 + (x1·x2), both of which can be built using only two MAJ-3 gates. As a result, the MAJ-5 area should be at most twice the MAJ-3 area to become useful, leading to a maximum value of α = 1.36. Notice that, if the MAJ-n area is estimated by the number of QCA cells, then the MAJ-5 shown in Fig. 1(b) has exactly twice the MAJ-3 area, being in the boundary condition.
Some usual functions in digital circuit design, such as f1 = x3·x0·(x1 + x2) and f2 = x3 + x0 + (x1·x2), can be built using a MAJ-9 gate but not a MAJ-7. This explains why MAJ-9 is used more often than MAJ-7 for α ≤ 0.8. On the other hand, for α > 0.8, using MAJ-3 and MAJ-7 to implement f1 and f2 leads to a smaller area than considering only MAJ-9.
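One way to see why a single MAJ-9 suffices for f1 and f2 is input replication with constant padding; the particular assignments below are a reconstruction of mine (the referenced works may use different ones) and are verified exhaustively:

from itertools import product

def maj(*inputs):
    return int(sum(inputs) >= (len(inputs) + 1) // 2)

for x0, x1, x2, x3 in product((0, 1), repeat=4):
    f1 = x3 & x0 & (x1 | x2)
    f2 = x3 | x0 | (x1 & x2)
    # f1: duplicate x3 and x0, pad with constant 0s to raise the effective threshold.
    assert f1 == maj(x3, x3, x0, x0, x1, x2, 0, 0, 0)
    # f2: the same duplication, padded with constant 1s to lower the effective threshold.
    assert f2 == maj(x3, x3, x0, x0, x1, x2, 1, 1, 1)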
Finally, synthesis considering majority gates with unbounded fanin and α equal to 0 has also been carried out. In this evaluation, majority gates with up to 123 inputs appeared in the final circuit. This indicates that majority gates with large fanin can become useful if the gate size is kept close to the MAJ-3 gate area.
IV. CONCLUSIONS
This paper presented an extended survey of logic synthesis considering threshold logic and majority-based logic suitable for emerging nanotechnologies. It is clear that there is room for future development in order to address the design particularities of emerging technologies. This can be done by adapting existing traditional AND/OR synthesis (switch-based design) or by creating novel design environments and tools in the threshold logic domain.
Spontaneous Slowing and Regressing of Tumor Growth in Childhood/Adolescent Papillary Thyroid Carcinomas Suggested by the Postoperative Thyroglobulin-Doubling Time
Background Children and adolescents with papillary thyroid carcinomas (PTCs) have generally excellent prognoses despite their frequent extended disease. The tumor growth of young patients' PTCs might show spontaneous slowing postoperatively. We compared young PTC patients' postoperative thyroglobulin-doubling time (Tg-DT) with their preoperative hypothetical tumor volume-doubling time (hTV-DT). Methods Fourteen PTC patients aged ≤18 years who underwent total thyroidectomy at Kuma Hospital in 1998–2016 had biochemically persistent disease postoperatively. We calculated their Tg-DTs and estimated their preoperative TV-DTs with the tumor size and the patient's age at surgery, presuming that a single cancer cell was present at the patient's birth. Results Twelve patients had positive Tg-DTs ranging from 2.0 to 147 years, and the remaining two had negative Tg-DTs, indicating slow growth or even regression. The hTV-DTs were 0.3–0.6 years (median 0.5 years), which were significantly shorter than the Tg-DTs (p < 0.001), indicating much faster growth preoperatively. The analyses of the nine patients without radioactive iodine administration (RAI) gave similar results (p < 0.01). Conclusions Irrespective of RAI, the patients' postoperative Tg-DTs were significantly longer than their preoperative hTV-DTs and were negative values in two patients, indicating that the growth of these young patients' PTCs had spontaneously slowed or even regressed postoperatively.
Introduction
Children and adolescents with papillary thyroid carcinoma (PTC) have generally excellent prognoses despite their often extended disease status [1][2][3]. Even when they have distant metastasis, young PTC patients survive for a long time. The excellent prognoses of these patients may be due to their high sensitivity to radioactive iodine (RAI) treatment. In Japan, RAI treatment for childhood and adolescent PTCs has been performed only for patients with distant metastasis, and thyroid ablation with RAI is rarely performed for patients in this age range.
The recurrence of PTC in regional lymph nodes is rather frequent. These recurrences are usually treated surgically without additional RAI treatment. However, even in patients without RAI treatment, the prognoses are good. Papac reported that there was a possibility of spontaneous regression in some tumors such as kidney cancer, malignant melanoma, lymphoma, and leukemia [4]. It is also well known that there is a tendency for spontaneous regression in some pediatric neuroblastomas [5][6][7].
Collins et al. studied the changes in the tumor sizes of pulmonary metastases over time, and in 1956 they proposed the concept that human tumors grow exponentially [8]. A tumor's growth rate is best expressed as the tumor volume-doubling time (TV-DT). Miyauchi et al. found that the changes in serum calcitonin levels in patients with medullary thyroid carcinoma who had persistent hypercalcitoninemia postoperatively were exponential, which is consistent with Collins' concept, and they reported that the calcitonin-doubling time was a strong prognostic factor [9]. Other research groups confirmed the exponential changes in serum calcitonin and carcinoembryonic antigen (CEA) levels and the prognostic values of the calcitonin-doubling time and the CEA-doubling time [10].
Miyauchi et al. also demonstrated that the serum thyroglobulin (Tg) values measured at a thyrotropin-suppressed condition in PTC patients after total thyroidectomy also changed exponentially over time, and they reported that the Tg-doubling time (Tg-DT) was a strong prognostic factor [11]. Sabra et al. reported that the Tg-DT correlated with the TV-DT in patients with pulmonary metastases of PTC [12]. Tuttle et al. described the tumor volume kinetics of papillary thyroid cancers based on the concept of exponential tumor growth [13]. The TV-DTs of other lesions such as breast cancers, hepatocellular cancers, and prostatic cancers have also been reported [14][15][16].
We hypothesized that the postoperative tumor growth of PTCs in young patients might slow down spontaneously. We can calculate Tg-DTs, which can be expected to indicate the postoperative tumor growth rate. However, the preoperative growth rates of PTCs in young patients are not known. We estimated the preoperative TV-DT by using the tumor size and the patient's age at surgery, presuming that a single cancer cell 10 μm in diameter was present at the patient's birth and that the tumor grew at a constant rate. We call this value the "hypothetical tumor volume-doubling time (hTV-DT)." The actual origin of the cancer would be later than the patient's birth; therefore, the growth before the patient's surgery would have been more rapid than this value implies.
To test our hypothesis, we compared the postoperative Tg-DT with the preoperative hTV-DT in young PTC patients with biochemically persistent disease. We calculated the postoperative Tg-DTs in these 14 patients as described [11]. Serum Tg measurements were performed as routine follow-up tests. We excluded Tg data within 1 month postoperatively and 1 year after RAI administration. The median number of Tg measurements was 6.5.
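The calculation in [11] rests on the exponential-change observation above: a log-linear fit of serial Tg values against time yields the doubling time as ln 2 divided by the slope. A minimal sketch (the variable names here are illustrative, not from [11]):

import numpy as np

def doubling_time(times_years, tg_values):
    # Doubling time (years) from serial Tg values via a log-linear fit;
    # a negative result means the marker is decreasing over time.
    slope, _ = np.polyfit(np.asarray(times_years, float),
                          np.log(np.asarray(tg_values, float)), 1)
    return np.log(2) / slope

# Tg doubling roughly every 2 years gives a Tg-DT close to 2.0:
print(round(doubling_time([0, 1, 2, 3], [1.0, 1.41, 2.0, 2.83]), 1))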
Materials and Methods
We calculated the preoperative hTV-DT by using the patient's age at surgery, Y (years), the maximum diameter of the tumor, and the diameter of a single cancer cell taken as 10 μm, that is, 0.01 mm. In general, when a tumor of diameter d1 with tumor volume TV1 grows to a tumor of diameter d2 with tumor volume TV2 over a time period T, the TV-DT can be calculated as follows: the tumor volume is TV = (4/3) × π × (d/2)³, where d is the diameter of the tumor, and the TV-DT is given as (log 2 × T)/log(TV2/TV1) [17]. For the calculation of the hTV-DT values in the present study, T = Y (years), d1 = 0.01 mm, and d2 is the maximum diameter of the tumor at surgery. (Table footnote: values are median (range) and numbers of cases; Tg: thyroglobulin; Tg-DT: thyroglobulin-doubling time; hTV-DT: hypothetical tumor volume-doubling time. Note that 1/Tg-DT was significantly smaller than 1/hTV-DT, p < 0.001.)
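A direct transcription of this calculation (diameters in mm, time in years; the function names are illustrative):

import math

def tv_dt(d1_mm, d2_mm, period_years):
    # Tumor volume-doubling time from two diameters measured period_years apart.
    tv1 = (4.0 / 3.0) * math.pi * (d1_mm / 2.0) ** 3
    tv2 = (4.0 / 3.0) * math.pi * (d2_mm / 2.0) ** 3
    return math.log(2) * period_years / math.log(tv2 / tv1)

def htv_dt(max_tumor_dia_mm, age_at_surgery_years):
    # Hypothetical TV-DT: a single 0.01-mm cell assumed present at birth.
    return tv_dt(0.01, max_tumor_dia_mm, age_at_surgery_years)

# A 24-mm tumor at age 16.5 (the study medians) gives an hTV-DT near 0.5 years.
print(round(htv_dt(24.0, 16.5), 2))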
The present study was approved by the Ethical Committee at Kuma Hospital. Tg-antibody was tested with a radio-immunoassay (Thyroglobulin (Tg) Autoantibody RIA kit, RSR, Pentwyn Cardiff, UK) until March 2008 and with an electrochemiluminescence immunoassay (Elecsys Anti-Tg kit, Roche Diagnostics) since April 2008. Patients with a detectable test result of either of these tests were excluded from the present study.
Statistical Analysis.
Two patients had negative Tg-DT values, as described in the Results. This caused a discontinuity problem between the patients with positive and those with negative Tg-DT values. In order to resolve the discontinuity problem, we performed statistical analyses on the reciprocal of the DT (i.e., 1/DT), as Barbet et al. described [10]. Differences in 1/DT were evaluated using the Wilcoxon signed-rank test. Categorical variables were compared using the Kruskal-Wallis test. These analyses were performed with StatFlex version 6.0 software. All statistical tests were two-sided, with the level of significance set at a p value of < 0.05.
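The reciprocal transformation removes the discontinuity, since 1/DT passes smoothly through zero between growth (positive DT) and regression (negative DT). A sketch of the paired comparison with SciPy; the numbers here are placeholders, not the study data:

import numpy as np
from scipy.stats import wilcoxon

tg_dt = np.array([2.0, 5.3, 12.0, 147.0, -8.0])  # placeholder Tg-DTs (years)
htv_dt = np.array([0.5, 0.4, 0.6, 0.3, 0.5])     # placeholder hTV-DTs (years)

# Compare growth rates via reciprocals, following Barbet et al. [10].
stat, p_value = wilcoxon(1.0 / tg_dt, 1.0 / htv_dt)
print(stat, p_value)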
Results
There were 12 girls and two boys aged 7-18 years with a median of 16.5 years (Table 1). Their tumor sizes ranged from 13 to 46 mm (median 24 mm). All 14 patients underwent central compartment dissection, and 11 patients underwent unilateral (seven patients) or bilateral (four patients) modified neck dissection as well. All but two of the 14 patients had pathological node metastasis. Only two patients received RAI ablation, 30 mCi and 100 mCi, respectively. Three other patients underwent whole body scintigraphy with a small dose of RAI. One of these five patients showed an accumulation to the lymph node recurrence. The remaining four patients showed no abnormal uptakes.
Five patients developed lymph node recurrences, which were treated surgically. The postoperative follow-up period ranged from 3.
Discussion
Childhood and adolescent PTCs are a mysterious type of cancer. Mazzaferri and Kloos reported that PTC patients aged ≤19 years had high incidences of local and distant recurrences, although their mortality from thyroid cancer was low, and that PTC patients aged ≥60 years had both high recurrence rates and high mortality from thyroid cancer [2]. The latter phenomenon in the elderly sounds natural, but the former phenomenon in the youth is confusing. It is well known that young patients with PTC tend to have large tumors, frequent nodal metastasis, and even pulmonary metastasis. However, mortality from thyroid cancer is surprisingly and disproportionately low in young patients despite an advanced disease status. The most likely explanation for this phenomenon might be that PTCs in young patients are very sensitive to RAI treatment. However, in Japan, the use of thyroid ablation with RAI after thyroidectomy is not common. Of the present 14 patients, only two (14%) received thyroid ablation; three received whole body scintigraphy with a small dose of RAI, and nine patients received no RAI at all.
We can express the growth rates of cancers with serum tumor marker-doubling times or with TV-DTs calculated from tumor sizes on serial measurements of structural disease such as pulmonary metastases. These values can usually be obtained only postoperatively. There is generally no direct method to evaluate preoperative tumor growth rates. In the present study, we estimated the preoperative TV-DT using the tumor size and the patient's age at surgery, presuming that a single cancer cell 10 μm in diameter was present at the patient's birth. The actual time of the origin of the tumor would be after the patient's birth. Thus, the actual preoperative TV-DT should be smaller, or the actual preoperative growth should be more rapid. One might argue that the growth of a tumor may not have been constant. If there were slow growth periods, there should have been rapid growth periods for the tumor to reach its size at surgery. This possibility does not contradict the present contention that the growth of PTCs of young patients spontaneously slows down postoperatively.
In this paper, we describe that 12 young patients with PTC had rather long Tg-DTs and the remaining two had negative Tg-DT values, all of which were significantly longer than the hTV-DTs. This was the case for the patients who were not given any dose of RAI. The hTV-DTs in the present patients ranged from 0.3 to 0.6 years (median 0.5 years). The basal cohort of the present study included 78 PTC patients. The hTV-DTs in these 78 patients ranged from 0.2 to 0.6 years (median 0.5 years; data not shown in detail). These estimates suggest that the PTCs in these young patients had grown very rapidly preoperatively.
One might argue that the serum Tg detected in the present 14 patients came from the residual normal thyroid tissue and not from persistent disease. In our previous study on the Tg-DTs of 426 patients with advanced PTC, 16.2% of the patients showed a decrease in serum Tg over time, resulting in negative Tg-DTs [18]. To address those findings, we studied serum Tg values in 27 consecutive patients with medullary thyroid carcinoma who underwent total thyroidectomy. The postoperative serum Tg level was <0.5 ng/ml in 22 patients (excluding the five patients with positive Tg-antibody) [19]. This suggests that the serum Tg detected in the patients who underwent total thyroidectomy at Kuma Hospital was most unlikely from the residual normal thyroid tissue. Interestingly, the proportion of patients with a negative Tg-DT decreased with age: 20.2% in the patients aged <40 years, 18.4% in the patients aged 40-60 years, and 11.4% in the patients aged ≥60 years [20]. These data also indicate that a postoperative decrease in serum Tg is rather common in young PTC patients.
Pediatric neuroblastoma (stage 4S) is known as a tumor with spontaneous regression. Several groups reported that pediatric astrocytomas also regressed spontaneously [21][22][23][24]. Spontaneous tumor growth slowing and even tumor regression in childhood or adolescent patients with PTC might be rather common phenomena.
There are several limitations in this study. The study design was retrospective, and the number of patients was small at 14. However, these patients were recruited from the 78 patients who underwent total thyroidectomy for PTC during an 18-year period at a high-volume hospital for thyroid diseases. Although we determined the Tg-DTs in the 14 patients, none of these patients had structural disease; therefore, the TV-DTs in these patients were not available. In order to look into the tumor growth before surgery, we propose that the hTV-DT be used. The results of our analyses indicate that the preoperative tumor growth rate was faster than the postoperative growth rate indicated by the observed Tg-DT in these patients. However, this finding should be tested in future studies.
Conclusion
The Tg-DTs in the present 14 PTC patients aged ≤18 years were significantly and definitely longer than their hTV-DTs, irrespective of the use of RAI. Two of the patients showed a decrease in serum Tg values over time without the use of RAI. The present data suggests that the growth of the PTCs in these children and adolescents spontaneously slowed down or even regressed postoperatively.
Additional Points
One might think that calculations of DTs are not easy. To solve the problems encountered in the calculation of Tg-DTs and TV-DTs, we created the "Doubling Time and Progression Calculator." This can be downloaded at Kuma Hospital's website: http://www.kuma-h.or.jp/english/.
Is chronic kidney disease an adverse factor in lung cancer clinical outcome? A propensity score matching study
Background Comorbidity has a great impact on lung cancer survival. Renal function status may affect treatment decisions and drug toxicity. The survival outcome in lung cancer patients with coexisting chronic kidney disease (CKD) has not been fully evaluated. We hypothesized that CKD is an independent risk factor for mortality in patients with lung cancer. Methods A retrospective, propensity‐matched study of 434 patients diagnosed between June 2004 and May 2012 was conducted. CKD was defined as estimated glomerular filtration rate <60 mL/minute. Lung cancer and coexisting CKD patients were matched 1:1 to patients with lung cancer without CKD. Results Age, gender, smoking status, histology, and lung cancer stage were not statistically significantly different between the CKD and non‐CKD groups. Kaplan–Meier survival analysis demonstrated a median survival of 7.26 months (95% confidence interval [CI] 6.06–8.46) in the CKD group compared with 7.82 months (95% CI 6.33–9.30) in the non‐CKD group (P = 0.41). Lung cancer stage‐specific survival is not affected by CKD. Although lung cancer patients with CKD presented with an increased risk of death of 6%, this result was not statistically significant (hazard ratio 1.06, 95% CI 0.93–1.22; P = 0.41). Conclusion According to our limited experience, CKD is not an independent risk factor for survival in lung cancer patients. Clinicians should not be discouraged to treat lung cancer patients with CKD.
Introduction
Lung cancer is the leading cause of cancer death worldwide and is responsible for nearly 19.4% of all cancer deaths. 1 Lung cancer causes more deaths than breast, colon, and pancreatic cancers combined. 2 Recent cancer incidence and mortality data revealed that during 2013, 212 584 people (100 677 women) with lung cancer were diagnosed in the United States. 2 Medical and technological advances have contributed to improved life expectancy for lung cancer patients. However, the aging of the population has led to a growing prevalence of patients suffering from chronic diseases and cancer. Cancer stage is usually the most important factor affecting long-term outcome; however, comorbidities influence the care of these patients, the selection of initial treatment, and its effectiveness.
According to Na et al., patients with chronic kidney disease (CKD) have an increased risk of death of several cancers. 3 However, the literature contains contradictory results of the impact of renal dysfunction on lung cancer survival. [3][4][5] In this study, we evaluate the clinical outcomes of patients with lung cancer and coexisting CKD using a propensity-matched study. We hypothesized that CKD is a possible adverse factor for mortality in patients with lung cancer.
Methods
All adult patients (>18 years) diagnosed with lung cancer at Chang Gung Memorial Hospital, Chiayi, from June 2004 to May 2012 were included in this retrospective study. Propensity score matching was used with a 1:1 match of patients with lung cancer and coexisting CKD to patients with lung cancer without CKD based on age, gender, smoking status, histology, and lung cancer stage. Creatinine was measured at the time of the cancer work-up. CKD was defined as a calculated creatinine clearance (CrCl) < 60 mL/minute/1.73 m2 using the Cockcroft-Gault formula in the presence of proteinuria/hematuria or abnormal kidney imaging. 6 The sole presence of a CrCl of <60 mL/minute/1.73 m2 was considered non-CKD. CKD staging was conducted in accordance with the current international guidelines. 7 The Cockcroft-Gault formula, the most commonly used formula to determine renal function status in the clinical care of cancer patients, has been shown to correlate with measured CrCl in multiple disease settings. 8 A more recent formula to estimate renal function was developed by the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI). 9 The Kidney Disease Outcomes Quality Initiative recommends using the CKD-EPI creatinine equation to predict eGFRcreat in adults; however, its clinical use in routine practice has not been established. 10 Medical records and the cancer center database at Chiayi Chang Gung Memorial Hospital were cross-matched to determine the coexistence of lung cancer and CKD. The following clinical data were extracted from medical records: age at lung cancer diagnosis, gender, smoking history, symptom(s) at presentation, creatinine clearance, lung cancer histology and stage at diagnosis, primary treatment received (all treatment modalities given within three months post diagnosis), Charlson comorbidity index (CCI), and overall survival. 11 In the CCI, 19 chronic diseases are weighted according to their association with mortality, and the individual morbidity scores are summed to reach a total score. The seventh edition of the tumor node metastasis (TNM) staging system was used for lung cancer staging. 12 Clinical staging included a physical examination, chest radiography, bronchoscopy, chest computed tomography, spirometry, brain magnetic resonance imaging, bone scan, and positron emission tomography. Post-treatment follow-up was carried out at the outpatient clinic every three months for three years and every six months thereafter; the follow-up examination included chest radiography, chest computed tomography, and positron emission tomography. Patients with an incomplete medical record, without a creatinine measurement, or without pathology reports were excluded. Overall survival was measured from the day of lung cancer diagnosis until the last follow-up or the end of 2014. Patients lost to follow-up were contacted by telephone by the cancer center case manager; those not reachable by telephone were considered to have died if they were excluded from National Health Insurance. National Health Insurance offers universal coverage to more than 99% of the Taiwanese population; patients are usually excluded as a result of death, missing premium payments for longer than six months, emigration, or nationality change. The Health Promotion Administration, Ministry of Health and Welfare, Taiwan, releases an annual death report of all registered cancer cases back to each cancer center. The institutional review board of Chang Gung Memorial Hospital approved this study.
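The Cockcroft-Gault estimate itself is not printed in the text; for reference, a sketch of the standard (non-body-surface-normalized) form:

def cockcroft_gault_crcl(age_years, weight_kg, serum_creatinine_mg_dl, female):
    # Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula.
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# A 75-year-old, 60-kg woman with creatinine 1.37 mg/dL (the CKD-group median):
print(round(cockcroft_gault_crcl(75, 60.0, 1.37, female=True), 1))  # about 33.6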
Statistical analysis
The propensity score was calculated using logistic regression with CKD as the dependent variable. Propensity matching was used to select control patients based on several confounders simultaneously. 13 A caliper width of 0.2 times the standard deviation of the propensity score without replacement was used to pair match CKD and non-CKD patients. 14 Chi-square or Fisher's exact tests were used for categorical variables, and analysis of variance was used for numerical variables. Continuous variables were categorized using median values as the cut-off point for risk stratification. Age was divided into groups of <75 and >75 years of age; the CCI score was divided into <9 or >9; clinical stage was separated into stage I-IIIA versus stage IIIB-IV; and treatment modality into supportive, surgical (any therapeutic surgical resection, excluding diagnostic procedure), and medical treatment groups. Overall survival was estimated using the Kaplan-Meier method and difference in survival was calculated using the log-rank test. Cox proportional hazard analysis was used to estimate the level of significance and the relative risks with 95% confidence interval (CI). A P value of <0.05 was considered statistically significant. The clinical data was analyzed using SPSS version 21.0 (SPSS Inc., Chicago, IL, USA). Figure 1 summarizes the recruitment flow process. During the study period, 1660 lung cancer patients were diagnosed and/or treated at Chang Gung Memorial Hospital at Chiayi. One hundred and eleven patients were excluded: 55 patients had incomplete medical treatment records; 46 patients had no record of their creatinine level; and in 10 patients, lung cancer was not confirmed by our in-house pathologist. Propensity score matching was used to match lung cancer patients with coexisting CKD (CKD group) in a 1:1 ratio to lung cancer patients without CKD (non-CKD group).
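A compact sketch of the 1:1 caliper matching described above, written as greedy nearest-neighbor matching without replacement (the data layout and the greedy strategy are assumptions; the study does not specify its matching algorithm beyond the caliper):

import numpy as np
from sklearn.linear_model import LogisticRegression

def caliper_match(covariates, ckd_flag, caliper_sd=0.2, seed=0):
    # Propensity score: probability of CKD given the matching covariates.
    model = LogisticRegression(max_iter=1000).fit(covariates, ckd_flag)
    scores = model.predict_proba(covariates)[:, 1]
    caliper = caliper_sd * scores.std()
    controls = set(np.where(ckd_flag == 0)[0])
    pairs = []
    for t in np.random.default_rng(seed).permutation(np.where(ckd_flag == 1)[0]):
        if not controls:
            break
        c = min(controls, key=lambda j: abs(scores[j] - scores[t]))
        if abs(scores[c] - scores[t]) <= caliper:
            pairs.append((int(t), int(c)))
            controls.discard(c)  # 1:1 matching without replacement
    return pairs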
Demography
There was no statistically significant difference in age, gender, smoking status, histology, or lung cancer stage between the CKD and non-CKD groups. The median creatinine level was 1.37 mg/dL for the CKD group compared with 0.84 mg/dL for the non-CKD group (P < 0.001). The median CrCl in the CKD group was 48.17 mL/minute compared with 82.14 mL/minute in the non-CKD group (P < 0.001). Twenty patients in the CKD group received renal replacement therapy with hemodialysis prior to the diagnosis of lung cancer. The patients' demographic characteristics are presented in Table 1. The median CCI score was 8 for the CKD group compared with 9 for the non-CKD group (P < 0.001). Distribution of the CCI is presented in Table 2.
Treatment
The proportion of patients that received supportive treatment was much higher in the CKD compared to the non-CKD group (32% vs. 23.7%). The proportion of patients that received medical treatment was much higher in the non-CKD compared with the CKD group (67.3% vs. 58.1%). The proportion of patients that received surgical treatment was similar for both groups ( Table 1).
Chronic kidney disease (CKD) versus non-CKD survival
Kaplan-Meier survival analysis demonstrated a median survival of 7.26 months (95% CI 6.06-8.46) in the CKD group compared with 7.82 months (95% CI 6.33-9.30) in the non-CKD group (P = 0.41). Although lung cancer patients with CKD had an increased risk of death of 6%, this was not statistically significant (hazard ratio [HR] 1.06, 95% CI 0.93-1.22; P = 0.41) (Fig. 2). Survival duration in the CKD group did not differ significantly from the non-CKD group according to age, gender, smoking status, CCI score, histology, stage, or treatment (Table 3). Cancer survival for stage I-IIIA and stage IIIB and IV patients according to the severity of CKD on presentation was not significantly different. The median survival for lung cancer stage I-IIIA was 35.84 months in the non-CKD group, 24.80 months in CKD stage 3-4, and 29.01 months in CKD stage 5 with hemodialysis, respectively (P = 0.26). The median survival for lung cancer stage IIIB-IV was 6.67 months in the non-CKD group, 6.01 months for CKD stage 3-4, 3.08 months for CKD stage 5, and 2.50 months for CKD stage 5 with hemodialysis, respectively (P = 0.68) ( Table 4). Subgroup analysis of stage IV lung cancer revealed a median survival of 5.19 months in CKD patients (95% CI 3.92-6.46) and 6.34 months in non-CKD patients (95% CI 5.33-7.35; P = 0.21) ( Table 4).
Discussion
Chronic kidney disease is a common clinical condition in the elderly population. It is estimated that 44% of individuals aged 65 years or older have CKD. 15 The reported incidence of coexisting lung cancer and CKD is around 13%. 4,5 In this report, the incidence of lung cancer with coexisting chronic renal disease was 28.01% (434/1549). This higher result could be related to the high incidence and prevalence of CKD under hemodialysis in southern Taiwan (513/million and 3297/million, respectively). 16 The 1988-1994 and 1999-2004 National Health and Nutrition Examination Surveys revealed that the prevalence of CKD had increased from 5.4% to 7.7%, respectively. 17 As the global population ages, the incidence of lung cancer with coexisting CKD is expected to rise.
Lung cancer is a disease that mostly affects elderly patients. The median age at diagnosis among our study participants (CKD group 75 ± 9.44 years, non-CKD 75 ± 8.21 years) is consistent with the SEER Cancer Statistics Review, 1975-2013, which reported a median age at diagnosis of 70 years. 18 We used the median age (75 years) to evaluate the effect of age on survival. The younger group (<75) had a better survival duration than the older group (>75) (median 10.28 vs. 6.34 months in the non-CKD group and 9.79 vs. 5.39 in the CKD group; P < 0.001). The inferior survival of older patients may be related to the following factors: less protocol-specified treatment because of intolerance of side effects, either no treatment or only supportive treatment available, and ineligibility for surgical resection. Within the same age group, the difference between the CKD and non-CKD groups was not significant (P = 0.58). The survival duration of older patients after radical treatment did not differ significantly from that of younger patients. 19 Using SEER and Medicare records of early-stage lung cancer patients, Wisnivesky et al. found that women had better lung cancer-specific, overall, and relative survival than men in all treatment groups. 20 Sagerup et al. found that regardless of stage, age, period of diagnosis, and selected histological subgroups, women had better survival rates than men. 21 Our data analysis revealed different outcomes: the non-CKD group demonstrated better survival for women (18.3 months, 95% CI 14.51-22.09) compared with men (15.13 months, 95% CI 12.66-13.32; P = 0.048). In the CKD group, the gender differences were not significant: 16.08 months (95% CI 12.13-20.03) for women and 14.72 months (95% CI 12.27-17.16) for men (P = 0.43). This discrepancy warrants further investigation.
Non-small cell lung cancer is responsible for nearly 80% of lung cancers, and neuroendocrine tumors account for approximately 20% (nearly 14% being small-cell lung cancer). 22 In our study, the proportions of NSCLC and small-cell lung cancer in the CKD and non-CKD groups were similar. This pattern of distribution is similar to a previous report of lung cancer patients with associated CKD. 5 Lung cancer is usually recognized late in the disease course. The proportion of patients in our study with stage IIIB or IV at presentation was similar between the groups. Cancer stage-specific survival according to the presence or absence of CKD was not statistically significantly different between the groups. We evaluated the effect of CKD in patients with stage IV lung cancer and found that median survival rates did not differ significantly: 6.34 months (95% CI 5.33-7.35) for the non-CKD group compared with 5.19 months (95% CI 3.92-6.46) for the CKD group (P = 0.21). Further survival analysis according to the different stages of renal impairment (non-CKD, CKD 3, CKD 4, CKD 5, and CKD 5 under renal replacement therapy) in patients with lung cancer stage I-IIIA and stage IIIB-IV was not statistically significantly different. However, survival rates did differ significantly between the CKD and non-CKD patients according to cancer stage (Table 4). Regarding the treatment modality, the proportion of patients receiving supportive treatment was much higher in the CKD group than in the non-CKD group (32.5% vs. 23.7%). The median survival in CKD patients was 2.40 months for supportive care, 8.70 months for medical treatment, and 41.75 months for surgical treatment (P < 0.001). Surgical resection is usually recommended for early lung cancer stages. Although surgical treatment offers the best chance of cure, the comorbidity and physical condition of patients with CKD could render those early-stage patients medically inoperable.
Comorbidity has a significant influence on the treatment selection and survival of cancer patients. A recent article by Iachina et al. evaluated the impact of the individual components of the CCI on lung cancer survival. 23 In their report, cardiovascular disease, diabetes, cerebrovascular disorders, and chronic obstructive pulmonary disease had a significant impact on the survival of NSCLC patients. 23 However, as renal disease and other comorbidities were grouped together, the independent effect of CKD was not evaluated. 23 Marcus et al. found that higher comorbidity severity was associated with higher lung cancer-specific mortality, and that a higher CCI score determines an increased risk of lung cancer-specific mortality. 24
Figure 2: Kaplan-Meier survival curve for lung cancer patients with and without chronic kidney disease (CKD). The median survival was 7.82 months (95% confidence interval 6.33-9.30) in the non-CKD group compared with 7.26 months (95% confidence interval 6.06-8.46) in the CKD group. Log-rank test: P = 0.41.
Because every patient in the CKD group in this study had at least one type of cancer and renal impairment, the total number of comorbidities was not used. In our cohort, the independent morbidity scores were summed for a total score and stratified according to the median score. In the CKD group, the median survival for patients with a CCI > 9 was 6.11 months compared with 9.99 months in patients with a CCI < 9 (P = 0.002). However, in the non-CKD group, the difference was not significant, with survival of 7.52 months for CCI > 9 compared with 8.67 months for CCI < 9 (P = 0.09). The CCI score HR in the adjusted model was 1.07 (95% CI 0.87-1.32; P = 0.53). Although the CCI score did not reach statistical significance, we believe that the number and severity of comorbidities influenced treatment selection in these patients. Lung cancer is a deadly disease with a five-year survival rate of only 17.7%. 18 Moderate renal dysfunction (estimated glomerular filtration rate <60 mL/minute) is associated with an increased overall mortality rate of 12% for several types of cancer, but not lung cancer, independent of other known risk factors. 3 Similar survival rates between CKD and non-CKD lung cancer patients were reported by Patel et al. in a small retrospective report (n = 107), in which all patients with CrCl <90 mL/minute (mean CrCl of 71 mL/minute) were included. 5 We believe that the renal function in this group of patients was too good to be categorized as CKD. Our results revealed median survival of 7.26 months in patients with CKD and 7.82 months in those without CKD (P = 0.41). Although lung cancer patients with CKD presented with an increased risk of death of 6%, this result was not statistically significant (HR 1.06, 95% CI 0.93-1.22, P = 0.41) (Fig. 2). CKD was not an independent predictor of lung cancer survival. In the adjusted model for the CKD group, Cox proportional hazard analysis revealed that the risk of death increases almost two-fold for patients with stage IIIB-IV (HR 1.93, 95% CI 1.38-2.70; P < 0.001). With medical treatment as the reference, patients receiving palliative treatment have a nearly two-fold increased risk of death (HR 1.98, 95% CI 1.60-2.46), while in those receiving surgical treatment the likelihood of death decreases by 55% (P < 0.001).
There are several limitations to our study that need to be addressed. The retrospective design, lack of standardization and overlapping of treatment, and the relatively small number of patients included may have an influence on the survival outcomes.
In our limited experience of Taiwanese patients, CKD is not an independent risk factor for lung cancer survival. Lung cancer stage and the treatment provided are the major determinants of survival. Patients with good physical performance should be aggressively treated to achieve a reasonable outcome.
Calculations of the Spread of the COVID-19 Epidemic in New York City Based on the Analytical Model
A detailed description of the model for calculating epidemic spread under conditions of lockdown and mass vaccination of the population is given (the ASILV model). The proposed analytical model adequately describes the development of the epidemic in New York City. The estimates of the total number of infected persons and the seven-day incidence rate made using the proposed model correlate well with the observed data in all stages of epidemic growth. Model calculations of the spread of the epidemic under different vaccination rates allowed an assessment of the effect of vaccination on the growth of the epidemic. Analysis of seven-day incidence curves at different vaccination intensities led to the preliminary conclusion that at vaccination rates above a minimum value, the emergence of new strains did not lead to a growing epidemic.
Introduction
Most models used to calculate the epidemic offer only numerical methods for solving. We have developed a simple and versatile analytical model [1, 2, 3, 4, 5], which enables us to quickly analyse the distribution patterns of the coronavirus epidemic.
The control calculations performed have shown a high degree of accuracy for widely varying populations, ranging from small areas of Berlin to large cities and a number of countries, such as the United Kingdom, South Africa, Germany and the United States. The correlation coefficients between the respective estimated and statistical curves reach values between 0.94 and 0.99.
The model was further developed to take into account the effects of abrupt changes in lockdown conditions and mass vaccination of the population. Comparison of the results of calculations by this modified model with data from statistical observations also shows good agreement [6,7].
The analytical model using functional relationships between the main parameters determining the development of the epidemic makes it possible to assess the effectiveness of limiting the development of the epidemic through both lockdown and vaccination.
Despite some successes in using the proposed simple analytical model, given its great potential due to its higher speed and simplicity of application compared to currently widely used numerical models, there is a need to clarify its possible limitations, in particular related to some initial assumptions made in deriving the model equations.
Methodology
Let us write the initial differential equations of the epidemic model, taking into account the impact of lockdown and mass vaccination on the epidemic spread, as in [6], where:

λ is the intensity factor of the decrease in contacts of infected patients with persons who could potentially become infected, achieved by means of quarantine and other preventive measures;

v is the population vaccination rate (1/day);

α is the coefficient of vaccine effectiveness.
In the tradition of mathematical modelling in epidemiology, this model will hereafter be referred to as ASILV for short. This name underlines the main features of this model, namely: S is the part of the population that is not yet infected, but which could become infected through contact with infected individuals, I is the part of the population that has already been infected at time t, L is the part of the population that is protected from infection by lockdown measures at the time in question, but which could potentially also become infected later if the lockdown conditions change, V is the part of the population that is protected from infection by vaccination, the effectiveness of which α may vary.
Equation (1) defines the change in the number of persons potentially susceptible to the virus under conditions of lockdown and mass vaccination of the population. The denominator in the last summand of equation (1) takes into account that as the proportion of the vaccinated population αv·t increases, the degree of impact of vaccination on the declining epidemic increases. The coefficient of effectiveness α depends both on the type of vaccine and on the vaccination dose (first or second). We will assume that the maximum vaccination level will not exceed (αv·t)max ≤ 0.8, i.e., that with 80% vaccination coverage the epidemic cannot develop. This is a natural limitation of the proposed model. However, we have to take into account that some part of the population will already have had the disease, either explicitly or asymptomatically, by the time mass vaccination begins.
The solution to equation (1) is given by equation (3). After substituting (3) into (2), solving the resulting equation, performing transformations, and moving to a relative number of infections, we obtain the basic calculation equations.
For the period from the outbreak to mass vaccination, that is, for t ≤ t_v, when v = 0, the solution of equation (2) takes the form of equation (4), where i is the relative number of infected persons per inhabitant of the settlement in question, as a percentage; i_0 is the value of i at the initial moment of the calculation period; and K is the transmission rate coefficient for a settlement with a population of N, calculated by formula (5). The K coefficient also depends on the transmissibility of the virus strain responsible for the epidemic spread during the period under consideration. The value of the first summand in (5) was obtained for the first and second waves of the virus epidemic; for later virus strains, we assume a higher value of 0.37. In the case where the spread of infection is associated with several virus strains, the calculated dependence is written as equation (6). Under conditions of mass vaccination, when t ≥ t_v, that is, when v > 0, the solution takes the form of equation (7). The calculations are performed first by (4) or (6) and then by (7) for the time period during which vaccination is carried out. The same equation (7) is used to calculate the spread of the epidemic under the condition of an abrupt change in vaccination rate, which was typical of many European countries, Germany for example. The model equations presented were originally given in [7].
In [7], an attempt is made to relate the model coefficient λ to the effectiveness of the lockdown conditions. Let us specify the relationship between this coefficient and the parameter L characterizing the level of reduction in the rate of growth of the epidemic due to lockdown: L = i_λ / i, where i_λ and i are the intensities of the epidemic growth under lockdown and without lockdown, respectively. For example, if the application of lockdown reduces the maximum number of infected residents by half, then the coefficient L = 1/2 = 0.5. Using dependences (4) and (5) for time t → ∞, we find the relation between the coefficient λ and the parameter L. The graph of this dependence is shown in Figure 1. This graph shows, in particular, that in the absence of lockdown the coefficient λ can be assumed to be 0.031 1/day, and that when this coefficient is above 0.042 1/day the epidemic wave is virtually suppressed by lockdown. However, this does not exclude the possibility of a new virus strain emerging when the lockdown conditions are relaxed. For the most characteristic lockdown conditions in most of Europe, the coefficient λ = 0.034-0.035 1/day; hence the L coefficient varies between 0.2 and 0.3, which means that lockdown reduces the epidemic's growth rate by a factor of 3-5.
The graph in Figure 1 can be approximated by the formula

λ = 0.0309 · L^(−0.091)    (8)

Figure 1 also shows the approximation curve (8) as a dashed line (the correlation coefficient between the approximation curve and the calculated curve is 0.9984). The relationships between the empirical model coefficients λ and K and the lockdown conditions, population vaccination rate, population size, and strain type allow the ASILV model equations to be used not only for analysis of the current epidemic but also for operational forecasting of COVID-19 disease development.
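A quick numerical check of approximation (8) (a sketch only; reproducing the underlying curve itself would require equations (4) and (5)):

def lockdown_lambda(L):
    # Approximation (8): lockdown coefficient lambda (1/day) as a function of L.
    return 0.0309 * L ** (-0.091)

print(round(lockdown_lambda(1.0), 4))   # no lockdown: about 0.031 1/day
print(round(lockdown_lambda(0.25), 4))  # typical European lockdown: about 0.035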
Results
To further investigate the effectiveness of the proposed ASILV model, we use it to analyse the course of the epidemic in New York.
Calculations for the first and the beginning of the second waves of the New York epidemic using the proposed model are given in [4].
Calculations for the first wave were performed with the coefficient λ = 0.0345 1/day. The second coefficient in the calculated dependence (4) was determined by formula (5), and for New York it turned out to be K = 0.43 1/day. From the graph in Fig. 1 or formula (8), we estimate that, at this coefficient λ, the growth of the epidemic in its initial stage was slowed by the application of a lockdown by a factor of about 3.5. It should, however, be noted that the lockdown was introduced in the city with considerable delay, only 63 days after the start of the epidemic. At that point in time, the number of infections detected (even with low testing coverage) was already reaching around 1% of the city's population. The weekly increase in the number of infected persons in the city at that point exceeded 37,000, i.e. more than 5,000 people per day. Positive results from the introduction of the lockdown could not really be observed until day 77, when the rate of spread of the epidemic began to decrease. For this very early phase of the initial wave of the epidemic in New York, the corresponding coefficient was found to be λ = 0.033 1/day, i.e. the epidemic slowed down by a factor of about 2. Calculations using equation (4) with a coefficient λ = 0.033 1/day indicate that in such a situation the maximum number of infections in the city would have reached 6% of the city's population. In reality, during the first wave of the virus, the relative number of infections did not exceed 3% of the city's population [8].
A new surge of infections was recorded in most countries in mid- and late September 2020, when a new "wave" of the virus began to spread strongly. Analysis of the statistical data [9] revealed that the start of the new infection wave in New York occurred around 18 September of the previous year. This date was taken as zero for calculations of the development of the so-called "second" and subsequent waves of the epidemic. In the calculation period between its start and 4 June 2021, the date of writing, a total of more than 260 days, new virus strains emerged and lockdown and vaccination conditions changed, and all of this had to be taken into account when using the ASILV model to calculate the spread of the epidemic in New York. Key statistics related to the COVID-19 epidemic in New York City, used later in this paper, can be found in [9] and on the official city government website [10].
Results of the epidemic spread calculations and observational data for the entire time period from the beginning of the second wave are shown in Figure 2. The calculations were performed with a time interval of 1 week (from Friday to Friday of the next week). In the first phase of the second wave, the virus transmission rate was assumed to be the same as for the first "wave", i.e. the value of the coefficient K = 0.43 1/day was kept unchanged. As for the coefficient taking lockdown conditions into account, it was assumed to be λ = 0.035 1/day. This value is the most typical for large European cities under standard lockdown conditions.
In general, the calculated curve at the start of the second wave satisfactorily describes the actual spread of the epidemic in the city. However, around day 60 of the outbreak (or around 15 November), according to [11], the first signs of introduction of a new virus strain into the city, identified as variant B.1.526, appeared. The main virus strain determining the development of the epidemic in this period was the so-called "British" strain B.1.1.7. The spread of this new wave of the epidemic was calculated using equation (6) with a constant coefficient λ = 0.035 1/day and a slightly increased coefficient K = 0.45 1/day (allowing for the increased transmissibility of these strains).
At the end of December and the beginning of January, owing to the Christmas and New Year holidays, the lockdown conditions were relaxed. This has been taken into account by decreasing the coefficient λ for a short period, from December 18, 2020 to January 8, 2021 (from day 91 to day 112), to a minimum value of λ = 0.032 1/day. The same value of λ was adopted in [7] in the analysis of the epidemic over the same time period in Berlin.
From mid-January, immediately after the holidays, there is a sharp increase in the intensity of the epidemic, which was taken into account in the calculations by the introduction of a new wave of increasing infection.
Simultaneously, mass vaccination of the population began in mid-January; the period from mid-January to early June 2021 (from 112 to 252 days after the start of the second wave of the epidemic) was calculated using equation (7). The effective vaccination rate was calculated as an average over the whole vaccination period, αv = α1·v1 + α2·v2. The vaccination rates v1 and v2 for vaccine doses 1 and 2 were calculated from the data given in [10] as the ratio of the percentage of the vaccinated population to the total period of mass vaccination. The BioNTech-Pfizer and Moderna efficacy ratios for the first and full doses of vaccination were taken to be α1 = 0.7 and α2 = 0.92, respectively [12]. By June 1 of this year, over 49% of the city population was fully vaccinated, and about 10% had received only the first dose. The vaccination period is about 170 days; hence the average effective vaccination rate αv is about 0.003 1/day. Model coefficients were assumed to be λ = 0.035 1/day and K = 0.45 1/day. For the final period starting January 15, 2021, the estimated spread curve shown in Fig. 2 also differs slightly from the one based on statistical data. A more detailed analysis of the data shows, however, that by late March or early April 2021 a slight increase in the intensity of the epidemic can be observed. Vaccination helped to compensate for these changes, which is why we did not need to analyse these features further in this work.
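The arithmetic behind the αv ≈ 0.003 1/day figure can be checked directly; the additive form αv = α1·v1 + α2·v2 is inferred from the text, since the original expression was lost:

fully_vaccinated = 0.49  # fraction of the population with the full course
first_dose_only = 0.10   # fraction with only the first dose
period_days = 170        # length of the mass-vaccination period

alpha_1, alpha_2 = 0.7, 0.92  # efficacy ratios for dose 1 and the full course

v1 = first_dose_only / period_days
v2 = fully_vaccinated / period_days
print(round(alpha_1 * v1 + alpha_2 * v2, 4))  # about 0.0031 1/day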
Discussion
The calculation results of the proposed ASILV model agree satisfactorily with the statistical data for both the first wave of the epidemic and the subsequent waves (Figure.2). The correlation coefficient between the calculated and statistical data for the second and subsequent waves is R = 0.9991.
Using standard EXCEL software, it is also possible to quickly establish a seven-day incidence rate based on the calculated model; this is one of the main characteristics determining the growth of an epidemic, accepted in many countries as one of the main criteria for deciding whether a lockdown can be mitigated. Figure 3 shows a comparison of the estimated and observed seven-day epidemic incidence for the second and subsequent epidemic waves (per 100,000 people). In general, the calculated values of the seven-day incidence do not differ significantly from those obtained from measurements; however, at two points in time the deviations between the curves are striking, with growth rates from about 10 January outpacing the calculated values and reaching peak values of about 600 infected persons. In comparison, the lockdown rules in Germany can be partially relaxed when the incidence rate is kept below 25 for a prolonged period of time. A second peak in the incidence value was observed in early April, but it was neutralised to values of around 400 by vaccination. Of particular interest is the sharp rise in the epidemic after the end of the Christmas and New Year holidays. The same sharp increase was observed in most European countries; it can be assumed that a significant weakening of the lockdown conditions during the festive period could trigger a new wave of the epidemic. That is, the weakening of the lockdown was the root cause of the new wave in the following period of time. The increase in infections may also have been amplified after B.1.1.7 (alpha, in the new virus classification) was introduced into the USA in January. According to virologists, this virus continued to be the most widespread strain in New York for many months. Analysing the causes of the new waves of the epidemic is now a major challenge which will make it possible to improve the response to the epidemic.

Calculations of the spread of the epidemic for different vaccination intensities are shown in Figure 4; in this figure, the corresponding values of vaccination intensities are shown in brackets. The curve for which no intensity is given corresponds to the conditions of the above calculation, i.e. αv = 0.003 1/day. The upper curve was calculated assuming no vaccination. As might be expected, with increasing vaccination intensity, the maximum number of infected persons decreases and the duration of the epidemic decreases.
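For reference, the seven-day incidence used throughout is obtained from the cumulative case curve in the standard way (a sketch; the paper performs this step in EXCEL):

import numpy as np

def seven_day_incidence(cumulative_cases, population):
    # New cases over the trailing 7 days per 100,000 inhabitants.
    c = np.asarray(cumulative_cases, float)
    return (c[7:] - c[:-7]) / population * 100_000

# Toy daily cumulative counts for an 8.4-million city (about 3,570 new cases/day):
cases = np.linspace(0, 50_000, 15)
print(seven_day_incidence(cases, 8_400_000)[:2])  # about [297.6, 297.6]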
Figure 5: Effect of vaccination intensity on incidence magnitude
The effect of vaccination intensity on the spread of the epidemic can be identified more clearly by considering changes in the value of the incidence. Incidence calculations (per 100,000 inhabitants) for different vaccination intensities are presented in Figure 5.
This figure shows that without vaccination and with low vaccination intensities, the epidemic continues to develop for some time. With αv = 0.003 1/day and above, the magnitude of the incidence decreases immediately at the start of the new epidemic wave. With increasing vaccination time, the effect increases, so that for this or a higher value of vaccination intensity, it can be considered unlikely that an epidemic will develop with the emergence of a new strain of the virus. It is estimated, therefore, that αv ≥ 0.003 1/day would be the minimum intensity at which an epidemic in New York City could be excluded.
An analysis of incidence data for the city of Berlin [7] provides indirect support for the assumption of a threshold minimum vaccination intensity. Although vaccination in this city had begun in mid-January, there was a steep rise in the epidemic in mid-March this year associated with the emergence and development of the "British" strain of the virus. The vaccination intensity for the period from January to April did not exceed αv of about 0.0019 1/day. It was only when the vaccination intensity in the city increased sharply to 0.0055 1/day, i.e. from mid-April onwards, that it was possible to reverse the trend of the epidemic.
Given that the lockdown conditions in New York are fairly typical for most European cities and countries (λ = 0.035 1/day), one can take the value of the vaccination intensity obtained as the minimum αv value for the average European area. The problem of choosing this minimum value, however, needs further study and clarification.
Conclusions
1. The proposed analytical model adequately describes the development of the epidemic in New York under various lockdown conditions and under mass vaccination of the population. As in previous papers, the control calculations are in good agreement with the observational data at all stages of the epidemic growth.
2. The incidence estimates for a seven-day period using the proposed model were in good agreement with observations, both for time periods when only the lockdown was observed and when mass vaccination was additionally administered.
3. Model simulations of epidemic spread with different vaccination rates, holding other conditions constant, allowed us to assess the impact of the vaccination rate on the epidemic's development.
4. Analysis of the seven-day incidence curves at different vaccination rates gave a preliminary conclusion that when αv ≥ 0.003 1/day, the emergence of new virus strains did not cause an increase in the epidemic.
A novel splice mutation induces exon skipping of the EXT1 gene in patients with hereditary multiple exostoses
The molecular mechanism of hereditary multiple exostoses (HME) remains ambiguous and a limited number of studies have investigated the pathogenic mechanism of mutations in patients with HME. In the present study, a novel heterozygous splice mutation (c.1284+2del) in exostosin glycosyltransferase 1 (EXT1) gene was identified in a three-generation family with HME. Bioinformatics and TA clone-sequencing indicated that the splice site mutation would result in exon 4 skipping. Reverse transcription-quantitative polymerase chain reaction (RT-qPCR) revealed that the expression levels of wild-type EXT1/EXT2 mRNA in patients with HME were significantly decreased, compared with normal control participants (P<0.05). Abnormal EXT1 transcript lacking exon 4 (EXT1-DEL) and full-length EXT1 mRNA (EXT1-FL) were overexpressed in 293-T cells and Cos-7 cells using lentivirus infection. RT-qPCR demonstrated that the expression level of EXT1-DEL was significantly increased, compared with EXT1-FL (17.032 vs. 6.309, respectively; P<0.05). The protein encoded by EXT1-DEL was detected by western blot analysis, and the level was increased, compared with EXT1-FL protein expression. Immunofluorescence indicated that the protein encoded by EXT1-DEL was located in the cytoplasm of Cos-7 cells, which was consistent with the localization of the EXT1-FL protein. In conclusion, the present study identified a novel splice mutation that causes exon 4 skipping during mRNA splicing and causes reduced expression of EXT1/EXT2. The mutation in EXT1-DEL generated a unique peptide that is located in the cytoplasm in vitro, and it expands the mutation spectrum and provides molecular genetic evidence for a novel pathogenic mechanism of HME.
Introduction
Hereditary multiple exostoses (HME), also termed hereditary multiple osteochondroma, is an autosomal dominant inherited disease characterized by the development of multiple exostoses, predominantly located on the limbs, shoulder blades, ribs, and pelvis (1). Osteochondromas are frequently adjacent to the growth plates of bones, and can increase in size and number until growth plates close as the child stops developing. The exostoses can result in numerous health problems, including skeletal bowing and deformities, growth restriction, and nerve and blood vessel compression (1,2). As a common benign bone tumor, HME is estimated to occur at a rate of 1/50,000 cases; however, HME progresses into chondrosarcomas or osteosarcomas in ~2% of patients (3)(4)(5)(6).
Heterozygous germline mutations in the exostosin glycosyltransferase 1 (EXT1) and EXT2 genes are exhibited in >90% of HME cases (7)(8)(9). Although no EXT1 or EXT2 germline mutations are detected in certain cases, somatic mosaic mutations were identified in one HME case (10). In patients with HME, 10% of mutations are spontaneous and 90% of affected individuals have a family history of HME (11). Additionally, 80% of mutations in patients with HME are truncation mutations, including nonsense, frameshift and splice site mutations, which commonly introduce premature stop codons during translation, or result in partial or entire loss of gene function (9). EXT1 and EXT2 are tumor suppressor genes that encode glycosyltransferases (7,12). EXT1 and EXT2 form a hetero-oligomeric complex in the Golgi body that catalyzes chain elongation during the biosynthesis of heparan sulfate (HS) (13). HS has a key role in chondrocyte proliferation and endochondral ossification (14). Therefore, heterozygous mutations in the EXT1 or EXT2 gene theoretically result in a reduction in systemic HS levels by ~50% in HME individuals. However, it has been reported that haploinsufficiency may not always result in osteochondroma formation. When HS levels are significantly decreased, but not lost completely, a second event, such as loss-of-heterozygosity or compound heterozygous mutations, appears to be the major cause of HME development, which has been confirmed in animal models and also reported in a number of patients with HME (15)(16)(17)(18)(19).
Recently, 436 mutations in EXT1 and 223 mutations in EXT2 have been reported in the Multiple Osteochondroma Mutation Database (http://medgen.ua.ac.be/LOVDv.2.0/home.php), including various splicing mutations (11). Alternative splicing is ubiquitous in mammals, and is a major contributor to molecular diversity and complexity, and gene regulation; additionally, alternative splicing is required for numerous critical biological processes in development and disease, including regulation of cell growth, hormone responsiveness and cancer (20,21). However, once mutations exist in splicing elements or splicing signal sequences, particularly at 3' and 5' splice sites, normal splicing of mRNA and translation will be disrupted, which can cause exon skipping or aberrant splicing, where new splicing sites are created, resulting in truncated proteins with potentially reduced expression and function (21). For example, dysregulation of alternative splicing has been demonstrated to be associated with various human diseases, including cancer, muscular dystrophies, neurodegenerative diseases and obesity (21).
A number of splicing mutations have been detected in patients with HME, and the molecular mechanisms are reported to involve the creation of new splice sites or exon skipping due to splicing mutations, resulting in early termination of translation and the degradation of truncated peptides via nonsense-mediated mRNA decay (NMD) (22,23). In the present paper, a splice mutation in EXT1 (c.1284+2del) was identified in a three-generation Chinese family with HME. Skipping of EXT1 exon 4 was verified by TA cloning and sequencing of EXT1 mRNA from the patients with HME. No premature stop codon was produced by the skipping of exon 4; however, the expression levels of EXT1/EXT2 mRNA were notably reduced in the patients, as indicated by reverse transcription-quantitative polymerase chain reaction (RT-qPCR). In vitro, the truncated mutant protein was detected in the cytoplasm when expressed in Cos-7 cells. Thus, whether mutant EXT1 or EXT2 proteins are biologically functional requires further research, but the decrease in the expression of wild-type EXT1/EXT2 proteins will hinder the process of HS polymerization and chain elongation.
Materials and methods
Study participants, cell culture and reagents. Peripheral blood samples were collected from a Chinese family with HME in three generations from May 2013 to March 2015 (Fig. 1A). The HME diagnosis was made according to their clinical manifestations and physical examinations, including X-ray, computed tomography and pathological sections (24). Osteochondroma tissue was stained with hematoxylin and eosin for 5 min at room temperature. In the present study, two patients with HME (III 1, the proband, male, 31 years; II 2, the mother of the proband, female, 62 years), one normal family member (II 1, the father of the proband, male, 65 years) and one healthy individual (normal physical examination, male, 31 years) were enrolled in mutation analysis of the EXT1 and EXT2 genes. The proband (III 1) was the first person in the family with HME who required surgical intervention, and was an inpatient of the Department of Bone Tumors of Fuzhou Second Hospital (Fuzhou, China); the healthy individual was an inpatient at the Medical Examination Center of Fuzhou Second Hospital. The samples were collected together when the proband was in hospital for surgery. Written informed consent was obtained from all participants, and the study was approved by the Ethics Committee of Fuzhou Second Hospital [approval no. (2014) 63].
293-T and Cos-7 (originating from African green monkey kidney fibroblasts) cell lines (Cell Bank of the Chinese Academy of Sciences, Shanghai, China) were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (both from Gibco; Thermo Fisher Scientific, Inc., Waltham, MA, USA), and incubated at 37˚C in an atmosphere containing 5% CO2. Cells were passaged at 80% confluency by digestion with trypsin (Gibco; Thermo Fisher Scientific, Inc.).
Mutation screening for EXT1 and EXT2 genes. Genomic DNA of the participants was extracted from peripheral blood according to the procedures of the SE Blood DNA kit (Omega Bio-Tek, Inc., Norcross, GA, USA). DNA samples were used for mutation screening of the coding exons and the adjacent introns of the EXT1 (GenBank NG_007455.2) (https://www.ncbi.nlm.nih.gov/nuccore/NG_007455.2) and EXT2 (GenBank NG_007560.1) (https://www.ncbi.nlm.nih.gov/nuccore/NG_007560.1) genes using previously reported primer sequences (22,25). The products of the amplified sequences were observed on a 2% agarose gel and purified with HiBind® columns using an E.Z.N.A.® Cycle-Pure kit (Omega Bio-Tek, Inc.). Bidirectional sequencing was performed on purified products using an ABI 3730 XL genetic analyzer (Applied Biosystems; Thermo Fisher Scientific, Inc.). Any candidate pathogenic mutation identified by sequencing was checked against the ExAC database (http://exac.broadinstitute.org/gene/ENSG00000182197) to determine whether it was a novel or a previously reported variant.

Bioinformatics analysis and prediction. Several web-based programs with different algorithms were used to analyze the potential effect of mutations on exon splicing. Mutation Taster and Protein Variation Effect Analyzer (PROVEAN) were selected for pathogenicity prediction. Mutation Taster uses a Bayes classifier to predict the disease potential of an alteration (mutationtaster.org/) and PROVEAN is a software tool that predicts whether an amino acid substitution or indel has an impact on the biological function of a protein (provean.jcvi.org/index.php). The CRYP-SKIP algorithm (http://cryp-skip.img.cas.cz/) uses multiple logistic regression to predict the two aberrant transcripts from the primary sequence, and was applied in the present study to estimate the probability of cryptic splice-site activation (P) and exon skipping (1-P) due to a splicing mutation (26). The Berkeley Drosophila Genome Project (BDGP) (http://www.fruitfly.org/about/index.html) algorithm accurately distinguishes between donor and acceptor sites using a generalized hidden Markov model. As a splice site prediction program, it predicts cryptic splice sites and highlights changes in splice sites following input of a mutant sequence. The Human Splicing Finder (HSF) (http://www.umd.be/HSF3/) is an online tool that uses various algorithms to predict the effects of mutations on splicing signals or to identify splicing motifs in any human sequence. It has been previously used to predict the effects caused by splicing mutations (27).
Analysis of EXT1 and EXT2 mRNA. Total RNA was obtained from the venous blood of the two patients with HME and normal controls according to the instructions of a QIAamp RNA Blood Mini kit (Qiagen GmbH, Hilden, Germany). Total RNA (5 µg) was reverse transcribed into cDNA using random primers with a PrimeScript™ 1st Strand cDNA Synthesis kit (Takara Biotechnology Co., Ltd., Dalian, China), and the synthesized cDNA was used as a template for PCR amplification of EXT1 with the following primers: 5'-atgcaggccaaaaaacgctatt-3' (forward); and 5'-tcaaagtcgctcaatgtctcg-3' (reverse). LA Taq (Takara Biotechnology Co., Ltd.) was used as the DNA polymerase under the following conditions: initial denaturation at 94˚C for 5 min; 30 cycles of 94˚C for 30 sec, annealing at 53˚C for 30 sec and extension at 72˚C for 45 sec; and a final extension at 72˚C for 10 min (ABI 2720 Thermal Cycler; Applied Biosystems; Thermo Fisher Scientific, Inc.). The PCR products were separated by 1% agarose gel electrophoresis and visualized using ethidium bromide staining (Sangon Biotech Co., Ltd., Shanghai, China) for ~1 h at room temperature. The products were purified using HiBind® columns and then cloned into the PGEM-T Easy vector (Promega Corporation, Madison, WI, USA). Ligation products were transformed into XL1-blue bacteria and cultured on ampicillin/X-gal/IPTG plates. Following transformation, ~50 positive clones were selected randomly from the proband, the mother and the normal control, then cultured separately in a shaker overnight at 37˚C. Following extraction, the plasmids of the three groups were sequenced to search for potential abnormal alternative transcripts in the individuals with HME, and the percentage of mutated transcripts was evaluated with respect to the normal control.
Lentiviral transduction in vitro and observation of growth condition.
The full-length coding sequence of the EXT1 gene (EXT1-FL) was amplified using cDNA from the peripheral blood of a normal participant, and the abnormal mutant transcript of EXT1 (EXT1-DEL) was amplified using the aforementioned recombinant plasmids from TA cloning and sequencing as the template, which matched the NCBI reference sequence (GenBank NM_000127.2), except for exon 4. The products of the amplified transcripts were confirmed by sequencing. GV358-EXT1-FL and GV358-EXT1-DEL lentiviral vectors were constructed, and were amplified and titrated based on the manufacturer's instructions (Shanghai GeneChem Co., Ltd., Shanghai, China) (29). A GV358-GFP vector was used as a negative control (NC). 293-T and Cos-7 cells were seeded in 6-well plates at a density of 5x10 5 cells/ml of DMEM and then co-infected with GV358-EXT1-FL, GV358-EXT1-DEL and empty vector (GV358-GFP vector), which were mixed with Polybrene (Shanghai GeneChem Co., Ltd.) at a multiplicity of infection of 10. Cell infection efficiency and growth state were assessed by observation of green fluorescent protein (EXT1-GFP fusion protein) and cell morphological characteristics at 24 h after infection using a fluorescence microscope (x200 magnification; Olympus Corporation, Tokyo, Japan).
Protein extraction and western blotting. Total protein was isolated from cultured 293-T cells and Cos-7 cells at 72 h after infection. The protein concentration was measured using a Bicinchoninic Acid Protein Assay kit (Beyotime Institute of Biotechnology, Shanghai, China) and 30 µg protein from each group was separated via SDS-PAGE on 10% gels, then blotted onto polyvinylidene difluoride membranes (cat. no. IPVH00010; EMD Millipore, Billerica, MA, USA). Following electrophoresis, membranes were blocked with 5% bovine serum albumin (BSA; Thermo Fisher Scientific, Inc.) for 1 h at room temperature. Membranes were probed with mouse anti-Flag monoclonal antibody (cat. no. RLM3001; Suzhou Ruiying Biotechnology Co., Ltd., Suzhou, China; 1:3,000 dilution) and rabbit anti-human EXT1 (cat. no. ab126305; Abcam, Cambridge, UK; 1:3,000 dilution) to detect EXT1 and the novel truncated peptide, which targeted the 205-335 amino acid sequences of human EXT1, a region upstream of exon 4 (overnight at 4˚C). Peroxidase-conjugated goat anti-mouse IgG (cat. no. A0216) and goat anti-rabbit IgG (cat. no. A0208) secondary antibodies were incubated for 1 h at room temperature (both from Beyotime Institute of Biotechnology; both 1:10,000 dilution). GAPDH was also detected using mouse anti-GAPDH monoclonal antibody (cat. no. RLM3029; Suzhou Ruiying Biotechnology Co., Ltd.; 1:3,000 dilution) as an endogenous control for the western blot analysis under the same conditions aforementioned. Finally, the membranes were developed using enhanced chemiluminescence (ECL-Plus) reagent (Thermo Fisher Scientific, Inc.) and exposed to x-ray film (Carestream Health, Inc., Rochester, NY, USA).

Statistical analysis. The χ2 test was used to evaluate the statistical significance of the TA cloning and sequencing results (proportions). One-way analysis of variance with the Least Significant Difference test was used to evaluate the statistical significance of the RT-qPCR results, which are expressed as the mean ± standard error of the mean [SPSS 19.0 (IBM Corp., Armonk, NY, USA)]. P<0.05 was considered to indicate a statistically significant difference.
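As an illustration of the χ2 comparison of clone proportions described above, the following sketch uses scipy; the counts are reconstructed from the reported percentages (proband, ~66.7%; normal control, ~5%) and are approximate, since exact clone numbers were not given.

```python
from scipy.stats import chi2_contingency

# Approximate clone counts consistent with the reported proportions.
#            exon-4-skipped   normal transcript
table = [[34, 17],  # proband (~66.7% skipped clones)
         [2, 38]]   # normal control (~5% skipped clones)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2e}")
```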
Results
Clinical data of the family with HME. According to information provided by the proband, at least five individuals of the family were suspected to have multiple exostoses. However, only three members (the proband and his parents) participated in the present study and agreed to publication (Fig. 1A). The proband (III 1) had exhibited exostoses around the joints of the hips, knees, wrists, and ankles for >20 years. Furthermore, imaging (X-ray and computed tomography) and pathological sections confirmed the diagnosis (Fig. 1B and C). The mother had relatively minor symptoms according to the examination conducted.
Mutation screening and identification of a novel mutation (c.1284+2del) in EXT1.
Sequencing results of the coding region and adjacent intronic sequences in the EXT1/EXT2 genes of the proband revealed a heterozygous deletion (c.1284+2del) in intron 4 of the EXT1 gene, but no mutation was detected in the EXT2 gene (Fig. 2A). Furthermore, DNA sequencing identified the same alteration in the mother (II 2) (Fig. 2B); however, the mutation was not present in the normal father or the healthy participant (Fig. 2C and D), and it was not reported in the ExAC database. The mutation spectrum indicated that the deletion co-segregated with HME in this family. Sequence analysis indicated that c.1284+2del is an intronic variant at the 5' splice site (AGgt) of exon 4.
Abnormal splicing and exon skipping in EXT1 gene.
Predictions from multiple bioinformatics databases revealed that the splicing mutation may cause two potential effects in the mRNA: Skipping of exon 4, or loss of the primary 5' splice site (AGgt) and activation of an adjacent cryptic splice site. The CRYP-SKIP analysis revealed that the probability of exon 4 skipping was 0.69, whereas the probability of new cryptic splice site activation was 0.31 (Fig. 3A). BDGP predicted that the mutation caused the splice site to disappear at the mutational site. The HSF tool also indicated that AGgt was absent due to the mutation; however, it also suggested that a novel splice site (AAgt) emerged 3 bp downstream of AGgt. Mutation Taster predicted that c.1284+2del may disrupt normal splicing and that it was a disease-causing mutation that may affect the protein function. The novel polypeptide lacking amino acids 389 to 428 of exostosin-1 protein (I389_E428del), created by exon 4 skipping, was also indicated to be deleterious by PROVEAN analysis (data not shown).
TA cloning and sequencing of the targeted fragments of the two patients identified a notable number of abnormal transcripts with exon 4 skipping in EXT1 mRNA (proband, 66.7%; mother, 58.5%; P<0.05; Fig. 3B; Table I); however, no aberrantly spliced transcripts with a cryptic splice site were identified in the patients.
Furthermore, although the ratio of transcripts with exon 4 skipping in EXT1 was increased in the proband, compared with the mother, there was no statistically significant difference between them (P>0.05; Table I). The results from TA cloning and sequencing were almost consistent with the bioinformatics predictions, and the corresponding amino acids coded by the missing exon 4 were amino acids 389-428 of exostosin-1 protein, which form part of the conserved domain of exostosin (amino acids 110-396; Fig. 4).
Aberrantly reduced expression of EXT1/EXT2 genes. To investigate the potential effect of the splice mutation on the gene expression of EXT1/EXT2, the mRNA of EXT1/EXT2 and the abnormally spliced transcript of the EXT1 gene were assessed in the patients with HME and normal controls. To distinguish the normal EXT1 transcript from the abnormally spliced transcript (with exon 4 skipping), the downstream primer was located in exon 4 of EXT1 for detecting the normal transcript, and a second downstream primer spanning the exon 3-5 junction of EXT1 was used to detect the abnormally spliced transcript. As depicted in Fig. 5, the levels of wild-type EXT1/EXT2 mRNA in patients with HME were significantly reduced, compared with the normal control (P<0.05). The level of the mutant EXT1 transcript was significantly increased in the proband, compared with the normal control and the mother (P<0.05).
Overexpression of EXT1-GFP fusion protein and aberrantly spliced RNA in vitro. Recombinant plasmids were successfully constructed and then packaged with lentivirus vectors, as confirmed by PCR and sequencing. After 48 h of infection with lentivirus, GV358-EXT1-FL and GV358-EXT1-DEL were overexpressed in the cells. RT-qPCR indicated that the expression levels of the aberrantly spliced transcript and the full-length transcript were significantly increased, compared with the empty vector infected with lentivirus (17.032- and 6.309-fold, respectively), and the level of the aberrantly spliced transcript was significantly increased, compared with the full-length transcript (3.073-fold; P<0.05; Fig. 6).
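Fold changes of this kind are conventionally derived from qPCR cycle-threshold (Ct) values by the 2^-ΔΔCt method; the present paper does not state which quantification method was used, so the sketch below is only an assumed illustration with invented Ct values, normalized to a reference gene such as GAPDH.

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize sample to reference gene
    d_ct_control = ct_target_control - ct_ref_control   # normalize control to reference gene
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Invented Ct values chosen to give roughly a 17-fold increase.
print(round(fold_change(22.0, 18.0, 26.1, 18.0), 1))  # ~17.1
```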
Increased expression of the truncated polypeptide, with no notable changes in subcellular localization. To investigate the cellular functionality of the aberrant polypeptide, and whether its expression level and subcellular location differ from those of EXT1-FL, western blot analysis was performed and the subcellular location was determined using a laser scanning confocal microscope in cells expressing the lentiviral constructs. Western blotting revealed that the aberrant polypeptide was expressed by the vector, and at an increased level, compared with the wild-type protein (Fig. 7). However, the subcellular localization of the mutated polypeptide exhibited no alteration, compared with the full-length protein, with the majority of the EXT1-DEL and EXT1-FL protein located in the cytoplasm of Cos-7 cells (Fig. 8). Notably, endogenous EXT1 is generally considered to be located in the Golgi apparatus of the cytoplasm (30).
Discussion
The present study reported a novel heterozygous splice mutation (c.1284+2del) in intron 4 of the EXT1 gene identified in a three-generation family with HME. Mutation Taster and PROVEAN predicted that c.1284+2del was a disease-causing mutation. The result of TA cloning and sequencing indicated that the mutation resulted in skipping of EXT1 exon 4 during mRNA splicing in the proband and his affected mother with no premature stop codon. RT-qPCR of the two patients revealed that expression levels of EXT1/EXT2 mRNA were reduced, compared with normal controls, and the levels of the abnormal EXT1 transcript (without exon 4) were increased in the proband, compared with his mother and the normal control. Furthermore, the truncated peptide produced from the abnormally spliced transcript is potentially translated and expressed in cells without degradation via NMD. Additionally, the subcellular localization of the truncated peptide may be the same as that of the protein produced from the wild-type EXT1 gene, and both proteins were observed to be localized to the cytoplasm in vitro.

(Table I. TA clone and sequencing results of the proband, the affected mother and the normal control.)
Although the molecular mechanisms associated with HME are not fully understood, it is clear that HME is predominantly provoked by mutations in either or both of the EXT1 and EXT2 tumor suppressor genes. EXT1 accounts for 56-78%, and EXT2 for 21-44%, of HME-causing mutations (11). Splicing mutations have been investigated to determine the molecular mechanism of HME (22,23). Alternative splicing is one of the important mechanisms regulating gene expression and protein diversity. The 5' and 3' splice sites, the branch site and the polypyrimidine tract are the key splicing signals that have major roles in the splicing of pre-mRNA. If mutations occur in these sequences, the effective splicing of exons may be affected, which may interfere with subsequent transcription and translation (20,21). The mutation identified in the present study was located at the 5' splice site of EXT1 exon 4, and was predicted and verified to cause exon 4 skipping. As the sequence length of exon 4 is 120 bp, a multiple of 3, and the coding region upstream of exon 4 is also a multiple of 3 (1,164 bp), the skipping of exon 4 does not disturb the reading frame of the downstream amino acids in EXT1 and no premature stop codon is created, which is different from other mutations reported in previous studies (22,23).
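The reading-frame argument can be verified with simple arithmetic; a minimal sketch using the exon lengths quoted in the text:

```python
exon4_len = 120          # bp, length of EXT1 exon 4 (from the text)
upstream_coding = 1164   # bp of coding sequence upstream of exon 4 (from the text)

# The exon starts at a codon boundary iff the upstream coding length is a
# multiple of 3, and skipping it preserves the frame iff its own length is too.
print(upstream_coding % 3 == 0)  # True: exon 4 begins at codon 389
print(exon4_len % 3 == 0)        # True: skipping removes whole codons only
print(exon4_len // 3)            # 40 amino acids lost (residues 389-428)
```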
TA cloning and sequencing results identified abnormal transcripts with skipping of exon 4 in the proband and his affected mother (66.7 and 58.5%, respectively) at significantly increased levels, compared with the normal control participants (P<0.05); however, a few transcripts with exon 4 skipping were identified in the normal control (5%; Table I), which may be explained by the phenomenon that alternative splicing frequently occurs in human genes with multiple exons (31). Furthermore, the number of abnormal transcripts in the proband was increased, compared with his mother, with no statistical significance between the two patients, potentially due to an insufficient number of clones detected. However, the results from RT-qPCR of EXT1/EXT2 mRNA and the abnormally spliced transcript of EXT1, which lacks exon 4 due to the splice mutation, in the two patients with HME were different from those of TA cloning and sequencing. The expression levels of wild-type EXT1/EXT2 mRNA in both patients were reduced, compared with the normal control (EXT1: proband, 0.02890; mother, 0.00654; and normal, 1.0; and EXT2: proband, 0.23216; mother, 0.08038; and normal, 1.0), particularly for EXT1 mRNA. By contrast, the level of the abnormally spliced EXT1 transcript with exon 4 skipping was significantly increased in the proband, compared with his mother and the normal control (proband, 2.58735; mother, 0.55260; and normal, 1.0; P<0.05). The difference between the TA cloning data and RT-qPCR analysis may be due to error in the TA cloning, or the numbers of positive colonies analyzed may have been less than estimated. Additionally, although the levels of wild-type EXT1/EXT2 mRNA were also decreased in the mother, the level of the abnormally spliced EXT1 transcript with exon 4 skipping was significantly reduced, compared with the proband (Fig. 5; P<0.05). This may be associated with the evidence that the clinical symptoms of male patients are prone to be more severe, compared with female patients, and it may also indicate that the severity increases with successive generations (32,33).
Previous studies indicated that mutated EXT1 and EXT2 were localized to the Golgi apparatus in vitro, similar to the wild-type genes (30,34); however, the mutations in those studies were a truncating mutation (EXT2-Y419X) and missense mutations (EXT1-R340C and EXT2-D227N), and the produced protein was either a truncated peptide that lacked the entire Glyco_transf_64 domain (480-725 aa) (EXT2-Y419X) or a protein with a single changed amino acid (EXT1-R340C and EXT2-D227N), which is notably different from the splice mutation identified in the present study. As the splice mutation (c.1284+2del) in the present study results only in the deletion of part of the amino acid sequence (389-428 aa), located at the tail of the exostosin domain (110-396 aa) and the junction of the two domains of EXT1, whether this particular peptide decays through NMD deserves intensive investigation. Additionally, if it is not decayed, where it anchors, and whether this differs from previous reports, is also worth investigating.
Lentiviruses are effective and frequently-used tools that allow exogenous genes or exogenous short hairpin RNAs to be integrated into the host genome to achieve stable expression of the target sequence. 293-T cells and Cos-7 cells are common tool cells for efficiently expressing exogenous genes (35,36). In the present study, a lentivirus was used to express the mutant EXT1 transcript lacking exon 4 in vitro.
Overexpression of the abnormal transcript was confirmed by RT-qPCR in 293-T cells infected with the EXT1-DEL lentivirus; however, small amounts of the abnormal transcript were also detected in cells infected with the NC (empty vector) lentivirus (Fig. 6), indicating there may be some endogenous expression of this transcript in 293-T cells. Western blot analysis confirmed the expression of the truncated peptide. Other bands that were distinctly visible in the empty vector and EXT1-FL lanes were possibly caused by the relatively low specificity of the polyclonal EXT1 antibody used (Fig. 7B), while there was a single band in each lane incubated with the monoclonal anti-Flag antibody (Fig. 7A). In order to observe the subcellular localization of the abnormal peptide in the same host cells as a previous study (30), Cos-7 cells were also used as host cells and transduced with lentiviruses. Immunofluorescence demonstrated that the abnormal peptide was localized in the cytoplasm of Cos-7 cells, which was similar to the localization of EXT1-FL. Notably, the levels of the abnormal peptide were increased, compared with EXT1-FL (Fig. 8), which was consistent with the results of RT-qPCR and western blot analysis.
In conclusion, a novel splice mutation was identified in two patients from a family with HME in the present study. However, additional family members did not enroll for co-segregation analysis of the mutation with disease status.
Expression levels of wild-type EXT1/EXT2 mRNA, which encode proteins possessing glycosyltransferase activity, were notably reduced due to the mutation in both patients, yet the level of an abnormally spliced transcript without the full functional domain was increased in the proband, compared with the normal control. The decrease in the copy numbers of the wild-type transcripts and the increase in the abnormal transcript may be the pathogenic mechanism of HME in the family that participated in the present study. The abnormal transcript was detected in the patients with HME, and the expression and localization of the protein product were assessed in vitro, revealing that the abnormal peptide was expressed and located in the cytoplasm of Cos-7 cells, which is in accordance with previous studies (30,34). However, no homologous structure is available for EXT1 apart from the C-terminus of exostosin-1 (the Glyco_transf_64 domain), so a computational biological analysis of structure and function that could distinguish exostosin-1 from the mutant protein was lacking in the present study; furthermore, although the two proteins were detected in the cytoplasm of Cos-7 cells, this was not confirmed by in vivo experiments, and the observations in the present study were derived from only one cell line (Cos-7 cells). In conclusion, the biological function of wild-type EXT1/EXT2 proteins may not be affected by the emergence of increased levels of mutant EXT1/EXT2, but rather by the decrease in the levels of wild-type EXT1/EXT2 proteins, which will disrupt HS polymerization and chain elongation.
Patient consent for publication
The patients who participated in the study agreed to the publication of the paper regarding their family research.
SARS-CoV-2 Infection in Winter 2021/2022: The Association of Varying Clinical Manifestations With and Without Prior Vaccination
Importance SARS-CoV-2 is a rapidly evolving virus with many strains. Although vaccines have proven to be effective against earlier strains of the virus, the efficacy of vaccination status against later strains is still an area of active research. Objective To determine if vaccination status was associated with symptomatology due to infection by later strains of SARS-CoV-2. Design This cross-sectional survey was sent to an adult Jewish population from December 2021 to March 2022. Setting This is a population-based study of Jewish communities throughout the tristate area. The subjects were recruited by local Jewish not-for-profit and social service organizations. Participants Surveys were sent to 14,714 adults who were recruited by local Jewish not-for-profit and social service organizations; 966 respondents completed the survey (6.57%). Only participants who reported a positive COVID-19 nasal swab within the 10 weeks following December 1, 2021, were included in the main outcome. Exposure Participants were grouped by vaccine type (i.e., Johnson & Johnson {J&J}, Moderna, or Pfizer) and vaccination status (i.e., unvaccinated, single, full, or booster). Main outcomes and measures The primary study outcome was an association between immunization status and symptomatological presentation. Symptom severity classes were built using latent class analysis (LCA). Results Out of 14,714 recipients, 966 completed the survey (6.57%). The participants were mainly self-described Ashkenazi Jewish (97%) with a median age of 41. The LCA resulted in four classes: highly symptomatic (HS), less symptomatic (LS), anosmia, and asymptomatic (AS). Vaccinated participants were less likely to be in symptomatic groups than the unvaccinated participants (odds ratio {OR}: 0.326; 95% confidence interval {CI}: 0.157-0.679; p=0.002). Boosted participants were less likely to be in symptomatic groups than fully vaccinated participants (OR: 0.267; 95% CI: 0.122-0.626; p=0.002). Additionally, there was no association between symptomatology and vaccination type (p=0.353). Conclusions and relevance Participants who received COVID-19 vaccinations or booster shots were less likely to be symptomatic after Omicron infection compared to unvaccinated participants and vaccinated participants without boosters, respectively. There was no association between vaccination type and symptomatology. These results enhance our understanding that COVID-19 vaccinations improve clinical symptomatology, even in an unforeseen COVID-19 strain.
Introduction
By the end of 2021 and the beginning of 2022, several new Omicron strains of the SARS-CoV-2 virus rose to prominence and quickly replaced the prior predominant Delta variant as the major strain in the United States. SARS-CoV-2 Omicron variant, subsequently referred to as BA.1, was one such strain and was first identified in South Africa on November 24, 2021. This variant has proven to be more contagious than previous strains, with infection reported in all six World Health Organization regions and 149 countries within a few weeks [1,2].
Omicron spread rapidly in the United States, with the first cases reported on December 1, 2021 [2,3]. By January 1, 2022, 95% of SARS-CoV-2 cases were attributed to the Omicron BA.1 variant. Since then, newer Omicron strains have replaced the BA.1 variant, with the BA.5 variant comprising over 88.8% of the US strains as of August 12, 2022 (CDC data accessed on August 12, 2022) [4].
Much of the transmissibility of Omicron has been attributed to differences in the genetic code found in the earliest days of Omicron sequencing, which discovered >30 mutations in the spike protein and receptor domains [5,6]. This led to concern about whether these mutations would alter the virus's transmissibility or lead to immune escape. Immune escape describes the virus's ability to evolve and infect previously immune individuals. It was seen even in areas that were previously shielded due to higher vaccination rates or previous infection with the SARS-CoV-2 virus [7][8][9][10][11].
Vaccines for SARS-CoV-2 have proven to be highly effective at preventing severe disease and fatalities in the original and Alpha strains of COVID-19 [12][13][14][15][16][17]. However, vaccine efficacy has been more modest in preventing symptomatic disease with the subsequent Beta and Delta variants, even among those who received the recommended two-dose vaccine regimen [18][19][20][21]. The SARS-CoV-2 B.1.1.529 Omicron variant showed the steepest reduction in vaccine efficacy: the initial 65% immunity acquired 2-4 weeks after two doses of the BNT162b2 Pfizer vaccine was reduced to a mere 8.8% at 25 weeks and onward, according to various findings [22].
In response to the waning protection granted by vaccination [23], the CDC recommended that previously doubly vaccinated patients receive an additional dose of an mRNA vaccine, referred to in the vernacular as "boosting" [24], to enhance the levels of protection against breakthrough infection. The administration of boosters resulted in renewed protection against mild infection, but as with the original vaccinations, the protective effects diminished over time [25].
While the rates of COVID-19 breakthrough in previously vaccinated and boosted individuals have been studied [26], data describing the utility of vaccination to mitigate severe COVID-19 infection and symptomatology are still sparse, and how the severity of symptoms relates to differing vaccination status is still an important area of research.
Survey data were analyzed to determine the relationship between clinical and symptomatic manifestations of late 2021 or early 2022 COVID-19 variants and prior vaccination history and/or previous COVID-19 infection. The phenotypes of symptomatology and their relationship with previous COVID-19 diagnoses and varying vaccination status are described.
Given the data regarding vaccination- and booster-induced immunity, this study aimed to determine if vaccination and booster use correlated with reduced symptoms within the cohort, as measured by membership in the groups of a latent class analysis (LCA). The LCA is a useful machine learning modality with regard to COVID-19 and can be a helpful tool for parsing symptomatology in data gathered from self-reporting.
Study design
The subjects were recruited by local not-for-profit and social service organizations within Orthodox Jewish communities throughout the tristate area. A cross-sectional survey invitation was sent to 14,714 adults, of whom 1,020 individuals began the survey process (6.93% response rate); 966 of the 1,020 (94.7%) completed the survey. This was the third survey sent to this cohort over the pandemic, which possibly explains the low response rate. Electronic informed consent was obtained, and the study's purpose was disclosed before beginning the survey. The study was open to all participants and did not require participants to have SARS-CoV-2 symptoms or exposures to participate.
The survey was developed to determine the most common symptoms associated with infections later in the pandemic and to examine the relationship between clinical outcomes in a community-based cohort and previous vaccination and infection between December 1, 2021, and March 1, 2022. The survey included 25 data points, including questions about patient demographics, symptoms of infection, and whether they tested positive for SARS-CoV-2 by nasal swab. The survey was administered via the Health Insurance Portability and Accountability Act-compliant and secure Research Data Capture (REDCap) (Vanderbilt University, Nashville, TN) software. The Advarra Institutional Review Board approved the study (approval number: MOD01212191).
Data analysis
Baseline characteristics were determined, and summary statistics were estimated. We charted the symptoms of those respondents who tested positive and grouped them into four classes using latent class analysis (LCA) based on common clinical presentations. Twenty-four distinct symptoms were assessed, and 20 were ultimately included based on the criteria outlined by Miaskowski et al., which require a minimum of five participants (2%) to exhibit a symptom for it to be included in the analysis [27]. A latent class analysis (LCA) was then used to examine the phenotypic patterns of COVID-19 symptoms.
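A sketch of the symptom-inclusion rule described above follows; the per-symptom counts are invented, apart from being consistent with the prevalences reported in the Results, and the labels are hypothetical.

```python
import pandas as pd

# Hypothetical symptom counts among the 229 swab-positive respondents.
counts = pd.Series({"fatigue": 148, "cough": 122, "sore_throat": 107, "rare_symptom": 3})
n_positive = 229

min_count = max(5, round(0.02 * n_positive))  # threshold: at least 5 respondents (~2%)
kept = counts[counts >= min_count].index.tolist()
print(kept)  # symptoms retained for the latent class analysis
```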
LCA uses observed categorical or binary data to identify patterns known as latent classes. We used conditional probabilities to estimate the likelihood that a member of the survey cohort belonged to a group based on their response to specific symptoms of COVID-19, thereby allowing the characterization of latent classes. Pearson's chi-square analysis was used to demonstrate frequency differences regarding prior infection status and the type of vaccine received. Subsequently, odds ratios (ORs) were calculated to compare each class individually.
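The odds ratios and 95% confidence intervals quoted in the Results can be obtained from a 2x2 table in the standard way (the log-based Woolf method); the sketch below uses invented counts chosen only to land near the reported OR of 0.326.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for the 2x2 table [[a, b], [c, d]] (log method)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: rows = vaccinated/unvaccinated, columns = symptomatic/asymptomatic.
print(tuple(round(x, 3) for x in odds_ratio_ci(40, 60, 30, 15)))  # ~ (0.333, 0.159, 0.697)
```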
All data processing and statistical analyses were performed in Statistical Analysis System (SAS) (SAS Institute Inc., Cary, NC) version 9.4.3 and Statistical Package for Social Sciences (SPSS) (IBM SPSS Statistics, Armonk, NY) version 28. Complete data analysis was performed; the subjects with missing data were excluded. A two-sided p of less than 0.05 was considered statistically significant.
Population characteristics
The survey cohort had a median age of 41 years and was composed of 54% males and 46% females. The patient population was overwhelmingly self-described Ashkenazi Jewish (97%). Among the 966 respondents who sufficiently completed the survey, 217 reported SARS-CoV-2 symptoms, and 229 (24%) reported a positive nasal swab test within the past 10 weeks (since December 1, 2021). The most commonly reported symptoms were fatigue (64.7%), followed by cough (53.1%), sore throat (46.9%), and aches (45.1%). The symptoms persisted for an average of 5.29 days (SD: 3.41).
We charted the symptoms of those respondents who tested positive and grouped them into four classes using latent class analysis (LCA) based on the typology of symptoms (Table 1). Twenty-five distinct symptoms were assessed, of which 19 were ultimately deemed to be significant. Four distinct classes of symptomatology were identified. Based on the symptomatic presentation, these classes were labeled as class 2, highly symptomatic (HS); class 3, less symptomatic (LS); class 1, anosmia; and class 4, asymptomatic (AS).
Vaccination status and severity of symptoms
Overall, the four classes were associated with different frequencies of vaccination status (p=0.003). The vaccinated participants were less likely to be symptomatic than the unvaccinated participants (OR: 0.326; 95% CI: 0.157-0.679; p=0.002). They were also much less likely to belong to the anosmia class than the asymptomatic class (OR: 6.682; 95% CI: 1.65-26.99; p=0.008), and less likely to belong to class 3 (LS) than the asymptomatic class (OR: 3.257; 95% CI: 1.537-6.899; p=0.002).
Boosted versus unvaccinated
Furthermore, we conducted an LCA analysis comparing boosted respondents with non-vaccinated respondents ( Table 5). The lesser symptomatic and asymptomatic classes had a higher proportion of boosted participants as compared to the unvaccinated participants.
Discussion
By the winter of 2021, breakthrough infections were confirmed in previously vaccinated individuals [23]. While the rates of COVID-19 breakthrough in previously vaccinated and boosted individuals have been studied [25], the relationship between the symptomatology of COVID-19 breakthrough infection with prior vaccination and boosting status is less understood.
We administered a survey asking patients to report their vaccination history, incidence of breakthrough infection, and resulting symptomatology from December 2021 to March 2022. We aimed to discern differences in the symptomatology of confirmed COVID-19 cases among patients with single, full, or booster vaccination status. We conducted an LCA, grouping individuals into four discrete categories based on their reported symptoms. We found a protective effect conferred by vaccination and booster vaccination, demonstrated by a reduction in COVID-19 symptomatology as reported in the survey. We found that vaccination (OR: 0.326; 95% CI: 0.157-0.679; p=0.002) and especially boosting in previously vaccinated individuals (OR: 0.267; 95% CI: 0.122-0.626; p=0.002) were associated with clinically less severe COVID-19 symptomatology groupings based on self-reported COVID-19 symptoms.
Our survey cohort had a mix of boosted vaccinated (42.1%) and non-boosted but vaccinated participants (57.9%). The completion of the primary two-dose mRNA vaccination series, and the provision of a third "boosting" dose, decreased the likelihood of contracting a highly symptomatic reinfection of SARS-CoV-2 in our cohort, as defined by reduced membership in the previously described highly symptomatic group relative to the asymptomatic group.
The use of LCA in this context is an interesting machine learning modality with regard to COVID-19 and can be a useful tool in parsing symptomatology in data garnered from self-reporting [26].
Our study has limitations. Our population was limited in scope, with the vast majority of respondents sharing socio-economic and cultural similarities and coming from an ethnically homogenous Ashkenazi Jewish community. Additionally, age was not well distributed in our cohort, with 344 (36%) of our participants between 40 and 60 years of age.
Conclusions
There has been much doubt among the general population regarding the efficacy of COVID-19 vaccination and booster vaccines. The data presented here highlight the protective effects conferred by vaccinating and receiving a booster vaccine on the overall symptoms of individuals infected by COVID-19. This may then be used to provide a more robust understanding of the benefits of receiving vaccinations and boosters in the general public.
Medicare managed care: numbers and trends.
This article captures some key trends in Medicare managed care. The figures which accompany this article explore, among other issues: enrollment; numbers of participating plans; demographic characteristics such as geographic location, age, and income; and premium and benefit comparisons.
INTRODUCTION
Managed care options have been incorporated as a feature of Medicare since the inception of the program in 1965. The Tax Equity and Fiscal Responsibility Act of 1982 introduced a full-risk health maintenance organization (HMO) option available to Medicare beneficiaries beginning in 1985. Since 1985, there has been a steady increase in enrollment in full-risk HMOs and similar competitive medical plans (CMPs), with rates of enrollment accelerating in the last few years. In 1996, nearly 1 in 10 Medicare beneficiaries was enrolled in a risk HMO or CMP. An additional 2 percent of Medicare beneficiaries were enrolled in cost-reimbursed HMOs or other cost-reimbursed prepaid plans.
Medicare beneficiaries enrolled in most risk HMOs have reduced out-of-pocket expenses for covered services and receive benefits not otherwise covered by Medicare, including, in many cases, prescription drug coverage. Medicare beneficiaries are "locked in" to risk HMOs, but as of 1996 some Medicare HMOs are beginning to offer "point-of-service" options permitting the use of non-network providers.

Carlos Zarabozo is with the Special Analysis Staff, Office of the Associate Administrator for Policy, Health Care Financing Administration (HCFA). Charles Taylor is a Commander in the U.S. Naval Nurse Corps. He was serving as a U.S. Army Baylor University Administrative Resident at HCFA when this research was performed. Jarret Hicks is with the Office of Managed Care, HCFA. The opinions expressed are those of the authors and do not necessarily reflect those of the U.S. Navy or HCFA.
Interest among HMOs in Medicare risk contracts has increased significantly in the last several years. As of April 1996, there were 202 risk contractors and 52 pending applications for risk contracts, representing about 45 percent of the Nation's HMOs. According to the American Association of Health Plans, 70 percent of its members have or expect to have Medicare risk contracts by 1997.
Medicare HMO contracting has become a significant market segment for many HMOs. In 1986, none of the 5 largest Medicare risk plans was among the 5 largest HMOs in the Nation, and only 1 Medicare risk HMO was among the largest 15 HMOs in the country. As of 1994, three of the five largest HMOs in the Nation were also among the five largest Medicare risk contractors.
Enrollment in Medicare risk HMOs continues to be concentrated in certain regions of the country. As of December 1995, 5 counties had 25 percent of total risk enrollment: Los Angeles, Orange, and San Diego Counties in California; Maricopa County, Arizona (Phoenix); and Dade County, Florida (Miami). The Los Angeles area, including the preceding California counties and the counties of San Bernardino, Riverside, Ventura, and Kern, had over 25 percent of total risk enrollment in 1995. Although the highest numbers of enrollees are in areas with relatively high Medicare HMO payment rates (which are based on historical fee-for-service rates in each county), there are a number of relatively low payment areas where substantial percentages of Medicare beneficiaries are enrolled in risk HMOs.
Medicare risk HMO enrollees are less likely to be 85 years of age or over, institutionalized, on Medicaid, or entitled to Medicare on the basis of disability (i.e., under 65 years of age). Results of the 1993 Medicare Current Beneficiary Survey (MCBS) indicate that HMO enrollees tend to be healthier than non-enrollees. Enrollees are particularly satisfied with their costs of health care in an HMO.

[Figure: Medicare HMO enrollees (in thousands) by year, 1985-1996. NOTES: HMO is health maintenance organization. Cost HMO enrollment numbers include cost HMOs and health care prepayment plans. All data are for December of the given year, except for 1996, which are as of April. SOURCE: Data from the Health Care Financing Administration, Office of Managed Care.]
• Enrollment has increased every year since the beginning of the program.
• Medicare risk enrollment in recent years has increased at an accelerated rate. Although Medicare HMO enrollment is less than one-half the level of the non-Medicare sector, where over one-third of the non-Medicare insured population is enrolled in HMOs, 1 the rate of growth in Medicare HMO enrollment has far exceeded non-Medicare growth rates over the past several years.
• The level of concentration in larger plans is declining, though Medicare risk HMO enrollment continues to be concentrated in the largest plans. Seven of the 15 largest plans were in California, and all but 1 of the 5 largest plans are in California. In 1986, 33 percent of Medicare risk HMO enrollment was in the largest 5 contractors, all but 1 of which had participated in a Medicare HMO demonstration project in the early 1980s.
[Figure: number of plans]
• As of February 1996, 63 percent of risk plans offered a basic package with no member premium in at least part of their service area. The basic premium includes both the amounts risk HMOs are permitted to charge Medicare beneficiaries for cost-sharing for Medicare-covered services, representing Medicare's coinsurance and deductible amounts not included in adjusted average per capita cost (AAPCC) payments to HMOs, and the cost of any non-covered benefits that beneficiaries are required to purchase as part of the basic package offered by the HMO (such as preventive care that Medicare does not cover and which HMOs traditionally cover).
• In 1987, zero-premium plans were available in only four metropolitan areas. In 1991, in addition to these areas, six more areas had zero-premium plans. In 1995, 38 metropolitan areas had zero-premium HMOs. In 1995, a zero-premium plan was unavailable (in at least part of the State) in only six of the States that had risk contractors; each of those six States had, at most, two risk HMOs.
• The majority of Medicare risk HMOs use savings to finance reduced premiums and/or additional benefits for their members. For 1996, 20 percent of projected Medicare payments will be returned to beneficiaries in the form of reduced premiums and/or additional benefits. 2 Expressed in dollars, $4 billion of $20 billion in projected annual Medicare payments to risk HMOs will be used for enhanced benefits.
• In some States, all risk HMOs include drug coverage. Note, however, that for Maryland, Nevada, New Mexico, Pennsylvania, Minnesota, Massachusetts, Illinois, and Oregon, not all beneficiaries in each plan have drug coverage. Residents of some counties do not have drugs included in basic coverage, even though the HMO in which they are enrolled includes drugs in some of the counties included in the plan's service area.
• Except for those under 65 years of age (the disabled) and the oldest old (85 years of age or over), HMOs have a fairly representative age distribution.
[Figure: percent of population]
• The institutionalized and beneficiaries entitled to Medicare on the basis of disability are less likely to be risk HMO enrollees. In December 1995, 1.06 percent 3 of risk HMO enrollees were institutionalized (residing in a nursing home or similar institutional arrangement), while MCBS data indicate that 5-6 percent of the general Medicare population is institutionalized.
• According to a recent study, "Beneficiaries who are dually eligible [have both Medicare and Medicaid coverage] are two-thirds less likely to enroll in HMOs than are other Medicare beneficiaries as a group-less likely to enroll even than the under-65 group" (Welch, 1996).
• A number of barriers exist that prevent greater enrollment of dual eligibles in Medicare risk HMOs (Saucier, 1995).
• In terms of income, the MCBS shows a mixed distribution of HMO enrollment in relation to income. The very poor are less likely to be enrolled in HMOs (reflecting their status as dual eligibles), and the very wealthy are also less likely to be HMO enrollees.
• Survey results for a sample of risk HMO enrollees from MCBS data for September 1993 indicate that HMO enrollees enjoy better health than non-HMO enrollees. This may reflect a variety of factors, including the types of Medicare beneficiaries who are likely to enroll in HMOs, and improved access to care among HMO enrollees.
Figure 15. Beneficiary Attitudes Towards HMOs and Fee-for-Service: 1993. [Bar chart comparing satisfied and unsatisfied fee-for-service beneficiaries and risk HMO enrollees across measures of satisfaction.] NOTES: HMO is health maintenance organization. "Satisfied" includes very satisfied and satisfied. "Unsatisfied" includes both unsatisfied and very unsatisfied. SOURCE: Health Care Financing Administration, Office of the Actuary: Medicare Current Beneficiary Survey, 1993.
• In their attitudes towards their health plans, MCBS data indicate that risk HMO enrollees are most satisfied with HMO costs. The September 1993 MCBS survey of a sample of Medicare beneficiaries found that HMO enrollees were significantly more satisfied with out-of-pocket costs for medical care, compared with beneficiaries in fee-for-service Medicare. • Each category of beneficiaries (HMO enrollees and non-enrollees) had similar attitudes in terms of availability of care and the ease of getting care. Impressions of the quality of care varied from one group to the other: 37 percent of risk HMO enrollees were very satisfied with their care, while only 30 percent of fee-for-service beneficiaries said they were very satisfied. Those "satisfied" with their care included 55 percent in fee-for-service and 52 percent in HMOs. Fewer than 1 percent of beneficiaries in either category were very unsatisfied with their care, but 5.5 percent of HMO enrollees were unsatisfied with their care versus 2.9 percent of fee-for-service beneficiaries.
Necrotizing Fasciitis of the Extremities
Necrotizing fasciitis (NF) describes a life-threatening soft tissue infection characterized by a rapidly spreading infection of the subcutaneous tissue and, in particular, the fascia. Various synonyms for this type of infection are in use, often a reflection of the difficult diagnosis. Necrotizing fasciitis of the extremities occurs after simple skin lacerations, often in rural, farming or gardening environments. Many of the infections occur in immunologically healthy people, but persons with compromised wound healing, e.g., due to diabetes, are at additional risk. In the majority of microbiological analyses, streptococci alone or a mixture with mainly anaerobic bacteria may be detected. The management of infected extremities requires a rapid diagnosis, dedicated aggressive surgical management as soon as possible, and a wide debridement extending beyond the border of the infected fascia. Timely surgical revisions within the first day or days, together with antibiotic treatment, are the only measures that can stop the infection. Depending on the status of the patient, hyperbaric oxygenation treatment seems useful in order to limit the infection. In fulminant cases, early amputations and maximal intensive care treatment of the septic patient are required, where all means are warranted to save the patient's life. As a consequence, early clinical diagnosis with thorough surgical debridement of the infected, liquefied necrotic fascia as well as correct antibiotic treatment are needed. Secondary plastic reconstruction of the soft tissue defects will generally be required.
Introduction
Necrotizing fasciitis (NF) is an infection accompanied by spreading crepitating edema and blister formation. It was first described by Hippocrates in the fifth century as erysipelas [1]. In the eighteenth century a detailed description of NF was provided by British naval physicians, and in the early nineteenth century it was named gangrenous ulcer, putrid ulcer, malignant ulcer, phagedenic ulcer, phagedena, phagedena gangrenosa and hospital gangrene [2]. In 1871, the American surgeon J. Jones reported more than 2,000 cases from the Civil War that were very possibly NF. Afterwards, Pfanner described in 1918 in the German medical literature the clinical picture of necrotizing erysipelas and found streptococci related to the disease [3,4]. In 1924, Meleney used the name hemolytic streptococcal gangrene [5]. Since then, various names have been given to NF (Table 1). Finally, in 1952, Wilson coined the term necrotizing fasciitis (NF). The anogenital manifestation of NF was first described by Fournier in 1883 and has since been called Fournier's gangrene [6,7].
An early feature of this type of soft tissue infection is the discrepancy between pronounced pain and the modest clinical appearance. The aggressive progression finally induces skin and fascia necrosis, coagulopathy and cellulitis. Five diagnostic criteria can be defined (Table 2, [8]). The missing isolation of Clostridium perfringens does not necessarily exclude gas gangrene, nor does it prove NF. In patients suffering from NF, diabetes, hypertension, arterial occlusive disease and obesity are found more often, as well as a higher percentage of alcohol and drug abuse, immunosuppressant therapy and HIV infections. Even a varicella zoster virus infection in combination with the use of NSAIDs is discussed as a risk factor [9]. NF is more likely to appear in elderly people, but children can also be affected. In the literature, various causes have been related to the primary focus, including minor skin lesions, insect bites and wounds after surgical procedures [10]. Initial microbiological tests discover streptococci in a high proportion of cases (approx. 30%), a mixture of different bacteria, and, less commonly, Pseudomonas aeruginosa (approx. 5%, Table 3).
Therapeutic Strategy
Diagnostics
Due to an increased number of invasive streptococcal infections in the US during the 1980s, a "streptococcal toxic shock syndrome (STSS)" was defined. This is characterized by systemic sepsis with multiple organ failure, especially of the kidney. Today, NF caused by group A streptococci is classified as a subgroup of this infection. Myositis caused by streptococci, mostly due to direct inoculation/contamination, should also be differentiated. In such cases rapid lysis of the involved muscle is observed, with edema formation and focal development of a compartment syndrome with consecutive necrosis [11][12][13][14][15][16]. Necrotizing soft tissue infections (NSTIs) therefore comprise various diseases. Clinically they can be separated into superficial infections involving cutis/subcutis and deep infections affecting fascia and muscle (NF, myositis). These deep infections are further classified as type I (polymicrobial) and type II (monomicrobial). Bacterial factors play an important role in NSTI. In the case of invasive streptococcal infections, surface proteins (M1, M3) increase adhesion and prevent phagocytosis, and exotoxins (A, B, C, streptococcal superantigen) induce the release of cytokines and can bind to T-cell receptors, causing further release of TNF-α, IL-1 and IL-6 and ending in STSS [17,18]. Although M-types 1 and 3 are common, other types have been isolated in invasive infections; however, a stable genetic change was observed in M-type 1 group A streptococci in the 1980s, resulting in the ability to produce nicotinamide adenine dinucleotide glycohydrolase (NADase), which might be one factor in severe invasive infections [19]. The rapid tissue destruction is a result of toxin-induced vascular occlusion. As the infection progresses, more toxins are produced and more tissue is destroyed. This microvascular occlusion contributes to shock and organ dysfunction.
In the case of NF, early diagnosis is critical with respect to the survival of the patient. This is primarily a clinical diagnosis with no typical changes in laboratory diagnostics. Serum creatinine phosphokinase (CPK) might be useful in detecting deeper soft-tissue infections. Even a mild leukocytosis can be combined with an increasing percentage of immature neutrophils and should raise suspicion. Renal impairment precedes hypotension, and hypoalbuminemia and hypocalcemia are early signs. Especially in the extremities, the bacteria are mostly inoculated through a minor skin lesion. In the early stage the infection is characterized by disproportionate local pain due to fascia necrosis. After this, skin changes become visible with edema and erythema (Figure 1). The typical pattern of skin necrosis, with or without blisters, is found later; however, the necrosis generally spreads rapidly in the proximal direction. The necrosis of the fascia has already spread much further than the changes of the skin (Figure 2). Crepitating skin can be recognized in about 50% of the patients and is suspicious for a polybacterial infection. The typical clinical signs are listed in Table 4. Since the time to surgery needs to be minimized, no time-consuming diagnostics such as histology or bacterial isolation are possible.
To support the diagnosis of NF, ultrasound can be used to demonstrate fluid between the muscle and subcutaneous tissue resulting from fascia necrosis. Sometimes X-rays can reveal gas formation, which may also be easily palpated. CT or MRI can be of some use, but with little impact on the final decision to proceed to operative treatment. MRI has some value due to its soft tissue contrast and multiplanar imaging and might be helpful in case the source of infection lies deep inside the body. During surgery an excision biopsy can be done, but the typical discoloration of the fascia (yellow to green) and the possibility to dissect the fascia manually (like chewing gum) will confirm the diagnosis. In contrast to myositis caused by streptococci, the muscles appear normal, not necrotic with a brown-to-grey, loam-like discoloration.
STSS is more often found in association with pharyngitis or small lesions of the skin or mucosa (scratches, insect bites). This syndrome appears in normally healthy people of all ages; in children it is more often seen following chickenpox infection [16]. Isolation of group A streptococci is typical, also in normally sterile body compartments, as are, of course, the signs of systemic shock. However, 50% of the STSS cases are accompanied by NF. The typical serious soft tissue infections are listed in Table 5.
Surgical Approach
The relevance of surgical management was demonstrated by Kaiser and Cerra: a reduced surgical treatment was followed by a significant increase in the death rate [20]. There is no place for incision and drainage or limited evacuation of the abscess. The only surgical option is the radical surgical excision of the infected subcutaneous tissue, in particular including the grey, pale fascia. In these cases, the fascia is grey, has lost all its strength and can more or less be peeled off. It is essential to resect the infected fascia and to debride into healthy tissue. Furthermore, it is essential to follow up with surgical re-interventions after a short time, even again on the same day in fulminant cases or at least on the subsequent day. In general this regime is followed for a few days until the spreading in the proximal direction has been stopped. An amputation in the extremities is not the primary treatment, but in cases where the whole tissue is necrotic and most muscles are involved, this might be the only option to stop further spreading and systemic sepsis with multiple organ failure. These amputations have to be performed as open amputations, again requiring a second-look operation and secondary closure. After primary intensive care and control of the infection and sepsis (mostly after 1 week), reconstructive procedures are initiated, ranging from secondary wound closure and skin grafting to flap coverage, saving viable tissue and restoring function.
Histology
Histology of the excised tissue reveals infiltration of the fascia by polymorphonuclear cells with a perivascular focus. Sometimes bacteria are detectable. Later, a colliquative necrosis of the fascia is visible, involving subcutaneous tissue and skin. The tissues cannot be differentiated anymore and muscle tissue is involved as well.
Antibiotic Treatment
Since radical debridement is required, the resulting wound areas are extensive in most cases, and the increased fluid turnover alone justifies intensive care treatment. An adjusted antibiotic regime is mandatory. In undefined cases, Gram-negative, Gram-positive and anaerobic bacteria must be addressed. Mono-therapy options include imipenem-cilastatin, meropenem, ertapenem, piperacillin/tazobactam and tigecycline. A combination therapy adds vancomycin, linezolid or daptomycin to a carbapenem or β-lactam/β-lactamase inhibitor combination if methicillin-resistant staphylococci are possible. Another combination therapy includes penicillin, clindamycin and a fluoroquinolone or an aminoglycoside to cover Gram-negative bacteria. In the case of streptococcal infection, clindamycin should be included in the medication, since it has been shown to inhibit toxin production (M-protein and exotoxin) in severe cases [21]. Especially streptolysin O (SLO) induces changes in the leukocytes. It acts specifically on phenoloxidase, which is important in the mechanism of host defense and is much reduced in NF cases due to streptococcal infection. This significant immunosuppressive effect is accompanied by the fact that phenoloxidase catalyses the transformation of tyrosine to dihydroxyphenylalanine, which is necessary to produce catecholamines; this is one reason a patient with NF might need catecholamine substitution. Additionally, some immunotherapies (e.g., immunoglobulins) are also suggested by some authors [13]. The mechanism is believed to be related to the neutralization of superantigen activity and the reduction of TNF-α and IL-6.
Intensive Care Therapy
Various efforts have been made to categorize patients with respect to the risk of mortality. Negative parameters are age above 50 years, WBC > 40,000 cells/mm³, hematocrit > 50%, HR > 100/min, temperature < 37°C and creatinine > 15 mg/dl [22]. If the patient develops a septic shock or STSS, an acute respiratory distress syndrome (ARDS) is also very likely (approx. 50%) and mostly requires intubation and mechanical ventilation to achieve an adequate oxygen supply.
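Purely as a schematic illustration, the listed thresholds can be written as a simple checklist; this encoding is ours, not a validated clinical score, and the cut-off values are copied verbatim from the parameters cited to [22].

```python
# Flag the negative prognostic parameters from [22] for one patient record.
def negative_prognostic_flags(age, wbc_per_mm3, hematocrit_pct,
                              heart_rate, temperature_c, creatinine_mg_dl):
    criteria = {
        "age > 50 y":            age > 50,
        "WBC > 40,000/mm3":      wbc_per_mm3 > 40_000,
        "hematocrit > 50%":      hematocrit_pct > 50,
        "HR > 100/min":          heart_rate > 100,
        "temperature < 37 C":    temperature_c < 37,
        "creatinine > 15 mg/dl": creatinine_mg_dl > 15,
    }
    return [name for name, hit in criteria.items() if hit]

print(negative_prognostic_flags(62, 45_000, 48, 110, 36.5, 1.1))
```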
Every patient with signs of sepsis or an impaired immune response should receive intensive care treatment, since organ failure is very common in the time course of NF. The patients are at extremely high risk of developing systemic sepsis with a poor prognosis. Good oxygenation, cardiac output and control of homeostasis are the primary goals in treating systemic sepsis and septic shock according to the current guidelines [23]. These guidelines have to be incorporated stepwise into the treatment of sepsis due to necrotizing fasciitis. Additional treatment options to enhance systemic toxin and mediator reduction have been discussed, such as continuous hemofiltration [24]. Due to the variation and limited number of patients in single centers, this approach has only been applied in isolated cases [25].
Therapeutic Options
Besides the basic treatment, including intensive care medicine and surgical debridement, numerous adjuvant therapies have been recommended, both for the systemic management of these infections and as options for local wound treatment.
Systemic Adjuvant Therapy
Hyperbaric Oxygenation (HBO)
Necrotizing infections are considered one of the primary indications for HBO, along with decompression sickness, gas embolism, CO and smoke intoxication, anaerobic (clostridial) infections and radionecrosis [26]. HBO therapy is able to increase blood oxygen content by 25% and thereby tissue oxygenation tenfold [27]. Other effects related to this treatment are vasoconstriction, reduced leukocyte sequestration, lipid peroxidation, free radical scavenging and reduction of tissue edema, resulting in increased tissue perfusion/microcirculation [28][29][30]. Another important effect, thought to be helpful in treating NF by improving host defense, is the activation of leukocytes. Reparative processes might also be stimulated through fibroblast migration, proliferation and collagen synthesis [22,31,32]. These effects might be very helpful to support healing and granulation tissue formation in these mostly difficult wounds [33]. Various treatment regimes are recommended; the most intense is that in accordance with crush injury, with three treatments within the first 48 hours (2-2.5 ATA, 1-2 h O₂ breathing), followed by two treatments over the next 48 h and finally one treatment a day for 48 h. Since the use of HBO treatment involves high medical and technical expenditure, especially if the patient is critically ill due to sepsis and needs breathing support, a careful risk-benefit estimation is necessary; moreover, the literature is controversial about the effects on morbidity and mortality rates [12,34,35]. Nevertheless, this therapy needs to be considered, given the increasing network and therapeutic standards in HBO treatment. Another possibility to increase oxygen in the body is intravenous application (Regelsberger's intravenous oxygen therapy). However, its effect still needs to be proved for NF, also with respect to an increase of granulocytes [36]. Therefore the major focus in NF should remain on intensive care management and immediate, vital surgical therapy.
Topical Wound Treatment
Antiseptic Treatment
After primary surgical treatment, topical wound treatment is used in most cases. Various substances can be chosen according to the isolated bacteria. Mostly the following antiseptic substances are recommended: polyhexanide, povidone-iodine, silver sulfadiazine and mafenide acetate. Wine vinegar and citric acid have also been applied, especially to modify the wound environment and lower the pH in cases of Pseudomonas infections.
Vacuum Sealing
Vacuum sealing is a widely used approach to condition destroyed soft tissue in order to allow granulation and safe secondary reconstruction [37]. It is extremely useful in cleaned wounds of the extremities and the abdomen and reduces surgical interventions to intervals of 2 to 5 days. In acute stages of necrotizing fasciitis, however, vacuum sealing is not indicated during the early stages of purulent infection and infected tissue necrosis. Subsequently, when infected and destroyed soft tissues have been removed, vacuum sealing seems very useful for conditioning the defects, occasionally allowing limited closure using skin grafts; if not, a complex reconstruction is required [38].
Characteristic Clinical Case
The clinical cases with NF and a primary focus on the extremities are listed in Tables 6 and 7. Most of these patients had minor injuries with rapid spreading of the infection. One exemplary case is described to illustrate the significance of early operative treatment and excision of the devastated tissue. A 52-year-old gentleman was visiting a garden market. While handling various flowers, he experienced a minor scratch on the ulnar side of the middle phalanx of the left ring finger. This happened on a Thursday afternoon, and since there was no obvious skin lesion he did not care about it. During the next night and day he experienced increasing swelling and pain of the whole hand, not only the finger. On Saturday morning he consulted a local surgeon, who admitted him immediately to the hospital. The patient already had clear signs of local infection (Figure 3) and systemic sepsis with reduced blood pressure, tachycardia and fever up to 40°C. He was immediately brought to the operating room, where a primary excision of the possibly involved tissue was done, up to the elbow. However, the patient did not recover well afterwards in the intensive care unit. He was finally brought back to surgery on the same day, and a radical debridement was performed, including the amputation of the ring finger and excision of all obviously involved fascias (Figure 4). After this, four more wound care procedures were done in the operating theatre. Finally, after 6 days, the wounds could be closed with a pedicled groin flap and skin graft. Three more reconstructive procedures followed, including flap division and two further corrections with a final skin graft (Figures 5 and 6). During the treatment period in the intensive care unit, HBO therapy was also given daily for 5 days, starting on day 1.
Conclusion
Necrotizing fasciitis (NF) is a life-threatening soft tissue infection characterized by fulminantly spreading necrosis of the involved fascias. Since a wide variety of bacteria can be isolated, two different types of NF are differentiated: type 1, characterized by a polybacterial infection of aerobic and anaerobic bacteria, and type 2, with group A streptococci as the source of infection [39]. The type 2 infections are less common but are more often found when the extremities are involved. Every fascia in the body can be destroyed by this disease. Early diagnosis is critical to the survival of the patient and must rely on the clinical picture. When there is doubt, there should be no delay in performing surgery with radical debridement. The suggested treatment strategy with adequate early surgical and intensive care medicine could help reduce the lethality rate of up to 70%, as stated in some publications, to less than 10% [40][41][42]. Infections of the extremities are less often lethal, whereas an intra-abdominal occurrence mostly leads to the death of the patient. Higher age, diabetes, arterial occlusive disease, immunosuppression and the onset of NF due to iatrogenic infections are linked to a much worse prognosis. NF is an infection that is still observed rather seldom; however, some data indicate an increase of this type of infection in the last decade. Clinicians should therefore be aware and alert in cases of a significant soft tissue infection in order to rule out NF.
On the Classification of LS-Sequences
This paper addresses the question whether the $LS$-sequences constructed by Carbone yield indeed a new family of low discrepancy sequences. While it is well known that the case $S=0$ corresponds to van der Corput sequences, we prove here that the case $S=1$ can be traced back to two-sided Kronecker sequences and moreover that for $S \geq 2$ none of these two types occurs anymore. In addition, our approach allows for an improved discrepancy bound for $S=1$ and $L$ arbitrary.
Introduction
There are essentially three classical families of low-discrepancy sequences, namely Kronecker sequences, digital sequences and Halton sequences (compare [Lar14], see also [Nie92]). In [Car12], Carbone constructed a class of one-dimensional low-discrepancy sequences, called LS-sequences, with $L \in \mathbb{N}$ and $S \in \mathbb{N}_0$. The case $S = 0$ corresponds to the classical one-dimensional Halton sequences, called van der Corput sequences. However, the question whether LS-sequences indeed yield a new family of low-discrepancy sequences for $S \geq 1$, or whether they are just a different way to write down already known low-discrepancy sequences, has not been answered yet. In this paper, we address this question and thereby derive improved discrepancy bounds for the case $S = 1$.
Discrepancy. Let $S = (z_n)_{n \geq 0}$ be a sequence in $[0,1)^d$. Then the discrepancy of the first $N$ points of the sequence is defined by
$$D_N(S) := \sup_B \left| \frac{A_N(B)}{N} - \lambda_d(B) \right|,$$
where the supremum is taken over all axis-parallel subintervals $B \subset [0,1)^d$, $A_N(B) := \#\{n \mid 0 \leq n < N,\ z_n \in B\}$ and $\lambda_d$ denotes the $d$-dimensional Lebesgue measure. In the following we restrict to the case $d = 1$.
If $D_N(S) = O(N^{-1} \log N)$, then $S$ is called a low-discrepancy sequence. In dimension one this is indeed the best possible rate, as was proved by Schmidt in [Sch72]: there exists a constant $c$ such that $N D_N(S) \geq c \log N$ for infinitely many $N$, for every sequence $S$. The precise value of the constant $c$ is still unknown (see e.g. [Lar14]). For a discussion of the situation in higher dimensions see e.g. [Nie92], Chapter 3.
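As a concrete aside (an illustration of ours, not part of the original argument): in dimension one the closely related star discrepancy, where the supremum runs only over intervals anchored at 0, admits a simple closed form and bounds the extreme discrepancy defined above within a factor of two.

```python
# A minimal sketch: exact star discrepancy of a finite point set in [0, 1),
# via D*_N = 1/(2N) + max_i |x_(i) - (2i-1)/(2N)| for the sorted points.
# The extreme discrepancy D_N above satisfies D*_N <= D_N <= 2 * D*_N.

def star_discrepancy(points):
    xs = sorted(points)
    n = len(xs)
    return 1.0 / (2 * n) + max(abs(x - (2 * i - 1) / (2 * n))
                               for i, x in enumerate(xs, start=1))

def van_der_corput(k, base=2):
    """Radical inverse of k in the given base (the S = 0 situation)."""
    q, bk = 0.0, 1.0 / base
    while k > 0:
        q += (k % base) * bk
        k //= base
        bk /= base
    return q

pts = [van_der_corput(k) for k in range(1, 513)]
print(star_discrepancy(pts))  # decays roughly like log(N) / N
```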
A theorem of Weyl and Koksma's inequality imply that a sequence of points is uniformly distributed if and only if $\lim_{N \to \infty} D_N(S) = 0$. Thus, the only candidates for low-discrepancy sequences are uniformly distributed sequences. A specific way to construct uniformly distributed sequences goes back to the work of Kakutani [Kak76] and was later on generalized in [Vol11] in the following sense.
Definition 1.1. Let $\rho$ denote a non-trivial partition of $[0,1)$. Then the $\rho$-refinement of a partition $\pi$ of $[0,1)$, denoted by $\rho\pi$, is defined by subdividing all intervals of maximal length positively homothetically to $\rho$.
Definition 1.2. Let $L \in \mathbb{N}$, $S \in \mathbb{N}_0$ and let $\beta$ be the solution of $L\beta + S\beta^2 = 1$. An LS-sequence of partitions $(\rho_{L,S}^n \pi)_{n \in \mathbb{N}}$ is the successive $\rho$-refinement of the trivial partition $\pi = \{[0,1)\}$, where $\rho_{L,S}$ consists of $L + S$ intervals such that the first $L$ intervals have length $\beta$ and the successive $S$ intervals have length $\beta^2$.
The partition $\rho_{L,S}^n \pi$ consists of intervals only of length $\beta^n$ and $\beta^{n+1}$. Its total number of intervals is denoted by $t_n$, the number of intervals of length $\beta^n$ by $l_n$ and the number of intervals of length $\beta^{n+1}$ by $s_n$. In [Car12], Carbone derived the recurrence relations
$$t_n = L t_{n-1} + S t_{n-2}, \qquad l_n = L l_{n-1} + S l_{n-2}, \qquad s_n = L s_{n-1} + S s_{n-2}$$
for $n \geq 2$ with initial conditions $t_0 = 1$, $t_1 = L + S$, $l_0 = 1$, $l_1 = L$, $s_0 = 0$ and $s_1 = S$. Based on these relations, Carbone defined a possible ordering of the endpoints of the partition, yielding the LS-sequence of points. One of the observations of this paper is that this ordering indeed yields a simple and easy-to-implement algorithm but also has a certain degree of arbitrariness.

Definition 1.3. Given an LS-sequence of partitions $(\rho_{L,S}^n \pi)_{n \in \mathbb{N}}$, the corresponding LS-sequence of points $(\xi_n)_{n \in \mathbb{N}}$ is defined as follows: let $\Lambda_{L,S}^1$ be the first $t_1$ left endpoints of the partition $\rho_{L,S}\pi$ ordered by magnitude; given $\Lambda_{L,S}^n$, the set $\Lambda_{L,S}^{n+1}$ is then built inductively from the left endpoints of $\rho_{L,S}^{n+1}\pi$. As the definition of LS-sequences might not be completely intuitive at first sight, we illustrate it by an explicit example.
Example 1.4. For $L = S = 1$ the LS-sequence coincides with the so-called Kakutani-Fibonacci sequence (see [CIV14]). Here $\beta = (\sqrt{5} - 1)/2$, and we have $\xi_1 = 0$, $\xi_2 = \beta$, and so on.
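To make the refinement procedure concrete, the following sketch (our own illustration, assuming $S \geq 1$ and representing intervals as (left endpoint, length) pairs) performs successive $\rho_{L,S}$-refinements; it reproduces the sequence of partitions and the counts $t_n$, though not Carbone's ordering of the points from Definition 1.3.

```python
# A hedged sketch of the rho_{L,S}-refinement of Definitions 1.1/1.2.
import math

def ls_refine(partition, L, S, beta):
    """Split every interval of maximal length into L pieces of relative
    length beta followed by S pieces of relative length beta**2."""
    lmax = max(length for _, length in partition)
    out = []
    for left, length in partition:
        if not math.isclose(length, lmax):
            out.append((left, length))
            continue
        x = left
        for piece in [length * beta] * L + [length * beta ** 2] * S:
            out.append((x, piece))
            x += piece
    return out

L, S = 1, 1                                       # Kakutani-Fibonacci case
beta = (-L + math.sqrt(L * L + 4 * S)) / (2 * S)  # root of L*b + S*b**2 = 1
part = [(0.0, 1.0)]
counts = []
for _ in range(8):
    part = ls_refine(part, L, S, beta)
    counts.append(len(part))
print(counts)  # t_n = 2, 3, 5, 8, 13, 21, 34, 55 for L = S = 1
```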
Carbone's proof is based on counting arguments but does not give explicit discrepancy bounds. These were derived later by Iacò and Ziegler in [IZ15] using so-called generalized LS-sequences. A more general result, implying also the low discrepancy of LS-sequences, can be found in [AH13].
It has been pointed out that for parameters $S = 0$ and $L = b$, the corresponding LS-sequence coincides with the classical van der Corput sequence, see e.g. [AHZ14]. 1 However, for higher values of $S$ it has not been proved whether LS-sequences indeed yield a new family of examples of low-discrepancy sequences or are just a new formulation of some of the well-known ones. We close this gap to a certain extent by showing the following main result: Theorem 1.7. For $S = 1$, the LS-sequence is a reordering of the symmetrized Kronecker sequence $(\{n\beta\})_{n \in \mathbb{Z}}$. For $S \geq 2$ the LS-construction neither yields a (re-)ordering of a van der Corput sequence nor of a (symmetrized) Kronecker sequence.
Let us make the notion of symmetrized Kronecker sequences more precise: a Kronecker sequence is a sequence of the form $(\{nz\})_{n \geq 0}$; if $z \notin \mathbb{Q}$ and $z$ has bounded partial quotients in its continued fraction expansion (see Section 2), then $(z_n)$ has low discrepancy ([Nie92], Theorem 3.3). By a symmetrized Kronecker sequence we simply mean a sequence indexed over $\mathbb{Z}$, i.e. $(\{nz\})_{n \in \mathbb{Z}}$. Our approach does not only give a significantly shorter proof of the low discrepancy of LS-sequences for $S = 1$ but also improves the known discrepancy bounds by Iacò and Ziegler in this case.
Corollary 1.8. For $S = 1$ the discrepancy of the LS-sequence $(\xi_n)_{n \in \mathbb{N}}$ satisfies an explicit bound of the form $N D_N(\xi) \leq c_1 \log N + c_2$ with constants depending only on $L$.

Corollary 1.8 indeed improves the discrepancy bounds for LS-sequences given in Theorem 1.6 in the specific case $S = 1$; both results yield inequalities of this type, but with different constants.

Recall that every real number $z$ has a continued fraction expansion $z = [a_0; a_1, a_2, \ldots]$, where the $a_i$ are integers with $a_0 = \lfloor z \rfloor$ and $a_i \geq 1$ for all $i \geq 1$. The sequence of convergents $(r_i)_{i \in \mathbb{N}}$ of $z$ is defined by $r_i := [a_0; a_1, \ldots, a_i]$. The convergents $r_i = p_i/q_i$ with $\gcd(p_i, q_i) = 1$ can also be calculated directly by the recurrence relations
$$p_i = a_i p_{i-1} + p_{i-2}, \qquad q_i = a_i q_{i-1} + q_{i-2}.$$

Remark 2.1. If $S = 1$, then $\beta^2 + L\beta - 1 = 0$, or equivalently $1/\beta = L + \beta$, holds. Thus it follows that $a_i = L$ in the continued fraction expansion of $\beta$ for all $i = 1, 2, \ldots$.
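These recurrences are easy to check numerically. The sketch below (our own verification, using the standard initialization $p_{-1} = 1$, $q_{-1} = 0$, $p_0 = a_0$, $q_0 = 1$; the indexing in the paper may differ by a shift) computes the convergents of $\beta = [0; L, L, L, \ldots]$ and prints $|q_i \beta - p_i|$, which visibly equals $\beta^{i+1}$.

```python
# Convergents p_i/q_i of a continued fraction [a0; a1, a2, ...].
import math

def convergents(a, n):
    p_prev, q_prev, p, q = 1, 0, a[0], 1
    out = [(p, q)]
    for i in range(1, n):
        p, p_prev = a[i] * p + p_prev, p      # p_i = a_i p_{i-1} + p_{i-2}
        q, q_prev = a[i] * q + q_prev, q      # q_i = a_i q_{i-1} + q_{i-2}
        out.append((p, q))
    return out

L = 2
beta = (-L + math.sqrt(L * L + 4)) / 2        # root of b**2 + L*b - 1 = 0
for p, q in convergents([0] + [L] * 12, 12):
    print(p, q, abs(q * beta - p))            # |q*beta - p| = beta**(i+1)
```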
From now on, the continued fraction expansion of $\beta$ is studied, and it is always tacitly assumed that the $q_i$'s are the denominators of the convergents of $\beta$. Although the proof of the following lemma is rather obvious, we write it down here explicitly because our proof of the main theorem is based on this arithmetic observation.
Lemma 2.2. Let $n \in \mathbb{N}_0$. If $S = 1$ then we have (i) $q_{2n}\beta - p_{2n} = \beta^{2n+1}$ and (ii) $q_{2n+1}\beta - p_{2n+1} = -\beta^{2n+2}$.

Proof. We prove both claims by induction.
(ii) The proof works analogously to (i), using $\beta^2 - 1 = -L\beta$.

Example 2.3. Consider the Kakutani-Fibonacci sequence from Example 1.4. If we denote by $(f_n)_{n \geq 0}$ the Fibonacci sequence, i.e. the sequence inductively defined by $f_0 = 0$, $f_1 = 1$ and $f_n = f_{n-1} + f_{n-2}$ for $n \geq 2$, we have that $q_i = f_i$ for all $i = 1, 2, \ldots$.
Before going into the rather technical details of the proof, let us explain its idea for the example of the Kakutani-Fibonacci sequence ($L = S = 1$). Using $\beta + \beta^2 = 1$, the initial points of this sequence can easily be re-written as fractional parts $\{r\beta\}$ of integer multiples of $\beta$.

Proof. The two assertions are proved simultaneously by induction on $k$. For $n = 1, 2$ the claim is obvious from the definition, since $\xi_1 = 0$ and $\xi_2 = \beta, \ldots, \xi_{L+1} = L\beta$. Let $k \geq 2$ and $n = 2k + 1$ be odd. If we denote by $\equiv$ equivalence modulo 1, we have for $m \in \{0, \ldots, l_{n-1}\}$, by Lemma 2.2 and the induction hypothesis, a representation of the new points as fractional parts $\{r\beta\}$ with $-q_{2k-1} + 1 \leq r \leq -q_{2k-3}$ and $q_{2k-2} + 1 \leq r \leq q_{2k}$ and $1 \leq j \leq L$. Since the sequence is injective, the claim follows for odd $n$. So let $n = 2k + 2$ be even. Then we use again Lemma 2.2 and the induction hypothesis to derive the analogous representation with $-q_{2k-1} + 1 \leq r \leq -q_{2k-3}$ and $q_{2k-2} + 1 \leq r \leq q_{2k}$ and $1 \leq j \leq L$. This completes the induction.

Proof of Theorem 1.7. If $S = 1$ the LS-sequence is indeed a reordering of the symmetrized Kronecker sequence by Lemma 2.4. So let $S \geq 2$ and $L \geq S$. Then $\beta$ is irrational and the recurrence relation (1) holds. Hence the LS-sequence cannot be a reordering of a van der Corput sequence (which consists only of rational numbers).
Now assume that the LS-sequence is the reordering of a (possibly symmetrized) Kronecker sequence $\{n\alpha\}$ for some $\alpha \in \mathbb{R}$. Since $\alpha$ itself has to be an element of the LS-sequence, there exists an $n \in \mathbb{N}$ such that $\alpha$ can be uniquely written in the form $\alpha = \sum_{k=1}^{n} \alpha_k \beta^k$ with $\alpha_k \in \{0, \ldots, L\}$ for $k = 1, \ldots, n$ and $\alpha_n \neq 0$. By (1) we have the equality $\beta^k = x_k \beta + y_k$ with $x_k, y_k \in \mathbb{Q}$ and $s_k x_k, s_k y_k \in \mathbb{Z}$. Thus, $\alpha$ itself can be rewritten as $\alpha = x_\alpha \beta + y_\alpha$ with $x_\alpha, y_\alpha \in \mathbb{Q}$ and $s_n x_\alpha, s_n y_\alpha \in \mathbb{Z}$. However, $\beta^{n+1}$, which is an element of the LS-sequence, cannot be an element of $\{n\alpha\}_n$, since $\beta^{n+1} = x_{n+1}\beta + y_{n+1}$, where at least one of $x_{n+1}$ and $y_{n+1}$ has denominator $s_{n+1}$. This is a contradiction.
A main advantage of the approach via symmetrized Kronecker sequences is that it makes it possible to calculate improved discrepancy bounds, namely Corollary 1.8.
Proof of Corollary 1.8. We imitate the proofs of [Nie92], Theorem 3.3 and [KN74], Theorem 3.4, respectively, and leave out the technical details that are explained there very nicely. The number $N$ can be represented in the form $N = \sum_{i=0}^{l(N)} c_i q_i$, where $l(N)$ is the unique non-negative integer with $q_{l(N)} \leq N < q_{l(N)+1}$ and where the $c_i$ are integers with $0 \leq c_i \leq L$. Let $LS_N$ denote the set consisting of the first $N$ numbers of the LS-sequence. We decompose $LS_N$ into blocks of consecutive terms, namely $c_i$ blocks of length $q_i$ for all $0 \leq i \leq l(N)$. Consider a block of length $q_i$ and denote the corresponding point set by $A_i$. If $i$ is odd, $A_i$ consists of the fractional parts $\{nz\}$ with $n = n_i, n_i + 1, \ldots, n_i + q_i - 1$ according to Lemma 2.4. As shown in the proof of [Nie92], Theorem 3.3, this point set has a discrepancy bounded in terms of $q_i$ alone. If $i$ is even, $A_i$ consists of the fractional parts $\{-nz\}$ with again $n = n_i, n_i + 1, \ldots, n_i + q_i - 1$ by Lemma 2.4. Since $z$ and $-z$ have the same continued fraction expansion up to signs, we have the same bound in this case as well. Analogous calculations as in [KN74] then yield the assertion.
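The digit expansion $N = \sum_i c_i q_i$ used at the start of this proof can be produced greedily; here is a small sketch (our illustration, assuming initial denominators $q_0 = 1$, $q_1 = L$ and ignoring finer admissibility conditions on the digits).

```python
# Greedy Ostrowski-type expansion N = sum_i c_i * q_i with 0 <= c_i <= L,
# where q_i = L * q_{i-1} + q_{i-2} are the convergent denominators for S = 1.

def ostrowski_digits(N, L):
    qs = [1, L]                       # assumed initialization q_0, q_1
    while qs[-1] <= N:
        qs.append(L * qs[-1] + qs[-2])
    digits = []
    for q in reversed(qs):
        c, N = divmod(N, q)
        digits.append(c)
    return list(reversed(digits)), qs

digits, qs = ostrowski_digits(100, L=1)      # Zeckendorf-like for L = 1
print(digits)                                # every digit is <= L
print(sum(c * q for c, q in zip(digits, qs)))  # reconstructs 100
```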
Asymptotically, this again improves the more general result of [IZ15] in the special case $S = 1$. Finally, we would like to point out that it follows immediately from our approach that the Kakutani-Fibonacci sequence is the reordering of an orbit of an ergodic interval exchange transformation. In [CIV14], it was shown that a much more complicated interval exchange transformation is necessary in order to obtain the original ordering given in Definition 1.3.
Corollary 2.6. For $S = 1$, the LS-sequence is always a reordering of an orbit of an ergodic interval exchange transformation.
Institutional Delivery Services Utilization and Associated Factors at Hetosa District, Ethiopia
Background: The proportion of births attended by skilled health personnel in Ethiopia is very low, and the maternal mortality ratio currently is 676 per 100,000 live births. This study aimed at assessing the level of institutional delivery service utilization and associated factors among mothers who gave birth during the last 12 months prior to the study in Hetosa district, Arsi zone, Ethiopia. Methods: A community-based cross-sectional study design was used among 735 mothers who gave birth within the last one year at Hetosa District in 2015. The collected data were entered into a computer using Epi-info version 3.5.1 and exported to SPSS version 20 software for analysis. Univariate, bivariate and multivariate analyses were done. Significance and association of variables were tested by using 95% confidence intervals (CI) and odds ratios, and a p-value less than 0.05 was taken as statistically significant. Result: Forty-nine percent of the respondents gave birth at health facilities, and 98% of these delivered at a public health facility. Of those mothers who delivered at home, 36% were assisted by a neighbor, and the main reason for home delivery was easy (precipitate) labor of very short duration which forced the mother to deliver at home (89%), followed by transport problems (7.2%). According to the multivariate regression analysis, mothers who resided in urban areas, did not live with their husbands, and had one live birth were more likely to give birth at a health institution when compared to their counterparts. Conclusion and recommendation: This study revealed that institutional delivery was high as compared to some of the studies conducted in different parts of the country. Policy makers and health care planners need to recognize the factors hampering institutional delivery and work on improving the situation.
Introduction
Globally, there were an estimated 289,000 maternal deaths in 2013, yielding a maternal mortality ratio (MMR) of 210 maternal deaths per 100,000 live births. Developing countries account for 99% of the global maternal deaths [1]. Maternal mortality is by far the highest in sub-Saharan Africa, where the lifetime risk of death from pregnancy-related conditions is 1 in 16, compared with 1 in 2,800 in rich countries [2,3].
In Ethiopia, according to the EDHS 2011 and 2016, there are 676 and 412 maternal deaths for every 100,000 live births, respectively [4]. Maternal death is the most extreme consequence of poor maternal health outcomes; in addition, due to inadequate care during pregnancy and delivery or the first critical hours after birth, more than 30 million women in developing regions suffer from pregnancy-related complications. Major causes of maternal deaths in Ethiopia are similar to those in most developing countries, namely infection, hemorrhage, obstructed labour, abortion and hypertension in pregnancy [3,7]. At the health facility level, hemorrhage (PPH) is responsible for 11% of all maternal deaths due to direct obstetric complications. The proportion of deaths due to PPH that occurred in facilities is most likely due to the fact that over 90% of births take place at home, and women with PPH may not arrive at a health facility in time [4,8].
Institutional delivery service utilization is one of the key and proven interventions to reduce maternal death. It ensures safe birth, reduces both actual and potential complications and maternal deaths, and increases the survival of most mothers and newborns. However, most deliveries in developing countries occur at home without skilled birth attendants.
Socio-economic, socio-demographic, antenatal care attendance and health service related factors were associated with institutional delivery service utilization in studies conducted in Asia and Africa, including Ethiopia; not only maternal variables but also the husband's personal, socio-demographic and wealth characteristics were associated. Therefore, in recognition of the national burden of maternal mortality and the urgency to achieve the Millennium Development Goal 5 (MDG-5), the government of Ethiopia is committed to improving maternal health, with a target of reducing the maternal mortality ratio (MMR) to 267/100,000 live births through multi-pronged approaches including the provision of free delivery services [5,9].
Despite this, institutional delivery service use is low, and the majority of women in Ethiopia have been giving birth at home [6,10]. Hence, it is important to provide evidence-based research on the important determinants of institutional delivery service use. Therefore, this study was intended to assess institutional delivery service utilization and associated factors among mothers in Hetosa district, Arsi zone, Oromia region.
Study Area and Period
This study was conducted at Hetosa Woreda in Arsi Zone, Oromia Regional State, South East Ethiopia, in 2017. Iteya, the capital of the Woreda, is located 150 km from Addis Ababa, the capital city of Ethiopia. Organized into 23 rural and 3 urban kebeles, the district has 4 functional public health centers, one higher private clinic, and 23 rural and three urban health posts with two female health extension workers assigned per health post (52 HEWs); it has 68 health professional workers (10 health officers, 7 BSc nurses and 10 midwives), 75% potential health coverage, and an elevation of 2,215 meters above sea level. The total population was 151,859 (male = 77,448 (51%), female = 74,411 (49%)), with 33,409 women of child-bearing age and about 31,637 total households [10].
Study Design
A community-based cross-sectional study design was used.
Source Population
All mothers who gave birth within the last one year in Hetosa woreda.
Inclusion Criteria
All women aged 15-49 years who gave birth in the past 12 months, irrespective of birth outcome.
Exclusion Criteria
Mothers who were critically sick and could not communicate or listen, or who did not want to participate in this study, were excluded.
Sample Size
The sample size was calculated using the double population proportion formula in Epi Info software with the following assumptions: 95% CI with a power of 80%, a proportion of institutional delivery among those with a higher level of education of 24.5%, an odds ratio (OR) of 2 [11], a 1:1 ratio of educated to uneducated, a design effect of 2 and a non-response rate of 10%. The final sample size was then 735.
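For transparency, the stated figure can be approximately re-derived; the sketch below encodes our reading of the assumptions (two-sided alpha = 0.05, Fleiss formula with continuity correction as used by Epi Info, OR = 2 applied to the odds of p1 = 24.5%), so the exact software output may differ slightly.

```python
# Rough re-derivation of the sample size (target: 735).
from math import sqrt, ceil

z_a, z_b = 1.96, 0.8416          # z-values for alpha/2 = 0.025, power = 80%
p1 = 0.245                       # proportion among the unexposed group
odds2 = 2 * p1 / (1 - p1)        # apply OR = 2 to the odds of p1
p2 = odds2 / (1 + odds2)
pbar, d = (p1 + p2) / 2, abs(p2 - p1)

n = (z_a * sqrt(2 * pbar * (1 - pbar)) +
     z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / d ** 2
n_cc = n / 4 * (1 + sqrt(1 + 4 / (n * d))) ** 2      # continuity correction

total = 2 * ceil(n_cc)                               # both comparison groups
total = ceil(total * 2 * 1.10)                       # design effect 2, +10% NR
print(ceil(n_cc), total)                             # -> 167 per group, 735
```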
Sampling Procedures
A multi-stage sampling technique was used to select the study units. Hetosa Woreda has 26 kebeles (the smallest administrative unit) in total, 31,637 households and 79 gote (subdivisions of kebeles). Eight kebeles were selected by using the simple random sampling (lottery) method (Figure 1).
The sample size was distributed to the eight kebeles proportionally to the size of their population. Households with women who gave birth in the 12 months prior to the study period were selected using simple random sampling based on the sampling frame obtained from the health extension workers of the selected kebeles, collected through the community health information system (CHIS).
The sampling interval for households was determined by dividing the total number of households by the sample size for each kebele. If houses were closed or the mothers were not present at the time of data collection, up to three daily visits, supported by a volunteer selected from the kebele, were made throughout the data collection period until contact was established. The next houses were considered in place of the houses which could not be accessed for collecting mothers' data regarding institutional delivery service utilization. If there was more than one eligible mother within the same household, the lottery method was used to select the one to be included in the sample.
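Schematically, the two selection stages can be expressed as follows (a sketch with placeholder kebele names and household counts, not the actual CHIS frame).

```python
# Stage 1: simple random sampling of 8 of 26 kebeles; stage 2: proportional
# allocation of the 735 mothers, then SRS of households within each kebele.
import random

random.seed(1)
kebeles = {f"kebele_{i:02d}": random.randint(600, 2200) for i in range(26)}

selected = random.sample(sorted(kebeles), 8)                 # stage 1
total_hh = sum(kebeles[k] for k in selected)
alloc = {k: round(735 * kebeles[k] / total_hh) for k in selected}

households = {k: random.sample(range(kebeles[k]), alloc[k])  # stage 2
              for k in selected}
print(alloc)
```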
Operational Definition
Utilization of institutional delivery: giving the most recent birth at a public or private health facility, excluding health posts (where delivery is conducted by HEWs).
Data Collection Procedures and Data Quality Control
A structured, pre-tested questionnaire developed from the literature was prepared first in English, then translated into Afan Oromo, the local language, and then retranslated back from Afan Oromo to English by another person to ensure the consistency of the tool. Diploma-graduate female health workers conducted the face-to-face interviews through house-to-house visits and were supervised by two degree holders. Training was given to both data collectors and supervisors before the actual data collection regarding the aim of the study and the data collection tool and procedures. During data collection, the supervisors received questionnaires from the data collectors and reviewed them for completeness, accuracy and consistency on a daily basis. Corrective measures were taken by discussing with the research team. In general, data quality was ensured by assigning codes to the questionnaires during data collection so that any identified problems could be traced and resolved using the codes.
Data Management and Analysis Procedures
The collected data were entered into a computer using Epi-info version 3.5.1 software. After the entry of all data was complete, cleaning was done. Finally, the data were exported to SPSS version 20 for analysis. Both descriptive and bivariate/multivariate logistic regression analyses were performed. Descriptive analyses were done using frequencies, means, medians, standard deviations and percentages. Crude logistic regression was used to examine the relationship between one independent variable and the outcome at a time, and adjusted logistic regression was used to examine the relationship between many independent variables and the outcome variable after controlling for confounding factors. Significance and association of variables were tested using 95% confidence intervals (CI) and odds ratios, and a p-value less than 0.05 was taken as statistically significant.
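As an illustration of this pipeline (the study itself used SPSS 20; the variable names and file below are hypothetical), crude odds ratios come from single-predictor logistic models and adjusted odds ratios from the full multivariable model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hetosa_delivery.csv")   # assumed export of the survey data
predictors = ["residence", "religion", "parity_group", "lives_with_husband"]

# Bivariate (crude) logistic regression, one predictor at a time
for var in predictors:
    m = smf.logit(f"institutional_delivery ~ C({var})", data=df).fit(disp=0)
    print(var, np.round(np.exp(m.params), 2))          # crude ORs

# Multivariable model for adjusted ORs with 95% CIs
formula = "institutional_delivery ~ " + " + ".join(f"C({v})" for v in predictors)
full = smf.logit(formula, data=df).fit(disp=0)
print(np.exp(pd.concat([full.params, full.conf_int()], axis=1)))
```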
Ethical Consideration
Ethical clearance was obtained from the College of Health Science, Arsi University, and a supportive letter was obtained from the Arsi zonal health department and given to the Hetosa woreda health office and then to the kebeles. Oral informed consent was obtained from respondents after explaining the purpose and benefits of the study. To ensure the confidentiality of respondents, their names were not written on the questionnaire, and to keep the privacy of respondents, the interview was conducted separately without including anyone except the interviewer.
Socio-Demographic Characteristics of the Study Participants
A total of 735 mothers who gave birth in the last 12 months prior to this study were interviewed, with a response rate of 100%. Of the respondents, 652 (88.8%) were from rural areas. The mean age of the respondents was 25.78 ± 5.25 SD. Thirty-eight percent (281) of the mothers were in the age range of 25-29 years. The majority (96%) were married, and 523 (71%) of the mothers had attended formal education. With respect to occupation, 684 (93%) of the respondents were housewives. With regard to religion, 500 (68%) were Muslim and 222 (30%) were Orthodox. Oromo (693, 94%) was the predominant ethnic group. As to the husbands' occupational status, the majority, 80% (590), were farmers. Economically, 257 (35%) of the households had a monthly income of between 501 and 1,000 ETB, and 208 (28%) had less than 500 ETB monthly income, based on quartile classification (Table 1).
Obstetric History of the Respondents
Forty-three percent of the mothers had parity two to four, and 177 (24%) of them had parity five and above. Fifty-four percent of the mothers became pregnant before the age of 20 years. The minimum and maximum ages at first pregnancy were 14 and 34 years, with a mean age of 19.5 years ± 3.3 SD. About ninety-one percent of the respondents had ANC follow-up during the recent delivery, with knowing their health status being the most commonly raised reason (42.6%). Of the 668 mothers who attended ANC services, 506 (75.7%) received information on where to deliver and on delivery complications. Among the respondents who did not visit a health facility, 37 (50%) reported that their main reason was having no health problem during labour (Table 2).
Access to Information and Health Facility
Eighty-one percent of the respondents had ever had health education on maternal health. Fifty-nine percent pointed out health extension workers as the main source of information on maternal health, whereas the least common source was traditional birth attendants. Almost all (99.6%) of the respondents had access to a health facility in their kebele. Concerning the time they travelled on foot to reach the nearest health facility, 652 (89%) of them said less than one hour and 83 (11%) said greater than one hour (Table 3).
Awareness and Attitude Related to Institutional Delivery of the Respondent
Regarding awareness of place of delivery, the majority (93.6%) of the mothers pointed out that giving birth at a health facility was better than giving birth at home. The reasons why they preferred to deliver at a health institution for the recent birth were that, if delivery was conducted at a health facility, there would be no bleeding (301, 22.7%) and that it saves the mother's life (288, 21.7%), with other reasons as mentioned in Table 4. Regarding mothers' awareness of their susceptibility to pregnancy and delivery complications, 224 (30.5%) of the respondents reported that every mother, including herself, is susceptible to pregnancy and delivery complications, while 25.3% (213), 21.7% (101), 24.7% (208) and 23.4% (197) said that primigravida mothers, mothers with multiple pregnancy, mothers with other medical problems and multigravid mothers, respectively, are susceptible to pregnancy and delivery complications.
Concerning awareness of the occurrence of complications during delivery, 418 (34.5%), 384 (31.8%), 361 (29.8%) and 47 (3.9%) of the respondents mentioned severe hemorrhage, retained placenta lasting more than 30 minutes, prolonged labor lasting more than 12 hours, and loss of consciousness, respectively, as labour complications that can occur during childbirth.
Regarding attitudes toward institutional delivery, respondents were asked questions related to their attitude to institutional delivery services. More than three quarters of the respondents (78%) had a positive attitude regarding ANC and delivery services (Table 4).
Institutional Delivery Service Utilization
Of the respondents, 360 (49%) (95% CI: 47.05-50.95) gave birth at health facilities and 375 (51%) delivered at home. Of those mothers who delivered at home, 135 (36%) were assisted by a neighbor, and the main reason for their home delivery was easy (precipitate) or uncomplicated labor (334, 89%), followed by transport problems (7.2%). Among the respondents who delivered at a health institution, 356 (98.9%) delivered at a public health facility, and the decision on the choice of place of delivery was made mostly by the mothers themselves (59.4%) (Table 5).
Determinants of Utilization of Institutional Delivery
In the bivariate logistic regression, residence, religion, receiving information about institutional delivery, parity, living with the husband, ANC follow-up, distance to a health facility, monthly income and age at first delivery were significantly associated variables. In the multivariable logistic analysis, after controlling for possible confounding effects, residence, religion, parity and living in the same house with the partner showed significant associations with the utilization of institutional delivery.
Mothers who resided in urban areas were about 10.9 times more likely to utilize a health facility for delivery than those who lived in rural areas (AOR = 10.9). Similarly, mothers who did not live with their husbands were 2.4 times more likely to use institutional delivery than mothers who lived with their husbands (AOR = 2.4, 95% CI: 1.4, 4.4). On the other hand, those respondents who had had one live birth were more than three times as likely to give birth at a health institution when compared to those who had had four or more live births.
Discussion
This study attempted to identify the degree of skilled delivery service utilization and associated factors among mothers who gave birth in the last 12 months prior to the study in Hetosa District. The study showed that institutional delivery service utilization was 49% in the District and that most of the mothers (51%) gave birth at home. This is consistent with other studies conducted previously in Ethiopia at Woldia Woreda (48.3%) and Goba Woreda (47%) [12,13]. However, it is higher compared to similar community-based studies conducted in Dodota Woreda, Sekela District and Munesa Woreda, South East Ethiopia, where the proportions of women who gave birth at health facilities were only 18.2%, 12.3% and 12.1%, respectively [14][15][16]. This discrepancy could be due to the time gap between these studies and differences in study settings, and there might have been improvements in the accessibility and utilization of institutional delivery services by the time of the current study. It was lower than the finding from a study conducted in Bahir Dar town (78.8%). This large difference is partially explained by the fact that this study was done in a rural setting whereas that of Bahir Dar was in a town, where health facilities are nearby and community awareness about institutional delivery is very high. In addition, mothers in urban areas may be more autonomous in decision making, have good knowledge of pregnancy and delivery complications, and have better access to information than rural mothers [17].
Institutional delivery was influenced by place of residence. Women residing in urban areas were about 11 times more likely to deliver in a health care facility than rural women. This finding is in line with other studies done in our country [18]. This indicates the difference in access, especially in terms of physical distance, which is important for service utilization. If health facilities are not in close proximity or within walking distance, rural mothers are less likely to afford the transportation cost. In many instances, even if they can afford to pay the transportation fare, a vehicle may not be available at the time they need it.
Parity commonly appears as a major factor responsible for the utilization of ANC and institutional delivery. Although studies from Ethiopia [19] and elsewhere in Africa [20] have shown an inverse relationship between parity and the use of ANC services, the present study revealed that women having fewer than two children were more likely to use institutional delivery. This result was consistent with other national studies in which the probability of giving birth at health facilities decreases for women with five or more births [19]. One reason for this relationship could be the limited access to resources and time constraints related to child care and household activities. Another likely reason is that women with more children perceive delivery as a normal process and develop the confidence to give birth at home.
Similar to the findings of various studies in the country, religion emerged as a predictor of maternal service utilization [20,21]. In our study, Orthodox Christian followers were more likely to utilize the service than followers of other religions (Protestant, Catholic and Muslim). How religion influences maternal health service utilization needs further study to ascertain; however, one assumption is that Orthodox Christian priests may encourage their followers to use institutional delivery more than other religions do.
Unlike in other studies, mothers who did not attend ANC follow-up utilized institutional delivery services more than those who attended ANC. In this case, ANC services may not have provided opportunities for health workers to promote a specific place of delivery or to give women information on the status of their pregnancy, which in turn would alert them to decide where to deliver.
The majority of mothers preferred home delivery. The most common reasons for home delivery were easy labor, transport problems, far distance from a health facility, poor service, fear of user fees, feeling shame and husband's refusal. A similar study from Arsi Zone revealed that the main reason for mothers to prefer home delivery was short labor (54.8%) [19]. Another study in the northwest identified that the main reasons for home delivery were smooth and short labor (42%), the need to be attended by relatives during labor (44.7%), and trusting traditional birth attendants and cultural beliefs (55.3%) [15].
This study showed that mothers who delivered at home had faced birth complications. Mothers who delivered at home reported complications such as severe hemorrhage (34.5%), retained placenta (31.8%), prolonged labor (29.8%) and loss of consciousness (3.9%). This finding is similar to the result reported from North Woldia, which revealed that 27.3%, 24.3% and 39.4% of mothers who gave birth at home had excessive vaginal bleeding, prolonged labor and retained placenta, respectively [15,12].
In summary, the above-mentioned points indicate that place of residence, long distance to a health facility, transport problems, not living in the same home with the partner, number of deliveries (parity) and ANC follow-up were the major determinants of and barriers to mothers' utilization of institutional delivery.
Limitations of the study
One of the limitations of the study was the lack of a complementary qualitative study; another is that the cross-sectional study design makes it difficult to determine the direction of causality.
Conclusion
Institutional delivery is high in the study area compared to previous reports, but still low compared to the national plan. Place of residence, living separately from the husband, long travel distance, parity and ANC follow-up were the major factors determining institutional delivery in this study. It is recommended that information on the complications of pregnancy and delivery, and on the importance of using either institutional delivery services or skilled midwifery assistance in the home at every childbirth, be given to every mother who comes to a health facility in general and at ANC visits in particular. It is also important to increase girls' enrollment in education. Besides giving technical and material support, non-governmental organizations should work on institutional delivery programs in order to scale up institutional delivery service utilization.
We would also like to acknowledge all data collectors, supervisors and study participants.
Neutron-antineutron oscillations on the lattice
One possible low energy process due to beyond the Standard Model (BSM) physics is the neutron-antineutron transition, where baryon number changes by two units. In addition to providing a source of baryon number violation in the early universe, interactions of this kind are natural in grand unified theories (GUTs) with Majorana neutrinos that violate lepton number. Bounds on these oscillations can greatly restrict a variety of GUTs, while a non-zero signal would be a "smoking gun" for new physics; however, to make a reliable prediction, the six-quark nucleon-antinucleon matrix elements must first be calculated non-perturbatively via lattice QCD. We review the current understanding of this quantity, describe the lattice formalism, and present preliminary results from $32^3\times256$ clover-Wilson lattices with a pion mass of 390 MeV.
Introduction
One unanswered mystery of the universe is the process that led to the abundance of observed baryons as compared to their antibaryon counterparts. The source of this baryon number violation, which is expected to come from beyond the Standard Model (BSM) physics, can be realized in low-energy processes such as proton decay (if baryon number is violated by 1 unit) or transitions between neutrons and antineutrons (if baryon number is violated by 2 units). The latter case, often referred to as neutron-antineutron oscillation (akin to neutral meson mixing), proves to be an intriguing scenario when considering the usual sphaleron picture of baryogenesis (which violates baryon number, B, and lepton number, L, but conserves B − L) coupled with Majorana neutrinos [1] (whose transitions between neutrinos and antineutrinos lead to ∆L = 2). Additionally, neutron-antineutron oscillations do not suffer from kinematic suppressions that can restrict proton decay if there is little overlap with the initial state proton and final state electron or muon [2]. To that end, neutron-antineutron oscillations have been explored experimentally with intriguing prospects for O(1000) improvements in upcoming experimental efforts [3].
Any discussion of neutron-antineutron oscillations starts with assuming the existence of some BSM process that leads to a ∆B = 2 operator in the low-energy effective field theory. This operator will lead to off-diagonal elements $\delta m$ of the Hamiltonian of the neutron-antineutron system,

$$H = \begin{pmatrix} E + V & \delta m \\ \delta m & E - V \end{pmatrix},$$

where $V$ is the potential difference between the neutron and the antineutron (a magnetic field can lead to a non-zero $V$ since the magnetic moments have opposite signs) and $V = 0$ in a free system. Upon solving the Schrödinger equation for the system, one finds the transition probability between neutrons and antineutrons,

$$P_{n\to\bar n}(t) = \frac{\delta m^2}{\delta m^2 + V^2}\,\sin^2\!\left(\sqrt{\delta m^2 + V^2}\;t\right).$$

While this equation is true for a given $V$, it is standard to define the period of free neutron oscillations due to the BSM physics as $\tau_{n\bar n} = 1/\delta m$. The value for $\tau_{n\bar n}$ greatly depends on which BSM scenario is being explored. It has been estimated that a bound of $\tau_{n\bar n} \gtrsim 10^{10}-10^{11}$ seconds is sufficient to rule out many of the current models [4]. For example, TeV-scale seesaw mechanisms for neutrino masses in SU(2)$_L$ × SU(2)$_R$ × SU(4)$_c$ are expected to be ruled out at $\tau_{n\bar n} \gtrsim 10^{10}-10^{11}$ seconds [5]; and SO(10) seesaw mechanisms with adequate baryogenesis, at $\tau_{n\bar n} \gtrsim 10^{9}-10^{12}$ seconds [6]. Current experimental limits can even restrict extra-dimensional models with new particles with masses below a TeV [2,7]. It should be emphasized that these are order-of-magnitude estimates with the QCD input coming from naïve dimensional analysis. Future estimates will require rigorous and precise lattice calculations to keep pace with experimental precision. The detection mechanism for these transitions is the cold annihilation of a newly formed antineutron with a nearby neutron (for more details, see W. M. Snow's plenary at PXPS 2012 [3]).
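As a rough numerical illustration of the two formulas above, the sketch below evaluates the quasi-free transition probability; all values are placeholders chosen for illustration, not results from this paper.

```python
import numpy as np

# Toy illustration of n -> nbar oscillations in natural units (hbar = 1).
# delta_m is the BSM-induced mixing and V the n/nbar potential difference;
# the numbers below are placeholders, not physical results.

def transition_probability(t, delta_m, V=0.0):
    """P(n -> nbar) after time t, for mixing delta_m and potential V."""
    omega = np.sqrt(delta_m**2 + V**2)
    return (delta_m / omega) ** 2 * np.sin(omega * t) ** 2

tau = 1.0e10            # assumed free oscillation period, in seconds
delta_m = 1.0 / tau     # corresponding mixing, s^-1
t = 1.0                 # one second of free flight

print(transition_probability(t, delta_m))  # ~1e-20
print((t / tau) ** 2)                      # quasi-free limit: agrees for t << tau
```

For $t \ll \tau_{n\bar n}$ the probability reduces to $(t/\tau_{n\bar n})^2$, which the last two lines confirm numerically.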
The primary channel for this cold annihilation is $\bar n n \to 5\pi$, and this unique signature allows for experimental signals with little or no background. Generally, there are two sets of experimental searches. The first, which comes for free with large proton decay detectors such as Super-K, is based on neutron-antineutron annihilation within nuclei. Naïvely, one might expect this to occur quite frequently, as the number of nuclei far exceeds the expected bound $\tau_{n\bar n} \gtrsim 10^{11}$; however, the oscillation period within nuclei is highly suppressed: the intranuclear lifetime scales roughly as $T_A \simeq R\,\tau_{n\bar n}^2$, where $R \sim 10^{22}\textrm{-}10^{23}\,\mathrm{s}^{-1}$ is a nuclear suppression factor [8]. As a result, the bound is suppressed compared to the free expectation, and one must rely on model estimations and extrapolations to extract it. To date, the most stringent bound from experiments of this kind, $\tau_{n\bar n} > 3.5 \times 10^{8}$ seconds, comes from Super-K (2011) [9]. The second type of experiment explores the annihilation of free, cold neutrons with a target after a significant time of flight. This type of experiment is free of the model-dependent estimations required for annihilations within nuclei and allows for greater control of systematics. To date, the most stringent bound comes from the ILL experiment (1993): $\tau_{n\bar n} > 0.86 \times 10^{8}$ seconds [10]. A factor of O(1000) increase is estimated for future experiments of this kind, but the bounds to rule out various BSM theories could be altered significantly depending on QCD enhancement or suppression of the neutron-antineutron matrix elements.
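As a quick consistency check of the intranuclear scaling just quoted, the sketch below converts a nuclear lifetime bound into a free-oscillation bound; both input numbers are assumed literature values, not numbers taken from this paper.

```python
import math

# Convert an intranuclear lifetime bound T_A into a free-oscillation bound
# via T_A ~ R * tau**2. Both inputs are assumed literature values.
T_A = 1.9e32 * 3.156e7   # assumed 16O lifetime bound in years -> seconds
R = 0.52e23              # assumed nuclear suppression factor, in 1/s

tau_free = math.sqrt(T_A / R)
print(f"tau_free > {tau_free:.2e} s")   # ~3.4e8 s, of the order quoted above
```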
Oscillations and matrix elements
The observed value of the mixing arises from three inputs,

$$\delta m = c_{\rm BSM}\, c_{\rm QCD}\, \langle \bar{n} | \mathcal{O} | n \rangle,$$

where $c_{\rm BSM}$ is the running of the BSM theory to the weak interaction scale, $c_{\rm QCD}$ is the QCD running from the weak to the nuclear scale, and $\langle \bar{n} | \mathcal{O} | n \rangle$ is the non-perturbative matrix element mixing the neutron and antineutron states. The one-loop perturbative QCD running, $c_{\rm QCD}$, is known [7,11], and $c_{\rm BSM}$ has been calculated for multiple theories [2,5-7]. The operator $\mathcal{O}$ contains two up quarks and four down quarks and is composed of three diquark bilinears of the form $q^T C P_\chi q'$, where $C$ is the charge conjugation matrix and $P_\chi$ ($\chi = L, R$) is a chiral projector. These terms always come in chiral pairs, since the mixed-chirality terms vanish. Lastly, these operators are invariant under the color symmetry SU(3)$_c$, which admits two independent color tensors built from pairs of $\epsilon$ tensors carrying the color indices $i, j, k, l, m, n$. These three conditions lead to three types of operators [12], labeled by chiralities $\chi_i = L, R$. At first glance, there would appear to be 24 independent operators, but there are several additional symmetries. The first set of symmetries, which follows from the flavor structure, reduces the set to 18 independent operators. An additional symmetry, which emerges from antisymmetrizing pairs of epsilon tensors over four indices (with $\sigma = L, R$ and $\rho = L, R$), reduces the set to 14 operators. In addition to enforcing SU(3)$_c$, it is also expected that the operators should be invariant under SU(2)$_L \otimes$ U(1)$_Y$. This gauge symmetry and the symmetries in Eq. (2.6) leave only six operators, for which we will present results; including the symmetry in Eq. (2.7) leads to further conditions that reduce the number of independent operators to four. We will use these last relations as a consistency check of our calculation.
Lattice formalism and contraction details
The mechanism to extract the neutron-antineutron matrix elements follows the common practice of taking ratios of three-point to two-point correlation functions. In particular, the three correlation functions of interest and their large Euclidean time behavior are given in Eq. (3.1). The desired quantity, $\langle \bar{n}|\mathcal{O}|n\rangle$, is the long-time asymptote of a combination of these correlation functions, Eq. (3.2). The six-quark neutron-antineutron three-point correlation function has several key advantages over typical bilinear or four-quark nucleon operators. First, if the starting point for the propagator is at the operator insertion (as shown in Fig. 1), only one propagator is needed per measurement, whereas the typical nucleon three-point function requires two propagators, one that starts from the source and one that starts from the operator. Second, because the propagator starts at the operator, one can acquire all the source-operator separations (given by t1 in Fig. 1) and operator-sink separations (given by t2 in Fig. 1), which allows for a two-dimensional analysis to quantify the excited state effect. Alternatively, typical three-point functions require far more computational resources to quantify excited state effects. Lastly, the neutron-antineutron matrix element contains no disconnected or quark loop contractions, which removes the need for costly all-to-all propagators.

Figure 1: Comparison of neutron-antineutron three-point contractions (left) to typical bilinear three-point contractions (right). One propagator is required for measurements at all (t1, t2) for the left diagram, and two propagators are required for one measurement at a single t1 value for the right diagram.
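To make the extraction concrete, the toy sketch below builds a ratio of a three-point to two two-point functions with single-state correlators; the amplitude Z, mass m, matrix element M, and the ratio's normalization are assumptions for illustration, not the paper's Eq. (3.2).

```python
import numpy as np

# Toy demonstration: with ground-state-only correlators the exponentials
# cancel in R(t1, t2) = C_3pt(t1, t2) / (C_2pt(t1) * C_2pt(t2)), leaving
# a plateau at the matrix element M. All parameter values are invented.
M, m, Z = 0.8, 0.5, 1.3

t1 = np.arange(1, 40)[:, None]    # source-operator separations
t2 = np.arange(1, 40)[None, :]    # operator-sink separations

def c2pt(t):
    return Z * np.exp(-m * t)     # ground-state two-point function

c3pt = M * Z**2 * np.exp(-m * (t1 + t2))   # six-quark three-point function

R = c3pt / (c2pt(t1) * c2pt(t2))
print(R[20, 20])                  # -> 0.8 = M at any (t1, t2) in this toy
```

In real data, excited states add extra exponentials to each correlator, so R only plateaus at large t1 and t2, which is exactly why the two-dimensional scan described above is valuable.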
One possible disadvantage is that multiplying six propagators together (as done for the neutron-antineutron correlator) could increase the signal-to-noise degradation as compared to bilinear or four-quark matrix elements; however, we find a reasonably good signal-to-noise ratio, as shown in Fig. 2.
Lattice Details
The lattice calculations were performed with Chroma [14] using the 32³ × 256 anisotropic clover-Wilson lattices defined in Ref. [15] with a pion mass of 390 MeV. The temporal and spatial lattice spacings are roughly 0.035 and 0.123 fm, respectively, and the total spatial extent is roughly 4 fm (m_π L ∼ 7.8). For this preliminary calculation, we use a total of 159 configurations, each separated by 4 trajectories, to calculate 7268 propagators with Gaussian-smeared sources. Contractions of these propagators lead to the same number of measurements at all source-operator and operator-sink separations.
Preliminary results
The desired matrix elements, ⟨n̄|P_i|n⟩, can be extracted from the long Euclidean time behavior of Eq. (3.2). For each ratio R, there are two time inputs, the source-operator separation (t1) and the operator-sink separation (t2). In Fig. 2, R for the P_1 operator is plotted against t2 for six different values of t1. Two features stand out from these plots. First, there is a significant range of time slices where a signal can be extracted and the signal-to-noise degradation is not overly restrictive. Second, it is evident that there is significant excited-state dependence as t1 is varied (for example, the plateaux extracted for t1 = 10 and t1 = 30 are significantly different). For this reason, it is very important to use all information available to explore the full behavior of R as a function of both t1 and t2.
In Fig. 3, R and |R| are plotted in 2D against t1 and t2. Again, it is clear that there is a significant amount of non-trivial behavior due to excited states. To that end, a 2D correlated fit has been performed over the time slices 10 < t2 < 25 and 30 < t1 < 40. For this preliminary calculation, systematic errors are estimated by adjusting the 2D fit window by ±1 on all sides.
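A minimal sketch of that window-variation procedure follows, with fake noise-only data standing in for the measured R(t1, t2); the constant-fit shortcut and the specific windows are assumptions for illustration.

```python
import numpy as np
from itertools import product

# Estimate the fit-window systematic: refit the plateau over windows shifted
# by +/-1 on every edge and take the largest deviation from the central fit.
rng = np.random.default_rng(0)
R = 0.8 + 0.01 * rng.standard_normal((50, 50))   # fake R(t1, t2) data

def plateau(R, w1, w2):
    """Constant ('plateau') fit approximated by a mean over a 2D window."""
    return R[w1[0]:w1[1], w2[0]:w2[1]].mean()

central = plateau(R, (30, 40), (10, 25))
variants = [plateau(R, (30 + a, 40 + b), (10 + c, 25 + d))
            for a, b, c, d in product((-1, 0, 1), repeat=4)]
syst = max(abs(v - central) for v in variants)
print(f"{central:.4f} +/- {syst:.4f} (fit-window systematic)")
```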
The bare (unrenormalized) results for ⟨n̄|P_i|n⟩ are shown in Table 1. Eq. (2.9) is satisfied exactly, configuration by configuration, but only stochastically in Table 1 due to the bootstrapping in the analysis. The corresponding values calculated from the MIT bag model are also displayed for comparison. The magnitude of each operator as computed on the lattice is below that derived using the MIT bag model; however, it should be emphasized that the lattice results are very preliminary and still require renormalization factors.
Systematic effects
The primary systematic uncertainty in comparing the results of Table 1 to experiment is the unphysically large pion mass used. While it is not clear that an IR quantity such as the pion mass should dramatically affect the short-distance six-quark vertex, it is a distinct possibility given that contractions of this system are similar to those for low-energy NN scattering, where physical quark masses are expected to lead to a dramatic increase in the scattering length [16].
The second source of systematic uncertainty is the lattice cutoff (i.e., discretization) and matching the lattice regularization to the usual MS scheme used in the perturbative running [7,11]. Generically, operators of interest might mix with lower dimensional operators with the same symmetries, leading to diverging 1/a corrections (where a is the lattice spacing). However, this is not an issue for these operators since the lowest dimension operator that can lead to a ∆B = 2 interaction requires six quarks. Regardless, there are expected to be O(a) corrections and renormalization coefficients that should be quantified.
The third systematic, which is clear from Fig. 2 and Fig. 3, is excited-state contamination. The calculation of the six-quark neutron-antineutron correlator gives us a unique view of these contaminations as a 2D function of t1 and t2, which is difficult to come by for any other nucleon three-point function. For this reason, we should be able to accurately quantify these contaminations. Finally, finite volume effects should be quantified as well, though their impact is expected to be insignificant given the m_π L ∼ 7.8 lattice used.
Future prospects
We are in the process of taking several steps to improve upon this very preliminary work: we are extending the calculation presented here as well as repeating it for a lighter, 240 MeV pion mass at the same volume and for the current pion mass with a smaller, 2.5 fm spatial extent with a larger ensemble. We are exploring perturbative and non-perturbative lattice renormalization to properly match onto the perturbative QCD running previously calculated. We are also refining our analysis procedures to better quantify excited state effects.
Within the next year or two, we hope to carry out this calculation both with physical pion masses and with a chiral fermion discretization (domain-wall fermions). Both calculations are numerically expensive, but within reach of the LLNL 20 PetaFlops Sequoia BG/Q.
Trends, patterns and health consequences of multimorbidity among South Korea adults: Analysis of nationally representative survey data 2007-2016
Background Multimorbidity is a global challenge. It is more common in the elderly and deprived populations. Health systems are not providing appropriate care for people with multimorbidity as they are focused on managing single diseases and are not oriented to effectively manage the complexity of care-coordination for multimorbidity. This study aims to examine trends, disparities and consequences of multimorbidity over a 10-year period. It also aims to analyze different multimorbidity clusters and their association with quality of life. Methods This study analyzes the Korea National Health and Nutrition Examination Survey – a cross-sectional survey repeated each year on approximately 10 000 individuals aged one or more in 192 regions of South Korea – for the 10-year period 2007-2016. This is a population-based study based on nationally representative survey data for 10 years in Korea. Our study included 68 590 adults aged 19 or more who answered questions on the presence of diseases. 39 chronic conditions were included. Disease clustering by frequency, composition and number of diseases from the top 10 most common chronic conditions was used to establish patterns of multimorbidity clusters. We performed regression analyses to analyze the annual trend and the prevalence of multimorbidity across socioeconomic strata. Regressions were performed to measure the association between multimorbidity and unmet need, health care service utilization, sickness days, perceived health status, and EQ-5D. Results Multimorbidity increased in the study period and was more prevalent in the elderly, females, and people with lower household income and education level. Multimorbidity was associated with increased unmet need, health care utilization and sickness days and reduced perceived health status and quality of life. Hypertension was the most common condition in individuals with multimorbidity. Reduced quality of life was associated with an increasing number of chronic diseases and with multimorbidity clusters which included stroke and arthritis. Conclusions The prevalence of multimorbidity varied across socioeconomic strata, with higher levels and health consequences observed in individuals in lower socio-economic income groups. Different multimorbidity clusters had differential effects on the quality of life. Health system designs incorporating integrated care strategies for complex conditions are required to effectively manage multimorbidity and different multimorbidity clusters.
People with multimorbidity tend to have high levels of unmet health care need and typically do not receive appropriate care [10,11]. This is partly because of single disease focus of health systems that are not designed to cope with the complexity of care-coordination for people with multimorbidity that have complex health care needs requiring management by multidisciplinary teams [12][13][14].
We present a study that uses nationally representative yearly survey data from the Korea National Health and Nutrition Examination Survey (KNHANES) over a 10-year period to analyze trends of multimorbidity and patterns of multimorbidity based on disease clustering [21]. We also examine the relationship between multimorbidity and access to health care, health care utilization and quality of life. We analyzed the presence of different multimorbidity clusters with varied composition and frequency of diseases and the association of these clusters with access to health care, health care utilization and quality of life. As with most countries of the world, multimorbidity in South Korea is a major health challenge as its population is aging more rapidly than that of any other high-income country [22].
Sample and data sources
We used data from KNHANES for the period 2007-2016 [21]. KNHANES is a self-reported nationally representative survey, designed and conducted by the Korean government each year. It is designed to collect information on socioeconomic status, health behaviors, health care utilization, medical conditions, physical and mental status, quality of life and nutrition from approximately 10 000 individuals aged one or more, in 192 regions of South Korea [21]. KNHANES is based on multistage cluster sampling, and survey participants change from year to year [21]. Survey questions are categorized for three different groups according to stage of life: children (aged 1-11 years), adolescents (aged 12-18 years) and adults (aged 19 years or more) [21].
Our study sample included 68 590 adults aged 19 or more who answered questions on the presence of diseases. We excluded children and adolescents aged 18 years or younger because most of the questions on diseases were limited to adults. Following a review of published literature and a detailed report on multimorbidity [7,15], we included 39 chronic conditions available from the survey based on the classification in KNHANES' guidelines [21]. We coded the 39 chronic conditions into 28 after grouping myocardial infarction (MI) and angina into MI or angina, eight kinds of cancer (stomach, liver, colon, breast, cervix, lung, thyroid and other) into cancer, three kinds of vision problems (cataract, glaucoma and macular degeneration) into vision problems, and chronic hepatitis B and hepatitis C into viral hepatitis (Table 1).
Measures and analysis
As with prior studies, we defined multimorbidity as the concurrent existence of two or more of the 28 chronic conditions in one person [3,7,15]. We used the annual survey weights provided by KNHANES to examine the yearly national population estimation [21]. Using these annual survey weights, we provided descriptive statistics to summarize the evolution of multimorbidity and chronic conditions and conducted logistic regression to test the linear trend of the annual prevalence of multimorbidity.
The chronic conditions included in the survey questions may change from year to year. To minimize the potential selection bias from this yearly change in chronic conditions, we created a pooled ten-year weight based on the annual weights.
Based on this pooled weight we analyzed the distribution of multimorbidity across socioeconomic strata. To analyze differences in multimorbidity by socioeconomic status we first used descriptive statistics, including a box plot and histograms, to visualize the distribution of multimorbidity across socioeconomic strata. We conducted bivariate and multivariate logistic regression between the prevalence of multimorbidity and socioeconomic status (age, sex, household income, education). We treated age as a continuous variable, sex as a binary variable, household income as a categorical variable based on household income quartiles, and education as a categorical variable. We quantified the association between socioeconomic status and the prevalence of multimorbidity by reporting unadjusted and adjusted odds ratios (ORs).
We used unmet need, outpatient utilization, inpatient utilization, sickness days, perceived health status, and EQ-5D index scores as the measures of health consequences related to multimorbidity. Unmet need was a binary variable indicating whether respondents had unmet need over the past year; outpatient utilization was a binary variable indicating whether respondents had outpatient visits over the past two weeks; inpatient utilization was a binary variable indicating whether respondents had inpatient visits over the past year; and sickness days was coded as a binary variable indicating whether respondents had sickness days over the past year. Perceived health status was an ordinal variable with five categories ranging from 1 (very poor) to 5 (very good); we treated it as a continuous variable. EQ-5D [23-25] is a standardized instrument that measures five dimensions: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. EQ-5D index scores indicate health-related quality of life (HRQoL) on a scale from 0 (dead) to 1 (perfect health). We conducted logistic regressions and calculated odds ratios (OR) with 95% confidence intervals (CI) for the association between the presence of multimorbidity and unmet need, outpatient utilization, and inpatient utilization. We conducted linear regressions and calculated regression coefficients with 95% CI for the association between the presence of multimorbidity and perceived health status and EQ-5D index scores.
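As an illustration of this modelling step, a minimal sketch follows. This is not the authors' code: the file name, column names, and the omission of survey weights are all assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled KNHANES-style file with one 0/1 flag per condition.
df = pd.read_csv("knhanes_adults.csv")
condition_cols = [c for c in df.columns if c.startswith("cond_")]

df["n_conditions"] = df[condition_cols].sum(axis=1)
df["multimorbidity"] = (df["n_conditions"] >= 2).astype(int)  # >=2 of 28

# Adjusted odds of unmet need for people with multimorbidity
# (survey weights omitted here for brevity).
fit = smf.logit("unmet_need ~ multimorbidity + age + C(sex) + "
                "C(income_quartile) + C(education)", data=df).fit()

print(np.exp(fit.params["multimorbidity"]))          # adjusted OR
print(np.exp(fit.conf_int().loc["multimorbidity"]))  # 95% CI
```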
MI – myocardial infarction, TMJ – temporomandibular joint dysfunction, COPD – chronic obstructive pulmonary disease, UI – urinary incontinence. *Multimorbidity: presence of two or more morbidities.
We sought to examine the extent and severity of multimorbidity, for which there is no agreed classification, by analyzing the composition and the number of diseases in individuals with multimorbidity. We treated individuals with different multimorbidity profiles differently; for example, people having stroke and depression and people having hypertension and sinusitis carry a different nature and amount of disease burden.
We coded numbers of morbidities equal to or greater than five as 5+ and analyzed the ten most common combinations of morbidities for each number of morbidities (1, 2, 3, 4, 5+). We coded the remaining combinations of morbidities as 'Other'. We examined the relationship between the EQ-5D index number (utility) [23-25] and the top ten most common compositions of morbidities and the 'Other' composition per number of morbidities. To analyze different profiles of multimorbidity clusters, we examined the composition of multimorbidity, the frequency of these compositions across the number of morbidities, and the effect of the number and composition of morbidity clusters on quality of life (a sketch of this step follows below).
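A sketch of this clustering step is shown below; the column names, including an 'eq5d' utility column, are assumptions, and the 'Other' recoding is omitted for brevity.

```python
import pandas as pd

# Rank disease combinations by frequency within each morbidity count,
# capping counts at '5+', and attach the mean EQ-5D per combination.
df = pd.read_csv("knhanes_adults.csv")
condition_cols = [c for c in df.columns if c.startswith("cond_")]

df["combo"] = df[condition_cols].apply(
    lambda row: "+".join(sorted(c for c in condition_cols if row[c] == 1)),
    axis=1)
df["n_cond"] = df[condition_cols].sum(axis=1).clip(upper=5)
df["n_cond"] = df["n_cond"].astype(str).str.replace("5", "5+", regex=False)

top10 = (df[df["combo"] != ""]
         .groupby(["n_cond", "combo"])
         .agg(freq=("eq5d", "size"), mean_eq5d=("eq5d", "mean"))
         .sort_values("freq", ascending=False)
         .groupby(level="n_cond").head(10))   # ten most common per count
print(top10)
```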
Descriptive statistics
The prevalence of multimorbidity increased from 19.2% in 2007 to 23.7% in 2016 (Table 1). Among the morbidities included in our study, cancer, diabetes, thyroid disease, depression, vision problems, hypertension, dyslipidemia, MI or angina, hepatitis, arthritis, osteoporosis, backache, sinusitis, and rhinitis showed statistically significantly different annual trends over the ten years of the study period (Table 1).
The relationship between multimorbidity and age, sex, household income, and education
Figure 1 shows the distribution of multimorbidity across socioeconomic strata by number of morbidities. The number of morbidities increased with age (Figure 1, Panel A). The mean age of healthy people without any morbidity was 39.9 years (95% CI = 39.7, 40.2), while the mean age of people having four morbidities was 65.2 years (95% CI = 64.4, 66.0) (Figure 1, Panel A). Approximately 21.1% of females (95% CI = 20.6, 21.7) and 13.3% of males (95% CI = 12.7, 13.8) had multimorbidity (Figure 1, Panel B).
The relationship between multimorbidity and consequences
We found a statistically significant relationship between multimorbidity and quality of life. The odds of being sick for people with multimorbidity increased by 112% (OR: 2.121, 95% CI = 1.923, 2.339) compared to people without multimorbidity, after adjusting for other covariates (Table 3). For people with multimorbidity, the predicted health status was lower by approximately 0.49 points (95% CI = -0.517, -0.469) and the EQ-5D index number was lower by approximately 0.06 points (95% CI = -0.062, -0.054) than for people without multimorbidity (Table 3).

Figure 1. Disparities of multimorbidity across socioeconomic strata.

Figure 2 shows the relationship between different profiles of multimorbidity, by composition of conditions and frequency, and HRQoL score as measured by EQ-5D. The more conditions individuals had, the lower their HRQoL (Figure 2). The mean EQ-5D was 0.97 for healthy individuals, who accounted for 53% of the study sample, whereas it was 0.75 for individuals with multimorbidity living with five or more conditions, who accounted for 2% of the study sample (Figure 2).
There was a wide range of HRQoL scores among individuals with multimorbidity depending on the composition of morbidities. Among individuals with multimorbidity living with two conditions, HRQoL was the highest for individuals living with diabetes and dyslipidemia, with an EQ-5D score of 0.95, and the lowest for individuals living with hypertension and stroke, with an EQ-5D score of 0.80.
Among individuals with multimorbidity living with three conditions, EQ-5D scores ranged from 0.73 for individuals living with arthritis, backache, and hypertension to 0.85 for those living with dyslipidemia, hypertension and vision problems (Figure 2).
The most common combinations of conditions for individuals with multimorbidity were arthritis and hypertension among individuals with two chronic conditions; diabetes, dyslipidemia and hypertension among those with three conditions; arthritis, diabetes, dyslipidemia and hypertension among those with four conditions, and; arthritis, dyslipidemia, hypertension, osteoporosis and vision problems among those with five or more conditions.
DISCUSSION
In this study, we sought to advance the understanding of the level and distribution of multimorbidity among different population groups, stratified by age, sex and socioeconomic status, and provide new empirical evidence at the population level based on an analysis of pooled cross-sectional data that used nationally-representative surveys undertaken each year over a 10-year period.
We analyzed the evolution of the prevalence and distribution of multimorbidity across age, sex and socioeconomic status. We found an increasing trend of multimorbidity and substantial disparities in multimorbidity across age, sex, household income, and education level.
We also analyzed the relationship between multimorbidity and access to health care services, health care utilization and quality of life. Our findings show statistically significant negative effects of multimorbidity on access to health care services and quality of life.
An earlier study based on KNHANES showed that multimorbidity lowered EQ-5D scores [26]. This study extends earlier studies by providing evidence on different patterns of multimorbidity clusters and their effects on HRQoL. We found a wide spectrum of HRQoL among multimorbid individuals depending on the number of conditions and the composition of conditions.
As per earlier studies, the results of our study showed that the odds of multimorbidity increase for older people, females, individuals with low-income, and individuals with low level of education [3,4,6,18,20]. The number of co-occurring conditions increased with age and for females but decreased with an increase in income and education level.
Although people with multimorbidity were more likely to use outpatient and inpatient services, they were more likely to have unmet need for health care services. The findings suggest suboptimal management of multimorbidity despite high utilization of health care services and high levels of out-of-pocket costs incurred. People with low socioeconomic status were more likely to have multimorbidity, and experience higher risk of financial burden as a result of multimorbidity [27].
The current definition of multimorbidity is the presence of multiple coexisting diseases within a person, measured by counting the number of diseases an individual has [7,16,28]. Based on this uni-dimensional definition, earlier studies have focused on the relationship between the presence of multimorbidity and its impact on health outcomes [1,7,10,11,29,30]. However, these studies have not explored different profiles of multimorbidity due to different combinations of diseases and how these combinations lead to different multimorbidity clusters. Clinical decisions for multimorbid patients are complex and challenging [13,31]; therefore, understanding different profiles of multimorbidity is essential for managing multimorbidity more effectively and efficiently in health systems. However, little attention has been given to understanding the features of different profiles of multimorbidity [13]. For this reason, we identified and visually presented different profiles of multimorbidity by common combinations of conditions and the number of conditions. We found multimorbidity to be heterogeneous in many ways, including the number of conditions, the composition of conditions, the frequency of conditions and the extent of severity as measured by HRQoL. Hypertension was one of the most common conditions in multimorbidity. The number of conditions, as well as the composition of conditions, affected HRQoL. Multimorbidity with stroke, myocardial infarction or arthritis impacted quality of life most negatively (Figure 2).
When developing clinical guidelines to manage patients with multimorbidity, one should consider the common conditions that lead to multimorbidity and the way these conditions cluster. As in a previous study conducted in the elderly [18], hypertension was the most common single condition in patients with multimorbidity. The combination of hypertension and arthritis was the most frequent pairing among individuals with two conditions; the combination of hypertension, diabetes, and dyslipidemia was the most frequent mix among those with three conditions; and the combination of hypertension, arthritis, diabetes, and dyslipidemia was the most frequent mix among those with four conditions.
To effectively manage multimorbidity, policy makers should develop targeted policies that take into account the frequency and mix of conditions that lead to multimorbidity and the different multimorbidity clusters, which have varied effects on utilization of health services and quality of life [29,32,33]. However, in practice, effective management of health systems, health care utilization and outcomes for patients with multimorbidity, who are frequent users of health care services, is a challenging task, as health systems are designed to manage single diseases [10,14]. Therefore, priority setting and system design should consider varying multimorbidity profiles as well as the disparities among patients with multimorbidity in different socioeconomic strata in relation to access, utilization and outcomes, for example by introducing early interventions for low-income households, including medical aid programs [34], conditional cash transfers, and food and nutrition assistance [35].
Limitations
Our study has several limitations and strengths. We used self-reported national survey data, which are prone to potential recall bias and selection bias. However, self-reported survey data more accurately reflect the presence of multimorbidity because they are more likely to capture symptoms of chronic conditions compared to electronic health records, which might be incomplete [36]. Another limitation is that the list of conditions included in the survey was not the same throughout our study period. To minimize potential bias, we pooled 10-year longitudinal national survey data and used a 10-year pooled sample weight to estimate the prevalence of multimorbidity. We sought to include all available conditions after reviewing the list of diseases included in other multimorbidity studies [15]. We also provided an annual prevalence of multimorbidity based on each year's sample weight to compare annual differences. While most studies use single-year cross-sectional data, we used nationally representative survey data that produced a 10-year longitudinal data set, enabling us to examine multimorbidity patterns at the population level.
CONCLUSIONS
Multimorbidity is increasing in high-income countries. Multimorbidity negatively affects unmet need, health care utilization, and quality of life, and it affects lower socioeconomic income population groups disproportionately, with widening disparities in prevalence, health care service utilization, HRQoL and level of financial burden over time. The composition, frequency, and extent of multimorbidity vary widely among different age groups and socioeconomic strata. Varied combinations of conditions lead to different multimorbidity profiles, and the effects of these different multimorbidity clusters on health care utilization, HRQoL and level of financial burden vary significantly. Clinical decision-making for multimorbid patients is complex and challenging because health systems are designed to manage single-morbid patients. Future research is needed to develop integrated care strategies to target population groups with different profiles of morbidities to ensure effective management and prevention of multimorbidity and its consequences on health outcomes, health-related quality of life and financial burden on individuals.
Clinical characteristics and cerebro-spinal fluid cytokine changes in patients with acquired immunodeficiency syndrome and central nervous system infection
Clinical characteristics and the cerebro-spinal fluid (CSF) cytokine changes in acquired immunodeficiency syndrome (AIDS) patients with tuberculous meningitis and cryptococcal meningitis in central nervous system (CNS) infections before and after treatment were investigated. The clinical records of 80 AIDS patients with CNS infections and 40 non-CNS infection patients hospitalized in the Infection Department of the First Hospital of Changsha from February 2013 to March 2016 were retrospectively analyzed. Forty-one cases of AIDS complicated with tuberculous meningitis were enrolled as group A, 39 cases of AIDS complicated with cryptococcal meningitis as group B, and 40 cases of non-CNS infection with lumbar puncture indication as group C. The general data, clinical symptoms, CSF examination and prognosis of the three groups of patients were collected. Of the 80 patients, 56 patients were discharged from hospital (improvement group) and 24 died (death group) after treatment. The concentrations of interferon-γ (IFN-γ), interleukin-6 (IL-6), interleukin-10 (IL-10) and tumor necrosis factor-α (TNF-α) in CSF were detected by enzyme-linked immunosorbent assay. There were significant differences in clinical manifestations, CSF pressure, CSF leucocyte count, CSF glucose, CSF chloride and CSF protein between group A, group B and group C (P<0.05). The concentrations of IFN-γ, IL-6, IL-10 and TNF-α in CSF of group A and group B increased significantly compared with group C (P<0.001). The IL-6, IL-10 and TNF-α levels in CSF in the improvement group were significantly lower than those in the death group (P<0.001), while the concentration of IFN-γ increased significantly (P<0.001). CSF biochemistry is characterized by increased pressure, leucocyte count and protein, and decreased chloride and glucose. IFN-γ, IL-6, IL-10 and TNF-α in CSF have certain predictive value for poor prognosis of AIDS patients with CNS infection.
Introduction
Acquired immunodeficiency syndrome (AIDS) is a serious immunodeficiency disease caused by infection with human immunodeficiency virus (HIV) (1). HIV infection is concentrated among people with multiple sexual partners, recipients of multiple blood transfusions, men who have sex with men, and people who inject drugs. Sexual transmission is the main route of HIV transmission, while other routes are mother-to-child transmission, blood product transfusion, organ transplantation and needle sharing (2). At present, the number of AIDS patients worldwide has reached 37 million, and the incidence has increased year by year (3). AIDS is mainly manifested by fatigue, fever and other clinical symptoms, with the characteristics of slow onset and a high fatality rate. It mainly invades the immune system of patients, causing serious damage to their immune function (4). AIDS can gradually progress to secondary infections with various pathogenic organisms. In clinical practice, central nervous system (CNS) infections are common in AIDS patients (5).
The CNS of the normal human body can resist the invasion of various pathogens, but AIDS patients have impaired immune function and decreased resistance, so the brain and spinal cord are easily infected by various pathogens, leading to CNS infection (6,7). Opportunistic CNS infection is the most common complication in patients with advanced AIDS (8). CNS infections usually present as encephalitis, caused by pathogens invading the CNS, or as meningitis involving the spinal or cerebral meninges. The most common CNS infection diseases in AIDS patients are tuberculous meningitis and cryptococcal meningitis (9,10). Tuberculous meningitis is a non-suppurative inflammation of the CNS, mostly caused by invasion of the tubercle bacillus into the ependyma, meninges and subarachnoid space. Cryptococcal meningitis is a chronic inflammatory disease with chronic or subacute CNS infection by Cryptococcus neoformans. AIDS complicated with tuberculous meningitis or cryptococcal meningitis is a main cause of death (11,12). There is a close relationship between the human immune system and the nervous system, and when CNS infection occurs, the levels of various cytokines in the body are abnormally expressed (13).
At present, there is no report on the clinical characteristics and cerebro-spinal fluid (CSF) cytokine changes in AIDS patients with tuberculous meningitis and cryptococcal meningitis. The aim of this study was to provide a feasible method for the early diagnosis and prognosis of AIDS patients with CNS infectious diseases by observing the clinical symptoms of AIDS patients with tuberculous meningitis and cryptococcal meningitis and the significance of cytokines in CSF. This study was approved by the Ethics Committee of The First Hospital of Changsha. All subjects were informed, agreed to participate in the clinical study, and signed a complete informed consent form.
Materials and methods
Inclusion and exclusion criteria. Inclusion criteria were: in line with the AIDS diagnostic criteria of the US Centers for Disease Control and Prevention (CDC) 2015 (14), enzyme linked immunosorbent assay (ELISA) and western blot confirmed HIV antibody as positive; the clinical symptoms were headache, fever, nausea, consciousness disorder and meningeal irritation; tuberculous meningitis patients with acute and subacute clinical symptoms, and mycobacterium tuberculosis detected by CSF smear; patients diagnosed with cryptococcal meningitis by fungal ink staining, fungal culture, urease test and imaging examination; patients receiving no anti-tuberculosis, anti-cryptococcus neoformans and highly active anti-retroviral therapy (HAART) in the past. Exclusion criteria were: patients complicated with deep fungal infections such as candidiasis, histoplasmosis and penicilliosis marneffei; patients with severe liver, kidney and hematopoietic dysfunction; patients with mental illness or a family history of mental illness.
Research methods. The general data, clinical symptoms, CSF examination and prognosis of 3 groups of patients were collected. The CSF biochemical indexes (including pressure, leucocyte count, glucose, chloride, protein) were detected within 1 day after admission, and the death of patients during admission was recorded. The clinical data, treatment and prognosis of the patients, as well as the follow-up results were summarized.
Treatment outcome. Patients in group A were given anti-tuberculosis treatment with the 2HRZE/4HR regimen, whereas patients in group B were treated with 1,200 mg/day oral fluconazole for 15 days, followed by 400 mg/day for 45 days and 200 mg/day for life. Patients in both groups received HAART from the 3rd week of treatment. If the patients were able to tolerate anti-infection and anti-retroviral treatment, HAART was continued; if not, HAART was terminated and other symptomatic treatment was given. The judgment criteria for improvement were the absence of meningeal irritation signs, focal neurological signs and consciousness disorder. Of the 80 patients, 56 improved and were discharged, and 24 died after treatment, a fatality rate of 30.00%. The 56 patients who improved were considered the improvement group and the 24 patients who died the death group.
Sample collection and detection. CSF (5 ml) was obtained from each patient in the three groups by lumbar puncture and centrifuged (Hunan Hengnuo Instrument Equipment Co., Ltd., Changsha, China) at 1,500 x g and 4˚C for 10 min; the separated supernatant was stored in a refrigerator at -20˚C (Shanghai Coolingway Biotechnology Co., Ltd., Shanghai, China) for later use. The concentrations of IFN-γ, IL-6, IL-10 and TNF-α in CSF were detected by ELISA with reference to the instructions of the human IFN-γ, IL-6, IL-10 and TNF-α ELISA kits [Abcam (Shanghai) Trading Co., Ltd., Shanghai, China]. For detection, sample wells, standard wells, and negative and positive control wells were set up. Standard solution (100 µl), test samples, and negative and positive control solutions were pipetted into the reaction wells, and 100 µl of the reaction antibody solution was added quickly; the plate was covered with film, mixed well and incubated for 40 min. Then, 100 µl of streptavidin was added to each reaction well, covered with film, mixed evenly and left to stand for 40 min. The liquid in the reaction wells was poured out, washing liquid was added to each well, shaken slightly for 1 min and discarded; this washing process was repeated five times. Substrate reaction solution A (100 µl) and reaction solution B were added into each reaction well, covered with film, and kept in the dark for 5 min. Subsequently, 100 µl of stop solution was added to each well and the OD value of each well was immediately read at 450 nm using an ELISA analyzer (Shenzhen Sinothinker Technology Co., Ltd., Shenzhen, China) to calculate the concentrations of IFN-γ, IL-6, IL-10 and TNF-α.
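The kit instructions above end at reading OD values; converting OD to concentration is done against the standard curve. Below is a minimal sketch of that conversion using log-log interpolation; the standard concentrations and OD readings are invented for illustration, not values from this study.

```python
import numpy as np

# Convert ELISA OD readings to concentrations via a standard curve.
# Standards and ODs below are invented placeholder values.
std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125, 250, 500])  # pg/ml
std_od = np.array([0.08, 0.15, 0.28, 0.52, 0.95, 1.70, 2.90])

def od_to_conc(od):
    """Interpolate sample OD on the log-log standard curve."""
    return np.exp(np.interp(np.log(od), np.log(std_od), np.log(std_conc)))

samples_od = np.array([0.42, 1.10, 2.05])
print(od_to_conc(samples_od))   # estimated pg/ml for each sample
```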
Statistical analysis. SPSS 19.0 (IBM Corp., Armonk, NY, USA) was used for statistical analysis, and GraphPad Prism 7 was used to plot data images. Measurement data are expressed as mean ± standard deviation (mean ± SD), and the independent-samples t-test was used to compare measurements between groups. Count data are expressed as case number/percentage [n (%)], and the Chi-square test was used to compare count data between groups. One-way analysis of variance was used for comparisons between the mean values of multiple groups, with Dunnett's t-test used for subsequent pairwise comparisons. Receiver operating characteristic (ROC) curves were established, the area under the ROC curve (AUC) for the IFN-γ, IL-6, IL-10 and TNF-α concentrations in CSF was determined, and the sensitivity and specificity at the diagnostic cut-off were calculated. P<0.05 was considered to indicate a statistically significant difference.
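For illustration, a minimal sketch of the ROC step follows; the group sizes mirror the study, but the cytokine values are simulated, not patient data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic example: AUC plus sensitivity/specificity at the Youden-optimal
# cut-off for one cytokine (values invented; 24 deaths vs. 56 improved).
rng = np.random.default_rng(1)
died = np.r_[np.ones(24), np.zeros(56)]
il6 = np.r_[rng.normal(60, 15, 24), rng.normal(40, 15, 56)]  # fake pg/ml

fpr, tpr, cuts = roc_curve(died, il6)
best = np.argmax(tpr - fpr)             # Youden index J = sens + spec - 1
print(f"AUC = {roc_auc_score(died, il6):.3f}")
print(f"cut-off = {cuts[best]:.1f}, sensitivity = {tpr[best]:.2f}, "
      f"specificity = {1 - fpr[best]:.2f}")
```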
Results
General data. There was no significant difference in sex, age and CSF appearance between group A, group B and group C (P>0.05). By contrast, there were significant differences in clinical manifestations, CSF pressure, CSF leucocyte count, CSF glucose, CSF chloride and CSF protein between the three groups (P<0.05). In group A, 36 cases (87.80%) had CSF pressure ≥180 mmH₂O (Table II and Fig. 1).
Concentration of IFN-γ, IL-6, IL-10 and TNF-α in CSF of patients in the improvement group and the death group. The concentrations of IL-6, IL-10 and TNF-α in CSF of patients in the improvement group were significantly lower than those in the death group, while the concentration of IFN-γ was significantly higher (Table IV and Fig. 3).

Table I. Baseline data of patients in the three groups [n (%)]/(mean ± SD).
Discussion
HIV is a retrovirus and the pathogen of AIDS, and it is both neurotropic and lymphotropic. HIV invades T lymphocytes and multiplies in CD4+ helper lymphocytes, resulting in their progressive depletion and, in turn, serious damage to the immune function of the organism, which may allow opportunistic infections (15,16). HIV can simultaneously infect B lymphocytes, bone marrow stem cells and mononuclear phagocytes. The CNS is an immunologically privileged site, and tuberculosis and cryptococcosis are common opportunistic infections in AIDS (17). Once immune deficiency develops, the integrity of the blood-brain barrier is destroyed by HIV, which facilitates the intracranial spread of Mycobacterium tuberculosis and Cryptococcus neoformans, resulting in CNS infection. CSF undergoes corresponding pathological changes when CNS infection occurs (20). Price et al (21) pointed out that the clinical diagnosis and management of HIV-related CNS infection is difficult, while changes in CSF biomarkers can provide an objective and valuable evaluation method. The results of this study showed that patients in both group A and group B presented with clinical manifestations of meningitis such as headache, fever, nausea, vomiting and consciousness disorder. In group A, 87.80% of patients had intracranial pressure ≥180 mmH₂O, 80.49% had a leucocyte count >8×10⁶/l, 58.54% had glucose <2.8 mmol/l, 70.73% had chloride <120 mmol/l, and 80.49% had elevated protein; the corresponding rates for these five CSF biochemical components in group B were 84.62, 64.10, 66.67, 69.23 and 79.49%, respectively. The significant changes in CSF biochemical indexes after CNS infection may be caused by infection with Mycobacterium tuberculosis and Cryptococcus neoformans increasing the permeability of the choroid plexus capillaries and meninges, leading to increased protein and intracranial pressure and decreased glucose and chloride. Graybill et al (22) pointed out that increased intracranial pressure and decreased glucose content were the main reasons for poor prognosis. Therefore, by observing the clinical symptoms of AIDS patients with CNS infection and the changes in CSF biochemical indexes, timely symptomatic drug treatment can be given.
Infections with Mycobacterium tuberculosis, viruses and fungi can induce a T-helper 1 (Th1)-mediated cellular immune response in humans, and cellular immunity plays an important role in resisting pathogen infection (23). HIV infection is a disorder of immune function characterized by the reduction of CD4+ T cells, imbalance of cytokines and polyclonal cell activation, and cytokines play an important role in balancing and maintaining the immune response (24). IFN-γ, IL-6, IL-10, TNF-α and other cytokines are secreted by activated Th1 cells, Th2 cells, B cells and other cells, and mediate humoral immune responses (25). In a study by Chakrabarti et al (26), the levels of the inflammatory cytokines and chemokines IL-6, IL-8/CXCL8, IP-10/CXCL10 and TNF-α in patients with AIDS and Mycobacterium tuberculosis co-infection increased, and soluble IL-2 receptors were released after activation of the patients' CD4+ T cells; these inflammatory cytokines and chemokines had very important effects on the development of the disease. The results of this study showed that the concentrations of IFN-γ, IL-6, IL-10 and TNF-α in CSF of patients in groups A and B were significantly higher than those in group C, suggesting their involvement in the inflammatory reaction and immune response of AIDS complicated with tuberculous meningitis and cryptococcal meningitis, which is consistent with previous studies. Clinically, modulating cytokine secretion in HIV-infected patients to help rebuild impaired immune function and correct the immune imbalance is considered an important strategy for the treatment of AIDS (27). A study by Worsley et al (28) showed that the severity of HIV disease was manifested through the reduction of CD4+ T cells and the occurrence of opportunistic infections; the levels of IL-10 mRNA and TNF-α mRNA increased with the aggravation of the disease, and the decrease of IFN-γ mRNA was one of the reasons for the deterioration of HIV disease; with increasing virus replication, the levels of TNF-α, IL-4 and IL-10 increased and IFN-γ decreased, making children vulnerable to HIV-related opportunistic infections. In our study, the levels of IL-6, IL-10 and TNF-α in CSF of patients in the improvement group were significantly lower than those in the death group, while the level of IFN-γ was significantly higher, indicating that these cytokines may be involved in the development of CNS infection in AIDS patients and related to poor prognosis. The ROC curves of IFN-γ, IL-6, IL-10 and TNF-α for the diagnosis of AIDS patients with CNS infection were further evaluated, and the results indicated a certain diagnostic value. Therefore, detecting the concentrations of IL-6, IL-10 and TNF-α in CSF of AIDS patients with CNS infection has predictive value for poor prognosis. In this study, the subjects were screened strictly according to the inclusion and exclusion criteria, and sample collection and cytokine detection followed the same methodology, eliminating differences caused by experimental methods and ensuring the rigor and reliability of this study.

Table III. Comparison of IFN-γ, IL-6, IL-10 and TNF-α concentrations in CSF of patients between the improvement group and the death group (mean ± SD).
Among CNS infections, the expression levels of cytokines may differ in AIDS patients with different severities of infection, and the network formed by cytokines is extremely complex, with mutual regulation and interaction (29). The regulatory mechanism of cytokines in AIDS complicated with CNS infection was not examined in this study. Future studies should expand the sample size and stratify patients by severity of infection, disease course and treatment.
Collectively, AIDS patients with CNS infections such as tuberculous meningitis and cryptococcal meningitis mainly present with headache, fever, nausea, vomiting, and consciousness disorder. CSF biochemistry is characterized by increased pressure, leucocyte count and protein, and decreased chloride and glucose. IFN-γ, IL-6, IL-10 and TNF-α in CSF have certain predictive value for poor prognosis of AIDS patients with CNS infection.
Diagnostic Accuracy of a Rapid SARS-CoV-2 Antigen Test Among People Experiencing Homelessness: A Prospective Cohort and Implementation Study
Introduction Detection strategies in vulnerable populations such as people experiencing homelessness (PEH) need to be explored to promptly recognize severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) outbreaks. This study investigated the diagnostic accuracy of a rapid SARS-CoV-2 Ag test in PEH during two pandemic waves compared with gold standard real-time multiplex reverse transcription polymerase chain reaction (rtRT-PCR). Methods All PEH ≥ 18 years requesting residence at the available shelters in Verona, Italy, across two cold-weather emergency periods (November 2020–May 2021 and December 2021–April 2022) were prospectively screened for SARS-CoV-2 infection by means of a naso-pharyngeal swab. A lateral flow immunochromatographic assay (Biocredit® COVID-19 Ag) was used as antigen-detecting rapid diagnostic test (Ag-RDT). The rtRT-PCR was performed with the Allplex™ SARS-CoV-2 assay kit (Seegene). Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated as measures of diagnostic accuracy. Results Overall, 503 participants were enrolled during the two intervention periods for a total of 732 paired swabs collected: 541 swabs in the first period and 191 in the second. No significant differences in demographic and infection-related characteristics were observed in tested subjects in the study periods, except for the rate of previous infection (0.8% versus 8%; p < 0.001) and vaccination (6% versus 73%; p < 0.001). The prevalence of SARS-CoV-2 in the cohort was 8% (58/732 swabs positive with rtRT-PCR). Seventeen swabs were collected from symptomatic patients (7%). Among them, the concordance between rtRT-PCR and Ag-RDT was 100%, with 7 (41.2%) positive and 10 negative pairs. The overall sensitivity of the Ag-RDT was 63.8% (95% CI 60.3–67.3) and specificity was 99.8% (95% CI 99.6–100). PPV and NPV were 97.5% and 96.8%, respectively. Sensitivity and specificity did not change substantially across the two periods (65.1% and 99.8% in 2020–2021 vs. 60% and 100% in 2021–2022). Conclusions A periodic Ag-RDT-based screening approach for PEH at the point of care could guide preventive measures, including prompt isolation, without referral to hospital-based laboratories for molecular test confirmation in case of positive detection, even in individuals asymptomatic for COVID-19. This could help reduce the risk of outbreaks in shelter facilities.
Key Summary Points
Few studies have been conducted on the implementation of antigen-detecting rapid diagnostic tests (Ag-RDTs) in congregated homeless shelters.
This study assessed the performance of a COVID-19 Ag test as a screening tool for SARS-CoV-2 infection in people experiencing homelessness (PEH) to guide shelter access during cold-weather emergency response plan.
The adoption of an Ag-RDT in this study was not able to exclude SARS-CoV-2 infection.
Given the high specificity, the implementation of this test in PEH requesting a shelter bed can provide timely information to confirm the infection in case of a positive result, even in the absence of symptoms.
A periodic Ag-RDT-based screening approach at point of care could help control the spread of SARS-CoV-2 infection in PEH, thus reducing the risk of outbreaks in shelter facilities.
INTRODUCTION
People experiencing homelessness (PEH) are particularly exposed to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection [1]. This is due to inadequate and overcrowded living conditions in homeless shelters as well as poor sanitary conditions and limited use of face masks. Moreover, barriers to timely access to healthcare services and high prevalence of comorbidities compared to the general population increase the risk of developing severe coronavirus disease 2019 (COVID-19) [2][3][4]. SARS-CoV-2 infection prevalence in homeless shelters was reported to be up to 67% in San Francisco, California, USA, while in Italy it was slightly above 8%. In both countries a large proportion of asymptomatic cases was observed [5,6].
Although nucleic acid amplification tests (NAATs)-such as real-time multiplex reverse transcription polymerase chain reaction (rtRT-PCR)-remain the gold standard for the diagnosis of SARS-CoV-2 infection, they are expensive and time-consuming and require specialized personnel and equipment [7]. Antigen-detecting rapid diagnostic tests (Ag-RDTs) in nasal swabs represent a quick, easy to use, and less expensive alternative for the diagnosis of SARS-CoV-2 infection at point of care [8]. In particular, Ag-RDTs can help reduce further transmission by providing a timely result with rapid isolation and contact tracing [9]. However, as for several other Ag-RDTs, there have been many concerns regarding their sensitivity and high rates of false-positive results (especially in settings with lower prevalence rates and thus low pre-test probability) [9,10]. The European Centre for Disease Prevention and Control agrees with the World Health Organization minimum performance criteria of ≥ 80% sensitivity and ≥ 97% specificity. Ag-RDTs with a higher specificity (> 98%) are preferable for first-line testing to reduce false-positive results [11,12]. The performance of the 400 commercially available Ag-RDTs [13] varies in different settings and according to the intended use of the test (diagnosis, screening, surveillance).
Diagnostic accuracy studies and detection strategies in vulnerable, hard-to-reach populations such as PEH need to be explored to promptly recognize outbreaks and avoid further viral spread.
The aim of the study was to investigate the diagnostic accuracy of a rapid SARS-CoV-2 antigen test used as a screening tool in PEH during two pandemic waves compared with gold standard rtRT-PCR.
METHODS

Study Population, Setting and Procedures
This study is part of a well-established SARS-CoV-2 surveillance program promoted by the Municipality of Verona and the ORCHESTRA project together with the University Hospital of Verona, which targeted key populations to ensure the rapid implementation of public health strategies to contain the spread of SARS-CoV-2. The study discussed in this article was performed in the subgroup of the homeless. During two periods, from 16 November 2020 to 30 May 2021 and subsequently from 30 December 2021 to 20 April 2022, all PEH ≥ 18 years requesting residence at the available shelters (cold-weather emergency response plan) in Verona, Italy, were prospectively screened for SARS-CoV-2 infection regardless of the presence of symptoms. After obtaining written informed consent, two nasopharyngeal swabs (NPS) were collected from each PEH by professional medical staff trained in NPS techniques, according to the manufacturer's recommendations. One NPS was collected to perform the Ag-RDT immediately at point of care, and the other was delivered to the Microbiology Unit of the University Hospital of Verona to perform the rtRT-PCR assay. The following data were self-reported by each participant: demographic characteristics (age, sex, nationality); previous SARS-CoV-2 infection and/or vaccination; presence of current or previous (within 2 weeks) COVID-19 symptoms and their type: general body malaise, difficulty breathing, headache, sore throat, runny nose, cough, loss of smell or taste, nausea/vomiting, diarrhea, and body aches.
The purpose of administering the Ag-RDT was to screen PEH requesting the assignment of a shelter bed at the access point in Verona. The screening was included in a package of services provided by the Municipality of Verona and Diocesan Caritas, an organisation of the Italian Bishops' Conference engaged in many welfare activities including assistance to the homeless. All subjects who tested positive on the Ag-RDT were transferred to dedicated isolation centers arranged by the Municipality of Verona. The staff was responsible for communication with the shelter coordination team in order to implement the official procedures for isolation, protection measures, and contact tracing. The Ag-RDT (index test) result was confirmed using the rtRT-PCR, considered the clinical reference (comparator) assay for the diagnosis of SARS-CoV-2 infection. The personnel who processed and performed the rtRT-PCR were not aware of the result (either positive or negative) of the Ag-RDT.
The STARD (standard for reporting of diagnostic accuracy studies) statement was adopted as a guideline for study design and reporting [14].
SARS-CoV-2 Rapid Antigen Test
Biocredit® COVID-19 Ag was used as the Ag-RDT. This is a lateral flow immunochromatographic assay that adopts a dual-color system. The test contains a colloid gold conjugate pad and a membrane strip pre-coated with antibodies specific to SARS-CoV-2 Ag on the test lines. If SARS-CoV-2 Ag is present in the specimen, the complexes between the anti-SARS-CoV-2 conjugate and the virus are captured by specific monoclonal Ab (Ab-Ag-Ab gold conjugate complexes), and a visible black band appears on test line T. The control line (C) serves as a procedural control and should always appear if the test is performed correctly (the sample volume is correct, the membrane functioned correctly, etc.). Reading is carried out between 5 and 8 min. According to the manufacturer, Biocredit® COVID-19 Ag has been evaluated against PCR as reference in three different regions (Europe, South America, and Korea). Overall, the results showed 100% specificity and 90.2% sensitivity (sensitivity range 80-96%) [15].
Molecular Detection of SARS-CoV-2
The detection of SARS-CoV-2 was carried out using nasopharyngeal swabs collected from patients using the Copan Universal Transport Medium (UTM-RT®) System (Copan Italia Spa, Brescia, Italy). The samples were stored at 4°C and immediately processed after transport to the laboratory.
The extraction of nucleic acids from samples was carried out using a NIMBUS apparatus, and the amplification and detection of specific SARS-CoV-2 genes were carried out with the Allplex™ SARS-CoV-2 assay kit (Seegene, Seoul, Korea) following the manufacturer's instructions. This multiplex real-time RT-PCR assay detects four viral targets simultaneously, including the E, N, RdRp, and S genes, with the RdRp and S genes detected in the same fluorescence channel. A target was considered positive when Ct < 40. All samples were analyzed and interpreted by Seegene Viewer software (Seegene), and the amplification curves could also be visualized and analyzed during and after the run [16].
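As a minimal illustration of the positivity rule stated above (a target is called positive when Ct < 40), the sketch below classifies a multiplex result per target. The target names and the summary rule that any detected target makes the sample positive are illustrative assumptions, not the actual logic of the Seegene Viewer software.

```python
CT_POSITIVE_THRESHOLD = 40.0  # a target is considered positive when Ct < 40

def interpret_allplex(ct_values: dict) -> dict:
    """Classify each viral target from its Ct value.

    ct_values maps a target name (e.g. 'E', 'N', 'RdRp/S') to a Ct,
    with None meaning no amplification was observed.
    """
    calls = {t: ("positive" if ct is not None and ct < CT_POSITIVE_THRESHOLD
                 else "negative")
             for t, ct in ct_values.items()}
    # Assumed summary rule for illustration: any detected target -> positive sample.
    calls["sample"] = "positive" if "positive" in calls.values() else "negative"
    return calls

print(interpret_allplex({"E": 30.0, "N": 27.4, "RdRp/S": 30.4}))  # all targets detected
print(interpret_allplex({"E": None, "N": None, "RdRp/S": None}))  # no amplification
```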
Statistical Analysis
Means and standard deviations (SD) were calculated for continuous variables, and frequency tables and respective percentages were calculated for categorical variables. The significance of differences between the two study periods was evaluated by chi-square test or Fisher's exact test for categorical variables and t test for quantitative variables. Measures of diagnostic test accuracy were calculated according to standard definitions [17]. The PPV and NPV were calculated considering the officially reported prevalence of SARS-CoV-2 positivity in the same age group of patients in the country in the two different study periods [18]. All analyses were conducted with Stata®, version 17.0 (StataCorp LP, College Station, TX, USA).
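For reference, these are the standard definitions presumably applied here, with PPV and NPV written via Bayes' rule at an assumed prevalence p, consistent with the statement that officially reported prevalence was used:

```latex
\begin{align*}
\text{Sensitivity} &= \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP},\\[4pt]
\text{PPV}(p) &= \frac{\text{Sens}\cdot p}{\text{Sens}\cdot p + (1-\text{Spec})(1-p)},\\[4pt]
\text{NPV}(p) &= \frac{\text{Spec}\cdot (1-p)}{\text{Spec}\cdot (1-p) + (1-\text{Sens})\,p}.
\end{align*}
```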
Ethical Statement
Ethical approval for this study was obtained from the "Comitato Etico delle Province di Verona e Rovigo" (2948CESC). All procedures were in accordance with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all participants.
RESULTS
Overall, 503 participants were enrolled during the two intervention periods for a total of 732 paired swab samples collected. All PEH referred to the access point during the intervention period agreed to undergo both Ag-RDT and molecular tests. The majority (278, 55%) were from sub-Saharan and Northern Africa. The specimens included 541 swabs performed during November 2020-May 2021 (corresponding to the second pandemic wave in Italy) and 191 in the period December 2021-April 2022, during the fourth pandemic wave (Table 1a and b). The average age of subjects was 42 (SD 14) years. No significant differences were reported for subjects tested in the first intervention period compared to the second, except for the rate of previous infection (0.8% vs. 8%) and vaccination (6% vs. 73%; p < 0.001).
The mean Ct value was 30.4 (SD 5.3) for the RdRp/S viral targets, 30 (SD 5.3) for the E gene, and 27.4 (SD 5.2) for the N gene. The mean S/RdRp Ct value of Ag-RDT-positive samples was 29 (SD 4.7); for the Ag-RDT-negative samples, the mean S gene Ct value was 35 (SD 3.8). To explore differences in SARS-CoV-2 detection by the Ag-RDT at different viral loads, we assessed the sensitivity of the Ag-RDT at three Ct value ranges of the S/RdRp viral targets detected by the rtRT-PCR: ≤ 20, 21-33, and > 33 to ≤ 40. As shown in Table 3, false-negative results increased with higher Ct values, with sensitivity of 100% in the lowest Ct range and progressively lower values in the higher ranges.

[Table 3 note: NPV was calculated based on the prevalence of SARS-CoV-2 in the cohort. Abbreviations: Ct, cycle threshold; RdRp, RNA-dependent RNA polymerase; FN, false negative; TP, true positive; Sens, sensitivity; NPV, negative predictive value.]

DISCUSSION

A pilot program in San Francisco, California, offered an Ag-RDT to both residents and staff of congregate-living shelters. Following the pilot phase, the public health department of San Francisco maintained rapid testing in homeless shelters as an alternative to rtRT-PCR [20]. Overall, in our study the sensitivity of the Ag-RDT was low and did not change across the two intervention periods, while the specificity remained high. Notably, almost 98% of the tests were performed on asymptomatic individuals. The prevalence of SARS-CoV-2-positive rtRT-PCR results was around 8% in both intervention periods.
Several studies have been published on the diagnostic accuracy of Ag-RDTs at point of care, reporting an overall suboptimal sensitivity and high specificity [21][22][23][24].
Notably, these studies mainly included symptomatic individuals. Authors highlighted a clear association between sensitivity (FN rates) and sample viral load, with the Ct values being significantly lower in the Ag-RDT-positive specimens than in the negative ones. According to the intended use of the index test, in our study we mainly enrolled asymptomatic subjects, and this could partly explain the decreased sensitivity in our cohort. In accordance with other reports, our study showed that Ct values correlated with sensitivity, reaching 85.7% sensitivity for S gene Ct values < 20. Lower Ct values (< 20) have been shown to be associated with infectivity [25,26] and with a higher probability of culturing the virus [27]. Therefore, detecting subjects with a higher viral load (lower Ct values), despite being asymptomatic, could help in identifying those individuals who are at higher risk of being infectious.
The latest recommendations on the implementation of Ag-RDT testing programs [8,11,12] suggest that Ag-RDTs should be used mainly in symptomatic cases. They can be useful for testing asymptomatic individuals only when the positivity rate is ≥ 10%. Administering Ag-RDTs in a lower-prevalence setting could likely result in lower predictive values; however, high PPV rates were achieved in our cohort. On the other hand, the challenge represented by the low sensitivity could be addressed by adopting the so-called "test, re-test, re-test" strategy. This strategy, which repeats the Ag-RDT when the first result is negative, is less expensive and easier to implement than a confirmatory RT-PCR run and may reduce the probability of false-negative results [28].
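A back-of-envelope illustration of why re-testing helps, assuming (optimistically) independent errors between repeat swabs; the 63.8% sensitivity is taken from the results above, and the independence assumption is an idealisation, not a finding of this study.

```python
def residual_miss_probability(sensitivity: float, n_tests: int) -> float:
    """Probability that an infected person is missed by all n_tests
    Ag-RDTs, assuming independent test errors (an idealisation)."""
    return (1.0 - sensitivity) ** n_tests

sens = 0.638  # overall Ag-RDT sensitivity reported above
for n in (1, 2, 3):
    print(f"{n} test(s): miss probability = {residual_miss_probability(sens, n):.3f}")
# 1 test: 0.362; 2 tests: 0.131; 3 tests: 0.047 (under the independence assumption)
```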
The intervention periods of our study correspond to different COVID-19 pandemic waves. According to the epidemiological data released by the national Italian authorities, during the first study period the alpha variant was the main one in circulation, while during the second period the omicron variant was dominant [18]. Moreover, owing in part to the roll-out of the COVID-19 vaccination campaign in Italy, the rate of PEH vaccinated against SARS-CoV-2 during the second period was remarkably higher than 1 year before (6% vs. 73%). Our results show that the performance of the Ag-RDT for the diagnosis of SARS-CoV-2 infection was not affected by the viral variant or vaccination status.
The study has some limitations. As per the study design, an NPS was performed as screening for access to the shelter but was not systematically repeated during or after the stay, thus precluding any assessment of the efficacy of the "test, re-test, re-test" strategy. Furthermore, a detailed description of the population in terms of comorbidities was not provided.
CONCLUSIONS
Considering the low cost, ease of use, and turnaround time, from a public health perspective our findings suggest that Ag-RDTs can be useful for specific population screening programs, especially in high-prevalence settings or when the epidemic curve rises. This study suggests that detecting asymptomatic subjects with a higher viral load could be crucial to identify those individuals who are at higher risk of being contagious and allow for early intervention in terms of public health measures. Considering the low rate of false-positive results, a periodic Ag-RDT-based screening approach at point of care could reliably guide preventive measures, including prompt isolation without referral to hospital-based laboratories for molecular test confirmation in case of positive results. This could help control the spread of SARS-CoV-2 infection in this vulnerable population, thus reducing the risk of outbreaks in shelter facilities.
Data Availability. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Delivery of large molecular protein using flat and short microneedles prepared using focused ion beam (FIB) as a skin ablation tool
Many studies have been reported in the literature on the effects of various geometries and lengths of microneedles (MNs) on transdermal drug delivery using a variety of drug molecules. In particular, sharp-tipped MNs have been used to disrupt the top layer of the skin, namely, stratum corneum (SC). It has also been shown that short- and flat-tipped MNs can pierce the SC and they have the potential to increase drug permeability. However, there is little work that explores MNs as a skin ablative tool with a view to increasing skin permeability. To address this point, well-defined small patterns (size of individual pattern 10–20 μm) on the tip of flat MN (tip radius of individual MN ∼250 μm) were created and their effects evaluated on the permeability of bovine serum albumin (BSA), which is chosen as a model drug of high molecular weight. The patterns on the tip of flat MN act as rough surfaces (e.g. like sand paper) which when applied on the surface of the skin ablate the SC layer. Focused ion beam (FIB) has been used as the fabrication technique for the MNs. The permeability data are then compared with the other data for flat- and sharp-tipped MN. The permeability data from passive diffusion experiments are used as the reference case. The exact number of MNs or patterns in the flat and patterned MN patches is not considered as important as they have not been designed to pierce the skin. However, this is an important consideration in the case of sharp MNs as they pierce and create cavities in the skin. It is found that the delivery of BSA with the fabricated flat and patterned MNs gave similar but somewhat lower drug permeation profile in comparison to the sharp MNs. Passive diffusion showed no permeation, as would be expected due to the large size of the chosen molecule.
Introduction
Microneedles (MNs) are a transdermal drug delivery system that combines the technology of transdermal patches and hypodermic needles. There are two key types of MNs, namely, solid and hollow MNs [1,2]. The materials that have been used to fabricate the MNs range from metals [3], glass [4], silicon [5], biodegradable polymers [6,7] and silk fibroin [8]. Ideally, these materials would be pharmacologically inert, non-toxic and compatible with pharmaceutical ingredients, etc. [9]. The metals traditionally used for MN fabrication consist of stainless steel, nickel coated in gold, titanium, platinum and palladium [10,11].
MNs have been used to deliver several high-molecular-weight drugs. For example, bovine serum albumin (BSA) has been used as a model drug in several papers [12][13][14] to characterise the role of various MN geometries in drug permeation through the skin. MNs perform by disrupting the skin and thereby increasing drug permeability. MNs are rarely shorter than 150 μm in length, but a study conducted by Wei-Ze et al. [5] explored the fabrication of super-short MNs with a length of 70-80 μm. These authors compared the use of sharp-tipped super-short MNs against blunt super-short MNs and longer sharp needles of 1500 μm. Their results concluded that both sharp and blunt super-short MNs are capable of successfully delivering the Alzheimer's drug galanthamine [5].
Passive diffusion of molecules can typically occur with molecules that are less than 500 Da [15]. Therefore, in order to allow permeation of larger molecules such as BSA (approximately 66.5 kDa), the rate-limiting barrier of the skin, i.e. the stratum corneum layer, needs to be disrupted in some form. For this purpose, various skin ablation techniques involving micro-or nano-derma abrasion have been attempted [16][17][18]. However, there is little discussion on how these methods compare with MNs. Furthermore, there is little or no work that explores MNs as a tool for ablating the skin surface.
In addressing these points, this short communication examines the effect of a flat-tipped MN and a well-defined patterned MN that ablates the skin, comparing the permeation of BSA as a model drug. These permeation data are also compared to data from a sharp-tipped MN. Focused ion beam (FIB) is used as the manufacturing technique, as it can facilitate the fabrication of a well-defined geometry on the tip surface of a flat MN, which is used to ablate the skin. This is in contrast to the MNs used in the study by Wei-Ze et al. [5], where the MNs were used to pierce the skin. An attempt is made to evaluate whether having a rough surface on the tip of the flat MN has any effect on drug permeability.
There are multiple technological advances in transdermal drug delivery research that are currently being undertaken. For example, various combinational methods where microneedles have been combined with ultrasound and iontophoresis have been tried [13,19]. Another example is the use of laser-engineered dissolving MN arrays for the delivery of vaccinations [20].
FIB has been used in this work as it is a technique that allows a micron-sized shape to be prepared on the tip of the flat MN with good accuracy. This is a novel method to ascertain if applying a pattern onto the tip of a flat MN surface has an effect on permeability. The exact number of patterns on the tip of the well-defined patterned MN is not considered, as the aim of this paper is to establish whether a physical ablation technique has an effect on permeability compared to physically piercing skin.
Materials and methods
Materials

BSA and methylene blue were purchased from Sigma-Aldrich (Gillingham, Dorset, UK). A reverse phase high-performance liquid chromatography (RP-HPLC) instrument (Agilent Series 1100, Cheadle, Cheshire, UK) was used to determine BSA concentration. Two reagents, acetonitrile and trifluoroacetic acid (TFA), were used as the mobile phases for the HPLC analyses. They were obtained from Fisher Scientific UK Ltd (Loughborough, UK). A Jupiter C4 300A HPLC column (length 150 mm, internal diameter 4.6 mm) equipped with a security guard column fitted with a Widepore C4 (4 × 3.0 mm) cartridge (Phenomenex, Inc., Macclesfield, UK) was used to quantify the samples containing BSA. A manual Franz diffusion cell (FDC) (Logan Instruments Corporation, New Jersey, USA) was used to conduct the permeation studies with and without MNs. Deionised water purified using a Millipore Elix System (Billerica, MA, USA) was used for all FDC experiments. Fresh porcine full-thickness skin was purchased from the Welsh School of Pharmacy, Cardiff University, Cardiff (UK) in pre-cut 2.5-cm² sections. The porcine skin samples were prepared by first removing the porcine ear from the main body of the animal. The hairs were then carefully shaved off the ear using electric clippers to ensure the surface of the skin was not damaged. The skin was then removed off the ear cartilage with a scalpel and cut into 2.5-cm² sections. Samples were shipped via special delivery in insulated packaging on the same day of excision. The skin samples were then stored in a freezer at −22°C. Before the permeation experiments, samples were placed in a beaker of deionised water to thaw for 1 h and then patted dry with laboratory tissue. A commercially available microneedle patch with 1500-μm-long MNs was purchased from AdminPatch (Sunnyvale, CA, USA) and used to pre-treat the porcine skin. It was shown by Cheung et al. [21] that this MN does not create a cavity in the skin of the same depth as the MN length, due to the viscoelasticity of the skin, which causes smaller holes [13]. This patch has been proved to help the passage of large molecules through the skin [13,21]. This MN patch has been used in previous work to characterise BSA release in FDC experiments and therefore provides a useful reference for the flat and FIB-fabricated MNs. There are 31 individual needles on this MN patch, and each is 1400 μm in length. The main characteristics of the MNs have been reported previously by Cheung et al. and are not repeated here [21].
BSA release measurements using Franz diffusion cell
A Franz diffusion cell apparatus was used to measure BSA permeability in skin [18]. A BSA solution of 1000 μg/ml was placed into the donor chamber, and samples were extracted from the receiving compartment at time points of 15 min, 30 min and 1, 1.5, 2, 3, 4 and 5 h. The procedures for conducting the permeation experiments are similar to the work conducted by Han and Das [13] and are therefore not discussed here in detail.
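The paper reports receptor-chamber concentrations directly; for readers reproducing such experiments, a common analysis step is converting sampled concentrations into cumulative permeated amount per unit area, correcting for the analyte removed at each sampling point. The sketch below is a generic FDC calculation; the cell volume, diffusion area and sample volume are placeholder values, not the specification of the instrument used here.

```python
import numpy as np

def cumulative_permeation(conc_ug_ml, v_receptor_ml, v_sample_ml, area_cm2):
    """Cumulative amount permeated per unit area (ug/cm^2) from receptor
    concentrations measured at successive sampling times, with the usual
    correction for analyte withdrawn in earlier samples."""
    conc = np.asarray(conc_ug_ml, dtype=float)
    q = np.zeros_like(conc)
    removed = 0.0  # ug withdrawn in all previous samples
    for i, c in enumerate(conc):
        q[i] = (c * v_receptor_ml + removed) / area_cm2
        removed += c * v_sample_ml
    return q

# Placeholder geometry (not the actual cell specification):
print(cumulative_permeation([5, 20, 45, 80], v_receptor_ml=5.0,
                            v_sample_ml=0.5, area_cm2=1.77))
```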
MN insertion into porcine skin
The porcine skin in this study was not stretched when force was applied; insertion was conducted in a similar manner to the MN insertion of Cheung et al. [21]. An in-house force device was used to insert the MN; the method is outlined by Cheung et al. [21]. After an MN array was placed onto the porcine skin, the array was removed and the skin was placed onto the receptor chamber of a diffusion cell. The properties of the purchased AdminPatch® 1500 MN and of the skin are also given in Cheung et al. [21].
Method of analysing BSA concentration using HPLC
The concentration of the sample from the FDC experiment was analysed by RP-HPLC. The diode array detector (DAD) was set at 232 nm. A spectrophotometer (Shimadzu UV mini 1240) was used to determine the specific wavelength at which the light absorbance was at its maximum. A gradient method was adopted with eluent A: 0.1% TFA in water and eluent B: 0.08% TFA in acetonitrile, with a mobile phase ratio of A:B running from 95:05 to 20:80. The sample size of each injection was 10 μL. The temperature was set to 24°C. The flow rate was set to 1 ml/min. A complete RP-HPLC run took approximately 20 min with a down time of 2 min between runs. An external standard approach was used for the standardisation of the analysis of BSA. External standard samples were prepared using pure BSA purchased from Sigma-Aldrich and dissolved in deionised water purified by a Millipore Milli-Q Plus 185. Standards of concentrations ranging from 10 to 100 μg/ml BSA were prepared and analysed by HPLC to obtain the absorbance.
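The external-standard quantification described above amounts to fitting a linear calibration of detector response against known BSA concentrations and inverting it for unknowns. A minimal sketch follows; the peak-area values are invented for illustration, not measured data from this study.

```python
import numpy as np

# Known standards (ug/ml) and their measured peak areas (invented numbers)
std_conc = np.array([10, 25, 50, 75, 100], dtype=float)
peak_area = np.array([52, 131, 259, 388, 515], dtype=float)

slope, intercept = np.polyfit(std_conc, peak_area, 1)  # linear calibration fit

def quantify(area: float) -> float:
    """Invert the calibration line to estimate concentration (ug/ml)."""
    return (area - intercept) / slope

print(f"calibration: area = {slope:.3f} * conc + {intercept:.3f}")
print(f"unknown with area 300 -> {quantify(300):.1f} ug/ml")
```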
Preparation of flat and patterned MNs
As mentioned earlier, this note compares the use of flat MNs, FIB-patterned MNs that create a sandpaper-like texture, and purchased MNs (AdminPatch® 1500) to investigate the amount of BSA permeation. Flat MNs of 250 μm diameter and 250 μm length were fabricated to examine the effects of flat MNs on the concentration of bovine serum albumin (BSA) compared to a long sharp MN.
A stainless steel mounting block (H × L × W: 5 mm × 25 mm × 17 mm) with nine 2-mm-diameter drilled holes (Fig. 1a) was machined to mount 250-μm-diameter stainless steel wires upright. The steel wires were purchased from Goodfellow Cambridge Limited (Huntingdon, UK). A translucent Perspex sheet of 1 mm thickness was used to make the MN base unit and cut using a milling machine. This had a 3 × 3 array of nine holes of 0.25 mm diameter drilled with a 1.5-mm spacing (as shown in Fig. 1b). A stainless steel MN mould with a 0.25-mm-deep well drilled into it allowed the wires to be mounted at the same height (0.25 mm) (Fig. 1c). To prepare flat MNs, the stainless steel wires were threaded into the stainless steel block (Fig. 1a). Hot modelling wax was then placed inside each hole to hold the wire in place. The top surface of the block was then ground for approximately 5 min with each grade of silicon carbide paper (dampened with water), progressing through grit sizes 220, 1200 and then 2400, manufactured by Struers. This was the method used to create the flat MN, and the same starting point was used to create the FIB-patterned MN. The patterning was performed in an FEI Nova 600 Nanolab dual-beam Focused Ion Beam/Field Emission Gun Scanning Electron Microscope (FIB/FEG-SEM). Simple pattern templates were produced by designing a grid pattern containing 52 square patterns of 15 μm × 15 μm in Photoshop, which was input directly into the patterning engine of the dual beam as a bitmap image. Patterning was performed using a 30-kV Ga+ beam and a current of 20 nA, and the cutting progress was monitored using the SEM in real time.
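The grid template fed to the FIB patterning engine can equally be generated programmatically rather than in Photoshop. The sketch below builds a bitmap of 15 μm × 15 μm milled squares on a regular grid; the pixel scale, centre-to-centre pitch and 52-square layout (4 rows of 13) are illustrative assumptions, since the exact arrangement is not specified above.

```python
import numpy as np
from PIL import Image  # pillow

UM_PER_PX = 0.5               # assumed pixel scale: 0.5 um per pixel
SQUARE_UM, PITCH_UM = 15, 25  # square size; assumed centre-to-centre pitch
ROWS, COLS = 4, 13            # assumed layout giving 52 squares

sq = int(SQUARE_UM / UM_PER_PX)
pitch = int(PITCH_UM / UM_PER_PX)
canvas = np.zeros((ROWS * pitch, COLS * pitch), dtype=np.uint8)
for r in range(ROWS):
    for c in range(COLS):
        y, x = r * pitch, c * pitch
        canvas[y:y + sq, x:x + sq] = 255  # white = milled region (convention varies)

Image.fromarray(canvas).save("fib_grid_pattern.bmp")  # bitmap for the patterning engine
```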
Flat MNs were assembled by removing the ground wires from the mounting block and threading them through the 0.25-mm-diameter holes of the Perspex mounting disc (Fig. 1b), ensuring the ground end was not threaded through. Once all the wires were threaded, the MN disc was placed onto the PTFE MN mould with the ground end of the wires pushed gently into the mould to ensure all the needles were at the same length (Fig. 1c). The wires were then glued in place with Araldite and left to dry in the mould. An MN holder was then placed on top to conceal the excess wires (Fig. 1e). This was also the mounting piece that allowed a specific force to be applied to porcine skin. Figure 1e depicts the MN mould and mount with the MN contained inside. The patterned MN was mounted in a similar manner. The result in each case was a nine-MN array with a pitch of 1.5 mm. The FIB-patterned MNs were fabricated to ascertain whether patterning the flat MNs allowed more drug to permeate through porcine skin than the unpatterned flat MNs.
Characterisation of Microneedles
Surface morphology characterisation was conducted using optical microscopy, SEM and infinite focus 3D microscopy (Graz, Austria). These analysis techniques were used to determine if the surface of the MN was uniform. A Canon (Surrey, UK) microscope was used to initially visualise whether the top surfaces of the needles were ground flat. SEM images were used to image the material pre- and post-ion beam milling. 3D microscopy images were taken using the Infinite Focus instrument by Alicona (Kent, UK).
Results and discussion

Characterisation of MNs
The flat MNs were analysed using microscopy to visualise their initial profiles in order to ascertain if further grinding was required. Once the needles were sufficiently flat, they were characterised using 3D microscopy to assess the flat tip profile, as shown in Fig. 2a-c. Using successive grades of silicon carbide paper was an effective technique for obtaining flat MN structures. It can be seen that there is no burring on the sides of the needles, even when excessive force was used to grind them, because the modelling wax used to hold the wires prevented any movement. The 3D microscopy images shown in Fig. 2b, c further confirm that the silicon carbide paper produced flat-profile MNs.
The patterned MNs were initially produced and characterised in an identical manner to the flat MNs. These images are depicted in Fig. 3a-c. The flat MN was placed into the FIB chamber for micro-patterning. Figure 3a shows an SEM image of the flat MN after micro-patterning with FIB; the edges of the needle are slightly covered by residual wax. Consequently, the patterned MN was cleaned prior to any permeation experiments. 3D microscopy images, shown in Fig. 3b, c, were also taken of the patterned MNs to obtain a depth profile and to ascertain whether they were successfully patterned to the specification outlined. The 3D microscopy results showed that the pattern and the profile were consistent; the SEM images produced at the time of ablation were therefore sufficient to visualise the tips of the well-defined FIB needle. The images of the tip were taken after cleaning. SEM produced a clearer image of the FIB structure than 3D microscopy; however, 3D microscopy gives an indication of the height of the patterned MN tips, which was shown to be approximately 4 μm. Two views of the commercially available sharp microneedle patch used in this work to pre-treat the porcine skin are shown in Fig. 3d, e.
Skin permeation of BSA
The concentration of BSA in the receptor chamber was calculated over a period of 5 h (300 min) from the time when the donor solution of concentration 1000 μg/ml BSA was placed into the donor chamber. Full-thickness porcine skin was used as the membrane. Four experiments were conducted: passive diffusion (no MN insertion), skin ablation using well-defined patterns on the tip of a flat MN (FIB MN), skin ablation using the fabricated short flat MN (acting like sandpaper) and insertion of a sharp MN. Ten repeats of each experiment were conducted and the average results are presented. The results for the FIB-fabricated patterned MNs showed a BSA release profile similar to that of the flat MN, indicating that the patterned MN made little to no difference to drug permeability relative to flat MNs. The MN was placed onto the porcine skin for 3 min using the pneumatic pump. Passive diffusion was used as a control.

[Fig. 1: a stainless steel mounting block; b microneedle backing plate; c stainless microneedle mould mount; d ground surface of 250 μm stainless steel wire mounted in the stainless steel block, held by modelling wax; e MN mould and mount with the MN contained inside; f MN geometry, pitch 1.5 mm.]

[Fig. 2: a single SEM image and b, c single 3D microscopy images of 250 μm stainless steel wires ground using three grades of silicon carbide paper to make flat short MNs.]
It was observed (Fig. 4) that the concentration of BSA increased over time for all MN insertions, but no BSA was detected for passive diffusion. This is because the molecular size of BSA is too large to passively diffuse through the SC layer of the porcine skin. Similar results have been shown in the literature [22]. After 5 h, the concentration of BSA when a sharp MN was applied to porcine skin was approximately 120 μg/ml higher than with no MN insertion, whereas the concentration of BSA when a flat MN (250 μm diameter, 250 μm length) was applied was approximately 80 μg/ml higher than with no MN insertion. The patterned MN gave a concentration approximately 7 μg/ml lower than the flat MN. This is a small difference in drug release compared to the sharp MN, and the patterned and flat MNs therefore gave similar drug release profiles.
The results for the insertion of flat and sharp MNs showed a similar trend, with the concentration of BSA being approximately 40 μg/ml greater for sharp MNs than for flat MNs over 5 h. There is a greater diffusion of BSA through the porcine skin when an MN is inserted compared to no MN insertion. This is because the MN creates channels in the skin that allow a larger concentration of BSA to diffuse through the full skin thickness. These opened channels provide a more favourable diffusion environment, allowing large molecules to permeate through the porcine skin. Flat MNs show promising permeability compared to sharp MNs, whereby the skin is ablated rather than directly pierced. Similar results have been shown in the literature by Wei-Ze et al. [5]; however, they showed that super-short MNs are capable of successfully delivering galanthamine (GAL) with a higher permeation than sharp MNs. From our results, it can be observed that the flat MNs give lower concentration profiles than the chosen sharp MNs. However, the difference is not large enough to rule out flat MNs as an alternative to sharp MNs. This infers that flat MNs could be used as an alternative to conventional long sharp MNs and avoid the problems associated with pain from long MNs [23].
It has been shown in the literature by Han and Das [24] that the amount of drug release as a result of MN insertion is largely affected by the length of the MNs themselves. Therefore, the resultant drug release profile from insertion of the sharper MN compared to the insertion of the flat and well-defined MNs seems to be consistent with the observations made by Han and Das [24]. The sharp MN is nearly six times the length of the other two MNs used. As there is a slight height difference in MN length between the flat and well-defined MNs of approximately 4 μm, the question to ask is whether this is a significant enough height variation in MN length to deduce a significant difference in drug release profile. As Han and Das have illustrated that the actual MN penetration depth in the skin is not the same as the length of the MN, we have assumed that this height difference is negligible in this case.

[Fig. 4: concentration of bovine serum albumin (BSA) over a period of 300 min when a solution of BSA concentration 1000 μg/ml is applied onto full-thickness porcine skin in vitro, with no microneedle (passive diffusion) and with insertion of a flat microneedle, FIB microneedle and a long sharp microneedle for a period of 3 min. All experiments were done with ten repeats.]
Conclusion
It was shown that the delivery of BSA with fabricated flat microneedles (approximate concentration of 80 μg/ml) gave a similar drug release profile in comparison to the well-defined FIB-fabricated patterned MNs (approximate concentration of 70 μg/ml) after 5 h. The sharp MNs showed an increase in drug release in comparison to the flat MNs, but they are expected to be more painful when inserted into the skin. Passive diffusion gave no permeation, as would be expected due to the large size of the molecule. The results for the sandpaper-like MNs fabricated using FIB showed similar BSA release to the flat MNs (250 μm diameter, 250 μm length), indicating that the sandpaper MNs showed negligible difference in drug permeability relative to the flat MNs. The results show that using FIB to create a sandpaper-like texture that porates the skin is an effective approach.
Funding This work was supported by Loughborough University, UK, and the EPSRC, UK.
Conflict of interest
The authors declare that they have no competing interests.
Pediatric magnetoencephalography
Magnetoencephalography (MEG) is a technology used in pediatric and adult epilepsy that records magnetic fields produced from electric currents in the brain. MEG can locate epileptogenic zone(s), lateralize language functions, localize sensorimotor cortex, and identify visually evoked fields. It is a powerful technology with key advantages in pediatrics. The majority of its limitations are resource driven. With advancing technology, MEG will become a more prominent and valuable tool used in pediatric epilepsy and epilepsy surgery in the future. We review MEG and provide illustrative cases to showcase its usage.
Introduction
Approximately 7%-20% of all pediatric patients diagnosed with epilepsy will follow a drug-resistant course. 1 The associated morbidity, mortality, potential for developmental regression, and loss of quality of life pose major challenges for both parents and clinicians to manage. Epilepsy surgery for many patients remains the only hope of cure or of significant reduction in seizure frequency and severity. The noninvasive tests used in planning and performing epilepsy surgery help guide adequate resection of diseased tissue while sparing healthy brain. The importance of reliable and accurate technology is paramount. Magnetoencephalography (MEG) is a technology that records magnetic fields produced from electric currents in the brain. 2 Its use has grown given its ability to locate epileptogenic zones, lateralize language functions, localize sensorimotor cortex, and identify visually evoked fields. 3 This review highlights the uses of MEG through example cases in pediatric drug-resistant epilepsy and epilepsy surgery.
History and design concept
MEG was discovered in 1968 by David Cohen. MEG technology is largely adapted from a similar technology, called magnetocardiography, that detects magnetic fields in the heart. MEG samples magnetic fields using superconducting quantum interference devices (SQUIDs). 4 A SQUID is a device made up of superconducting loops containing Josephson junctions, which measure magnetic fields. 5 Initially, a single SQUID was moved across a patient's scalp to query independent regions. Later, SQUIDs were arranged in arrays to cover a larger surface area of the scalp, thereby making the technology fundamentally more user-friendly. MEG samples 3-4 cm² of cortical electric activity by placing 306 sensors across the brain. 4 It detects magnetic fields generated by intracellular currents, predominantly from dendritic cells, in contrast to electroencephalography (EEG), which detects extracellular currents predominantly from pyramidal cells. 6 MEG has been utilized in the world of pediatric epilepsy surgery for only the past three decades or so. Given the paucity of MEG machines in the world, pediatric neurologists and epileptologists may not be universally exposed to its powers.
Utility
MEG's ability to locate epileptogenic zones and eloquent cortices hinges upon its ability to solve inverse problems. 7 An inverse problem works by collecting observations and then calculating the causal factors that produced the given observations. The areas of interest in a MEG study, such as the eloquent cortex, are separated from the brain's normal background electric activity. This is accomplished through magnetic source imaging (MSI). MSI uses fiducial points to identify activated brain regions. 8 Fiducial points are reference points that are typically placed in three locations on a patient's head (left pre-auricular, right pre-auricular, and nasion). These points are used to create a map of the brain and magnetometers in space. 8 Fiducial points use lipid markers as reference points on magnetic resonance imaging (MRI). 8 The structural magnetic resonance image is then co-registered with the MEG recording data to create the magnetic source image. This ultimately provides MEG the ability to identify language centers, motor cortex, and sensory cortex.
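In the terms used above, the relationship can be written schematically: the sensor array records fields generated by unknown source currents through a known forward model, and source estimation inverts this under a regularizing constraint. This is a generic textbook (minimum-norm style) formulation, not the specific algorithm of any vendor's software:

```latex
\mathbf{b}(t) = \mathbf{L}\,\mathbf{q}(t) + \mathbf{n}(t), \qquad
\hat{\mathbf{q}}(t) = \arg\min_{\mathbf{q}}
\left\lVert \mathbf{b}(t) - \mathbf{L}\mathbf{q} \right\rVert^{2}
+ \lambda \left\lVert \mathbf{q} \right\rVert^{2}
= \left( \mathbf{L}^{\top}\mathbf{L} + \lambda \mathbf{I} \right)^{-1} \mathbf{L}^{\top} \mathbf{b}(t)
```

Here b(t) collects the (e.g., 306) sensor readings, L is the lead-field matrix mapping source amplitudes q(t) to sensor space, n(t) is sensor noise, and λ controls the regularization.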
MEG and EEG have been studied and are complementary studies. Over 50% of patients have interictal epileptiform discharges (IEDs) that are visible on both EEG and MEG. 9 Seven percent of IEDs may be visible only on EEG compared to 18% of IEDs that may be identified only by MEG. 9 The IEDs that can be seen on MEG but not EEG often arise from deep brain structures such as the frontal lobe, temporal lobe, and insula. 10 Classically, these epilepsies can be difficult to diagnose in children as the scalp EEG may be negative. The classic semiology of these seizure types is often absent as children, especially young children, struggle to report internal focal features prior to generalization, which detracts from the physician's ability to localize based on semiology.
MEG's ability to identify IEDs where EEG cannot has direct implications in planning the presurgical evaluation of potential epilepsy surgery candidates. This may be related to the superior coverage that MEG offers with its 306 sensors covering the entire brain. MEG has been shown to be superior to conventional scalp EEG and high-density EEG in localizing interictal discharges. 11 Identifying the irritative zone helps epileptologists and neurosurgeons plan surgeries with good hypotheses in order to create the best possible blueprint for obtaining meaningful information from invasive recording. MEG can be particularly helpful in cases of frontal lobe epilepsy that masquerade as a generalized epilepsy on the scalp EEG recording. 12 It also has distinct advantages in identifying seizures of deep tissue onset.
MEG is useful in identifying a single epileptogenic zone in patients who have multiple MRI lesions. 10,13 MRI-negative patients may benefit from MEG given its ability to localize clusters of discharges that may identify MRI-negative cortical dysplasias. 10,14 Studies have shown a high concordance between interictal MEG localization and localization data obtained from intracranial EEG recording. 15 MEG's power is not confined only to IEDs. Ictal MEG has been shown to have superior lateralizing and localizing power compared to scalp EEG in identifying the seizure onset zone. 16,17 MEG systems like the BabySQUID and BabyMEG are used for infants and toddlers with drug-resistant epilepsy. 18 These systems allow for improved data collection because the sensors can be positioned closer to the brain and fit more easily to young patients, which aids in ease of setup. 18
Limitations
The major limitations to the utility of MEG are resource driven. Estimates indicate that there are likely no more than 200-300 machines in the world. 19 The cost of building and operating MEG is significant and thus it may not be feasible for centers to universally possess. MEG requires a specialized room free of magnetic noise to reduce or eliminate artifacts. SQUIDs require very low temperatures to record accurately, and the cost of maintaining a MEG is high. 20 MEG sensors are not fixed to the scalp of the patient and are thus susceptible to motion artifacts. The ability to operate a MEG and analyze patients relies on a team of highly trained MEG professionals. Currently, few training programs in the country exist, thereby restricting the number of trained professionals.
Future of MEG
MEG technology continues to advance. As technology in general advances, the cost of production and distribution of complex machines decreases. MEG is no exception. Models predict that future MEGs may be five times less expensive, which would increase the ability of medical centers to purchase and operate this technology. 4 Advances are also being made to produce a mobile MEG, which would increase patient access. Optically pumped magnetometers (OPMs) show promise as a wearable alternative that is similar to MEG. OPMs are quantum sensors that measure magnetic fields through the manipulation of a quantum property called spin. 21 A particle's movement in response to a magnetic field is its spin. OPMs use a light source to measure spin as a reflection of the underlying magnetic field. A major advantage of OPMs compared to MEG is that OPMs do not rely on cryogenic cooling and thus can be placed within millimeters of the patient's scalp. 21 With such close proximity, OPMs are more sensitive to smaller magnetic fields. 21 Researchers in the MEG field continue to work to find more accurate information identifying the seizure onset zone in patients with epilepsy. Recently, a new phenomenon called the ripple onset zone (ROZ) has been shown to have a high localizing value for the seizure onset zone. 22 The ROZ characterizes the propagation of epileptic activity to better estimate the epileptogenic zone. 22 Further research is needed in this area, but preliminary studies have shown promise in epilepsy surgery planning. High-frequency oscillations (HFOs) have been shown to be superior to sharp waves in identifying the seizure onset zone in patients. 23 These HFOs have also been studied with MEG. There is early evidence to suggest that HFO mapping with MEG may be helpful for mapping epilepsy surgery, although further research is needed. 23 Studies pairing modalities such as MEG and functional MRI show promise in more accurate localization of the seizure onset zone. Finally, MEG use continues to expand not only in epilepsy but also in other diseases like dementia, autism, anxiety, and depression. [24][25][26]

Example Cases

Case 1

A 20-month-old boy presented to the pediatric tertiary care hospital with episodic independent right and left tonic arm extension and L-predominant clonic activity. These events occurred one to two times per day. His initial EEG (Figure 1) revealed midline and right frontal maximal sharp waves. His MRI revealed a large region of focal cortical dysplasia in the right superior frontal gyrus (Figure 2). The location of the dysplasia was potentially concerning as it was near the motor cortex. As the patient grew older, his seizures became drug resistant. He failed multiple medication trials, prompting presurgical evaluation. MEG was performed to better understand the epileptogenic zone and its relationship to the eloquent cortex and to determine if he would be a candidate for a proposed surgical intervention.
Case 1 highlights MEG's ability to densely localize an epileptogenic zone in addition to eloquent motor cortex in a very young pediatric patient. Given the patient's age, other diagnostic modalities such as functional MRI and transcranial magnetic stimulation would not be well tolerated. The MEG clearly localized the epileptogenic zone (Figure 3) in addition to the eloquent motor cortex, allowing plans to resect the lesion to proceed.
Case 2
A nine-year-old girl presented to the tertiary care children's hospital with new-onset seizures. The first event was generalized in semiology. An EEG performed was abnormal due to the presence of left temporal sharp waves and slowing (Figure 4). 3T MRI was normal. After the addition of levetiracetam, the patient's seizure semiology changed and she experienced frequent episodes of right head turn, right eye deviation, and right facial and hemibody clonic activity with loss of consciousness. Given the refractory nature of her seizures, surgical candidacy was explored. As a part of the evaluation, MEG was performed (Figure 5). Case 2 highlights MEG's ability to localize potentially MRI-negative areas of cortical dysplasia and receptive language. This advantage allows epileptologists to better counsel families about further options, such as surgery, and the associated risks and benefits.
Conclusion
As a diagnostic and surgical planning tool for pediatric patients with drug-resistant epilepsies, MEG has clear advantages and few limitations. As the cost of MEG decreases, it should become increasingly available in comprehensive epilepsy centers. Pediatric epilepsy specialists should be urged to familiarize themselves with this powerful tool in order to provide patients with the best care and the highest chance of achieving seizure freedom.
Group Chase and Escape of Biological Groups Based on a Visual Perception-Decision-Propulsion Model
A relatively simple and local interaction between individuals produces coordinated and ordered collective behaviors that are widespread at all levels of biological groups. Group chase and escape is an important aspect in the field of collective behavior, particularly in regard to predation events in species interactions. Compared with other aspects of collective behavior, less research has been performed on this aspect, and the existing models are constructed only from the phenomenological perspective. We present an individual-based model named Visual Perception-Decision-Propulsion to explore the group chase and escape of biological groups and define several evaluation indicators to assess different aspects of this problem. Within this model, 2 types of self-propelled individuals, i.e., predators and prey, are considered, and we include alignment and repulsion terms between homogeneous individuals. Chase and escape are described by an escape (or chase) term between heterogeneous individuals. Based on the model, we identify and distinguish between 2 capture patterns, i.e., cooperative capture and separative capture. Then, we control the internal parameters to analyze the conditions under which these 2 patterns arise, and the external empirical parameters are adjusted to explore their effect on these 2 patterns. Hence, this paper provides a novel model for group chase and escape based on biological vision to compensate for the shortcomings of classical models and to help apply the characteristics of biological groups to human-made swarm systems in the case of confrontation.
I. INTRODUCTION
Collective behavior, which is widespread at all levels of biological groups, is fascinating to most of us [1]. Related topics include shoals of fish [2]-[8], flocks of birds [9], [10], swarms of locusts [11]-[13], communities of bacteria [14], [15], groups of microtubules [16], masses of histiocytes [17], [18], streams of traffic and flows of humans [19], [20]. A relatively simple and local interaction between individuals produces coordinated and ordered collective behaviors such as these [21]. In this way, biological groups show various intelligent characteristics (distribution, self-adaption, and robustness) that cannot be achieved by a single individual in all kinds of situations. A conspicuous behavioral pattern [22] (e.g., aggregation, obstacle avoidance, group chasing and escaping [23]) is observed as a cohesive and highly coherent group. Research in this field is required to explain the complex emergent behaviors that are similar to the above patterns at the individual and group levels and to further apply the intelligent characteristics shown by biological groups to human-made swarm systems. Therefore, as an interdisciplinary subject, collective behavior modeling and mechanism exploration is a challenging area of focus.
Several models (e.g., the rule-based model, the random rotation model, and the Boids model) have been proposed to research collective behavior during the last several decades. The classical models of collective behavior are based on 3 simple behavioral rules [1]: separation (avoiding crowding local neighbors), alignment (steering toward the average heading direction of neighbors) and cohesion (moving toward the average position of neighbors), and this type of model was originally intended to demonstrate the quantitative and qualitative collective behavior observed in fish and birds. The random rotation and Boids models, which belong to this type, were presented by Aoki [24] and Reynolds [25], respectively. As a special case of the Boids model, the Vicsek model [26], proposed by a physicist, takes the velocity alignment between individuals into consideration to explore the simplest conditions for collective behavior. Later, the Couzin model [27], developed by a biologist, was widely used in theoretical biology and extended to the coordinated control of swarm robotics.
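For concreteness, a minimal sketch of the Vicsek update just described: each particle moves at constant speed and aligns its heading with the mean heading of neighbours within radius r, plus angular noise. The parameter values and the noise parametrization are illustrative choices, not the published ones.

```python
import numpy as np

def vicsek_step(pos, theta, L=10.0, r=1.0, v0=0.03, eta=0.2, rng=None):
    """One Vicsek update: align heading with neighbours within r, add noise,
    then move at constant speed v0 in a periodic box of side L."""
    rng = rng if rng is not None else np.random.default_rng()
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                      # periodic minimum-image distances
    neigh = (d ** 2).sum(axis=-1) <= r ** 2       # neighbour mask (includes self)
    # circular mean of neighbour headings
    mean_sin = (neigh * np.sin(theta)[None, :]).sum(axis=1)
    mean_cos = (neigh * np.cos(theta)[None, :]).sum(axis=1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, len(theta))
    pos = (pos + v0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)) % L
    return pos, theta

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, (100, 2))
theta = rng.uniform(-np.pi, np.pi, 100)
for _ in range(200):
    pos, theta = vicsek_step(pos, theta, rng=rng)
```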
The aforementioned models constructed from a phenomenological perspective have been basically mature in recent years. In these models, individuals follow 3 simple behavioral rules to interact with neighbors based on their positional and velocity information. However, these models narrowly focus on the understanding of decision and propulsion mechanisms while neglecting the understanding of perception mechanisms, which is a vital link between animal flocks and human-made swarm systems. Therefore, such approaches seriously hinder our understanding of the inherent complexity of collective behavior. In the field of sensory neuroscience, the recognition and research of visual perception are penetrating deeply and have made some new progress. Geisler and Albrecht [28] demonstrated that edge detection is performed in the visual cortex of higher animals. Since this discovery, numerous researchers have increasingly explored collective behavior based on visual projection from the perspective of sensory neuroscience, which is a promising direction for explaining the complex behavioral patterns shown by various biological groups. In the case of fish groups, Strandburg-Peshkin et al. [29] found that the structural features of visual interaction networks are different from those of measurement and topology by analyzing individual motion with visual perception information. Rosenthal et al. [30] revealed the nature of social contagion and provided full evidence of the feasibility of predicting complex cascades of behavioral changes by calculating the visual field of individuals. When the research objective was a group of birds, the simplest hybrid projection was presented by Pearce et al. [31] to provide a method for density control. For human crowds, Moussaïd et al. [32] reproduced the self-organization phenomena and crowd turbulence that have been observed during crowd disasters, guided by visual information. In the case of human-made swarm systems, Lavergne et al. [33] applied the principle that motility changes in individuals in response to visual perception in a real system to achieve the formation and cohesion patterns of collective behavior. Schilling et al. [34] achieved a vision-based flock based on a convolutional neural network.
Group chase and escape is an important aspect of collective behavior, particularly in predation events in species interactions. Such problems often involve 2 types of individuals: predators and prey. The objective of predators is to locate and catch the prey as quickly as possible; conversely, the objective of prey is to escape and avoid being captured. Related problems have been studied for a long time [35]; however, studies on this problem are less common than those on other aspects of collective behavior. Initially, mathematical tools (e.g., game theory and geometry) were used to study such problems [36], [37], but these methods lack relevant attributes (e.g., noise). Later, Kamimura and Ohira [38] defined a cost function to determine the optimal number of predators for a given number of prey. Angelani [23] proposed a simple individual-based model based on the Vicsek model and found two catch regimes. The model comprises 3 behavioral interactions: alignment and separation between homogeneous individuals and chase (or escape) between heterogeneous individuals. Matsumoto et al. [39] introduced new parameters to a simple group chase and escape model and classified the configurations of chasing and escaping in groups into three characteristic patterns. Yang et al. [40] introduced three aggregation strategies for predators and showed that the aggregation of predators increases the survival time of prey. Using a set of local interaction rules, Janosov et al. [41] considered time delay, external noise and limited acceleration to study the situation of predators chasing a much faster prey and showed how group chasing can significantly enhance the rate of capture.
According to the above research, more effective methods are needed in the field of collective behavior, and only a few studies consider group chase and escape. To study the internal mechanisms of group chase and escape and to assist the construction of human-made swarm systems in confrontation scenarios, we propose an individual-based model named Visual Perception-Decision-Propulsion to study the group chase and escape of biological groups. The major contributions of this paper are summarized as follows.
1) The Visual Perception-Decision-Propulsion model is proposed to explore the group chase and escape of biological groups; it focuses on the perception process, represented by edge detection, from a sensory neuroscience perspective. In addition, we clearly define several evaluation indicators used to assess the behavior of group chase and escape. 2) We show that the model effectively achieves group chase and escape and find 2 capture patterns (cooperative capture and separative capture). In cooperative capture, multiple predators maintain a relatively long distance from one another to surround prey and then approach to capture them. In contrast, in separative capture, predators are separated, and prey are captured by a single predator. Additionally, we monitor the change in the number of clusters of predators when predation occurs to distinguish these 2 patterns.
3) The internal and external parameters are controlled to analyze these 2 patterns. We observe cooperative capture when the effect of the repulsion term is moderate and smaller than that of the chase term, and it performs better with a relatively large number of predators and a large interaction range. Separative capture is achieved under the condition that the effects of the self-propulsion and alignment term and the repulsion term are balanced and the effect of the escape term is small. A relatively large number of predators is also required for rapid capture in the separative case, but either a small or a large interaction range is needed, which differs from the cooperative case. The remainder of this paper is organized as follows. In Section II, an individual-based Visual Perception-Decision-Propulsion model is presented to model the group chase and escape of biological groups. Based on this approach, in Section III, we define several evaluation indicators that are used to assess the behavior of group chase and escape. To further verify and explore our model, numerical experiments are performed, and the results are presented in Section IV. Our conclusions and a final discussion are provided in Section V.
II. THE VISUAL PERCEPTION-DECISION-PROPULSION MODEL
Group chase and escape is complex and unique in biological groups; notable patterns emerge during predation events in species interactions. The movement of individuals in biological groups involves the following 3 processes: perception, decision and propulsion. In the perception process, individuals obtain the positional and velocity information of their neighbors. The decision process is the key link between the perception and propulsion processes: based on the perceived information, individuals choose an appropriate action (e.g., accelerating, decelerating, or turning left or right) to update their velocity. The perception and propulsion processes serve as the input and output of the decision process, respectively. During the propulsion process, individuals move at the updated velocity transferred from the decision process. Therefore, to explore the mechanism of group chase and escape, an abstract and representative model is critical. In this section, an individual-based Visual Perception-Decision-Propulsion model is presented to characterize this problem. The flow of the model is given as follows.
Input: positions of individuals at the last time step, r(t − 1); velocities of individuals at the last time step, v(t − 1);
Output: positions of individuals at the current time step, r(t); velocities of individuals at the current time step, v(t);
1: The perception process: obtain the velocity information (v_j(t), j ∈ S_i^al) and the visual information (θ_i^rep and θ_i^CT), as shown in Fig. 1;
2: The decision process: calculate v_i^int(t) for each individual based on Eq. 1;
3: The propulsion process: calculate r(t) and v(t) for each individual based on Eq. 7;
4: return r(t) and v(t).
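To make this flow concrete, the following is a minimal, self-contained Python sketch of one run in the spirit of the model. All names and parameter values are our own illustrative assumptions; in particular, the repulsion and chase/escape terms here use direct unit vectors toward neighbors as a simplified stand-in for the visual-occlusion projections of the full model (a sketch of those follows in Section II-A).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters; names and values are assumptions, not the paper's.
N_C, N_T0 = 8, 4                  # predators, initial prey
RHO, R0, R_CAP = 0.12, 10.0, 1.0  # density, interaction radius, capture radius
V_C, V_T, DT, T_MAX = 0.6, 0.4, 1.0, 2000
PHI_AL, PHI_REP, PHI_CT = 0.15, 0.35, 0.50  # cooperative-capture weights
L = np.sqrt((N_C + N_T0) / RHO)   # box side length, fixed over the run

n = N_C + N_T0
r = rng.uniform(0, L, size=(n, 2))
ang = rng.uniform(0, 2 * np.pi, n)
v = np.c_[np.cos(ang), np.sin(ang)]
pred = np.arange(n) < N_C         # True for predators
alive = np.ones(n, dtype=bool)

def unit(x):
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else np.zeros_like(x)

for t in range(T_MAX):
    d = r[None, :, :] - r[:, None, :]   # d[i, j] points from i to j
    d -= L * np.round(d / L)            # minimum-image convention
    dist = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(dist, np.inf)
    new_v = v.copy()
    for i in np.flatnonzero(alive):
        near = alive & (dist[i] < R0)
        same, other = near & (pred == pred[i]), near & (pred != pred[i])
        u = d[i] / dist[i][:, None]     # unit vectors toward the others
        v_al = unit(v[i] + v[same].sum(axis=0))   # self-propulsion + alignment
        v_rep = unit(-u[same].sum(axis=0))        # avoid crowding own group
        sign = 1.0 if pred[i] else -1.0           # predators chase, prey flee
        v_ct = unit(sign * u[other].sum(axis=0))
        h = PHI_AL * v_al + PHI_REP * v_rep + PHI_CT * v_ct
        speed = V_C if pred[i] else V_T
        new_v[i] = speed * (unit(h) if np.any(h) else unit(v[i]))
    v = new_v
    r[alive] = (r[alive] + v[alive] * DT) % L
    # Capture rule: prey within R_CAP of any predator are eliminated
    caught = alive & ~pred & (dist[:, pred].min(axis=1) < R_CAP)
    alive &= ~caught
    if not (alive & ~pred).any():
        print(f"all prey captured, t_end = {t}")
        break
```

With the cooperative weights above, predators keep a mutual distance while closing in; swapping in the separative weights (0.5, 0.45, 0.05) tends to reproduce single-predator captures.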
A. PERCEPTION
We consider 2 groups of organisms: predators (or chasers, C) and prey (or targets, T). The number of predators N_c is constant in the simulation, while the number of prey N_t can decrease over time because of predation events. Individuals are described by their position r and velocity v vectors in 2 dimensions. We consider N = N_c + N_t(0) round individuals of radius BL and perform the simulation in a square box with periodic boundary conditions. The side length of this box, L, depends on the density ρ, on N_c and on the number of initial prey N_t(0), and does not change as prey are captured; that is, L = √((N_c + N_t(0))/ρ). In the perception process, individual i obtains the velocity and visual information of other individuals. The self-propulsion and alignment term uses the velocity information of individuals of the same group (including individual i) within a round range of radius r_0 surrounding individual i (Fig. 1A); that is, v_j (j ∈ S_i^al). The visual information of individuals of the same and different groups within this range (Fig. 1B and C) is used for the repulsion and escape (or chase) terms. The visual information is represented as θ_i by performing edge detection, which has been demonstrated to occur in the visual cortex of higher animals [28], [31]. However, it is difficult to reproduce the visual perception of organisms in the real world. Therefore, we calculate the edge detection of each individual as an angular interval based on the positional information and then consider the overlap of the visual information of different individuals to obtain the union of these intervals, whose boundaries approximate the result of edge detection.
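As an illustration of this construction, the sketch below computes the occluded angular intervals on individual i's visual field from neighbor positions and merges them into their union; the function and variable names are ours, and the simple sweep does not re-merge across the 0/2π seam, an approximation we accept for brevity.

```python
import numpy as np

def occlusion_intervals(r_i, neighbor_positions, body_radius):
    """Union of angular intervals occluded by round neighbors; a stand-in
    for retinal edge detection as we read the description above."""
    raw = []
    for r_j in neighbor_positions:
        d = r_j - r_i
        dist = np.hypot(d[0], d[1])
        if dist <= body_radius:
            return [(0.0, 2 * np.pi)]            # observer fully occluded
        center = np.arctan2(d[1], d[0]) % (2 * np.pi)
        half = np.arcsin(body_radius / dist)     # half-angle of the disc
        raw.append(((center - half) % (2 * np.pi),
                    (center + half) % (2 * np.pi)))
    # Split intervals that wrap past 2*pi, then merge overlaps by sweeping
    flat = []
    for a, b in raw:
        flat.extend([(a, b)] if a <= b else [(a, 2 * np.pi), (0.0, b)])
    flat.sort()
    merged = []
    for a, b in flat:
        if merged and a <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], b)
        else:
            merged.append([a, b])
    return [tuple(m) for m in merged]

# Example: one neighbor to the right of and one above the focal individual
print(occlusion_intervals(np.zeros(2),
                          [np.array([4.0, 0.0]), np.array([0.0, 6.0])], 1.0))
```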
B. DECISION
Based on the perception information, we consider the self-propulsion and alignment term (v_i^al), the repulsion term (v_i^rep) and the escape (or chase) term (v_i^CT) to define the decision equation as

v_i^int(t) = hat(φ_al · v_i^al(t) + φ_rep · v_i^rep(t) + φ_CT · v_i^CT(t)),   (1)

where v_i^int is a unit vector, hat(·) denotes a normalized vector, and the coefficients of all terms are taken to obey φ_al + φ_rep + φ_CT = 1. The self-propulsion and alignment term (Fig. 1A), which takes the velocity information obtained in the perception process as input, considers inertia and intragroup interactions to reproduce the coherence of biological groups by accumulating the velocity vectors:

v_i^al(t) = hat(Σ_{j ∈ S_i^al} v_j(t − 1)),

where i ∈ S_i^al, so that the term also carries the individual's own inertia. The repulsion term (Fig. 1B), which takes the visual information obtained in the perception process as input, considers intragroup interactions to avoid crowding local neighbors:

v_i^rep(t) = −hat(Σ_j (θ_{i,j,1} − θ_{i,j,0}) · (cos((θ_{i,j,1} + θ_{i,j,0})/2), sin((θ_{i,j,1} + θ_{i,j,0})/2))).

Similarly, the escape (or chase) term (Fig. 1C), which takes the visual information obtained in the perception process as input to make prey move away from predators and avoid capture (while predators move closer to prey to catch them), considers intergroup interactions:

v_i^CT(t) = ±hat(Σ_j (θ_{i,j,1} − θ_{i,j,0}) · (cos((θ_{i,j,1} + θ_{i,j,0})/2), sin((θ_{i,j,1} + θ_{i,j,0})/2))),

with the positive sign for predators (chase) and the negative sign for prey (escape). The last 2 terms in Eq. 1 use projections of the central unit vector of the occluded area of the visual field (cos((θ_{i,j,1} + θ_{i,j,0})/2) and sin((θ_{i,j,1} + θ_{i,j,0})/2)) to characterize the effect of direction on velocity. The closer the distance between individuals, the wider the angle of the occluded area of the visual field. Therefore, the angle of the occluded area also reflects the distance between individuals, and we take a weighted average of the angles of the occluded areas to characterize the effect of distance.
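Under this reconstruction, the decision step can be sketched as follows; the width-weighted midpoint projection follows the prose above, while the function and argument names are our own.

```python
import numpy as np

def unit(x):
    n = np.linalg.norm(x)
    return x / n if n > 0 else np.zeros_like(x)

def term_from_intervals(intervals, attract=True):
    """Width-weighted sum of interval-midpoint unit vectors, normalized;
    our reading of the repulsion and chase/escape terms."""
    vec = np.zeros(2)
    for t0, t1 in intervals:
        width, mid = t1 - t0, 0.5 * (t0 + t1)   # wider interval = closer
        vec += width * np.array([np.cos(mid), np.sin(mid)])
    return unit(vec) if attract else -unit(vec)

def decide(v_self, neighbor_velocities, rep_intervals, ct_intervals,
           phi_al, phi_rep, phi_ct, predator=True):
    """Combine the three terms into a unit heading (Eq. 1 as above)."""
    v_al = unit(v_self + sum(neighbor_velocities, np.zeros(2)))  # S_i^al incl. i
    v_rep = term_from_intervals(rep_intervals, attract=False)
    v_ct = term_from_intervals(ct_intervals, attract=predator)
    h = phi_al * v_al + phi_rep * v_rep + phi_ct * v_ct
    return unit(h) if np.any(h) else unit(v_self)
```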
C. PROPULSION
The propulsion process takes the output of the decision process. We assume that the velocity magnitude of individuals is constant and focus only on the change in the velocity direction. Predators and prey move at constant speeds v_c and v_t, respectively. Based on the above description, the positions and velocities of individuals are updated following Chaté et al. [42] through the propulsion equation

v_i(t) = v_0 · v_i^int(t),   r_i(t) = r_i(t − 1) + v_i(t) · Δt,   (7)

where v_0 = v_c for predators and v_0 = v_t for prey, and Δt is the unit time; we set the upper limit of time t as t_max. Additionally, if any predator comes closer to a prey than the radius of the capture area, r_c, the prey is captured and eliminated.
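A sketch of the corresponding update and capture rule, assuming the reconstructed form of Eq. 7; the minimum-image wrapping implements the periodic boundary conditions.

```python
import numpy as np

def propel(r, heading, speed, dt, box_l):
    """Constant-speed move at the decided heading, wrapped periodically."""
    v = speed * heading
    return (r + v * dt) % box_l, v

def apply_capture(r_pred, r_prey, alive_prey, r_cap, box_l):
    """Mark prey within the capture radius of any predator as eliminated."""
    d = r_prey[:, None, :] - r_pred[None, :, :]
    d -= box_l * np.round(d / box_l)        # minimum-image distances
    dist = np.linalg.norm(d, axis=-1)
    return alive_prey & ~(dist.min(axis=1) < r_cap)
```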
III. EVALUATION INDICATORS
According to the above Visual Perception-Decision-Propulsion model, we define the following indicators, which represent different aspects of group chase and escape. Prey may be eliminated during the simulation; therefore, we monitor the following indicators only for the group of predators. For research on group chase and escape, the total catch time t_end, i.e., the time at which all prey have been captured (N_t(t) = 0), is necessary to assess the quality of capture for predators and prey. When t = t_end or t = t_max, the simulation is stopped.
We use the indicator v_c (the group polarization) to assess the coherence of the predators, which is frequently described by the absolute value of the average normalized velocity:

v_c(t) = (1/N_c) |Σ_{i=1}^{N_c} v_i(t)/|v_i(t)||.

The closer v_c is to 1, the better the coherence of the group. However, in group chase and escape, predators chase different prey and thus form several clusters. Therefore, it is necessary to monitor the number of connected clusters of predators, N_c^cluster. To define clusters, we construct an interaction network whose nodes are the predators; an edge exists between 2 predators if they are closer to each other than r_0. Hence, N_c^cluster is the number of connected components of this network.
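Counting clusters reduces to counting connected components of the r_0-threshold graph; one way to compute it, with SciPy as our implementation choice:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def n_predator_clusters(r_pred, r0, box_l):
    """Connected components of the predator network with an edge between
    any two predators closer than r0 (periodic distances)."""
    d = r_pred[:, None, :] - r_pred[None, :, :]
    d -= box_l * np.round(d / box_l)
    dist = np.linalg.norm(d, axis=-1)
    adjacency = csr_matrix((dist < r0) & (dist > 0))
    n_components, _ = connected_components(adjacency, directed=False)
    return n_components
```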
To assess the cohesion of the predators, we use the average closest neighbor distance d_c^min, given by

d_c^min(t) = (1/N_c) Σ_{i=1}^{N_c} min_{j ≠ i} d_ij(t),

where d_ij(t) is the distance between predators i and j.
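Both remaining indicators are a few lines of NumPy; a sketch with names of our choosing (distances use the minimum-image convention of the periodic box):

```python
import numpy as np

def coherence(v_pred):
    """Absolute value of the average normalized velocity, in [0, 1]."""
    unit_v = v_pred / np.linalg.norm(v_pred, axis=1, keepdims=True)
    return np.linalg.norm(unit_v.mean(axis=0))

def avg_closest_neighbor_distance(r_pred, box_l):
    """Mean over predators of the distance to the nearest other predator."""
    d = r_pred[:, None, :] - r_pred[None, :, :]
    d -= box_l * np.round(d / box_l)
    dist = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(dist, np.inf)
    return dist.min(axis=1).mean()
```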
IV. CASE STUDY
To evaluate the quality of group chase and escape, we conduct a number of experiments to analyze the model based on the above evaluation indicators. At the beginning of each experiment, we randomly place individuals with random velocity directions into a square box of side length L with periodic boundary conditions. In our simulations, we fix 6 parameters and vary the others (Table 1). We first show 2 capture patterns and distinguish them by monitoring the evaluation indicators. Then, we evaluate the influence of 3 internal parameters to explain the cause of these patterns. Finally, 2 external empirical parameters are varied to analyze their influence on these patterns.
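Each experiment starts from such a random configuration; a compact initialization helper (parameter names are ours) that also derives the box size from the density:

```python
import numpy as np

def init_system(n_c, n_t0, rho, seed=None):
    """Random positions and heading directions in a periodic square box."""
    rng = np.random.default_rng(seed)
    n = n_c + n_t0
    box_l = np.sqrt(n / rho)              # L = sqrt((N_c + N_t(0)) / rho)
    r = rng.uniform(0.0, box_l, size=(n, 2))
    ang = rng.uniform(0.0, 2.0 * np.pi, size=n)
    v = np.c_[np.cos(ang), np.sin(ang)]
    is_predator = np.arange(n) < n_c
    return r, v, is_predator, box_l
```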
A. CAPTURE PATTERNS
The model achieves 2 capture patterns, i.e., cooperative and separative capture, under different parameters, as shown in Fig. 2. We show cooperative capture with φ_al = 0.15, φ_rep = 0.35 and φ_CT = 0.5, and separative capture with φ_al = 0.5, φ_rep = 0.45 and φ_CT = 0.05. Fig. 2A and B show the 2 patterns with r_0 = 10, N_c = 8 and N_t(0) = 1. Additionally, we conduct another experiment with r_0 = 10, N_c = 18 and N_t(0) = 18 and monitor the indicators N_t(t)/N_t(0) and N_c^cluster/N_c for the 2 patterns (Fig. 2C and D). In reality, predators cooperate to surround and capture prey when the prey is relatively larger in size than they are. The model achieves this pattern as cooperative capture. With moderate φ_rep, multiple predators keep a relatively long distance from each other to surround prey and approach to capture them because φ_CT is large and φ_CT > φ_rep (Fig. 2A). In the case of separative capture, φ_al accounts for a large proportion, so the group of predators shows better coherence. However, since the self-propulsion and alignment term contains self-propulsion, that is, the inertia of the individual when moving, even if the escape term exists, the prey cannot escape in time and will be captured by a single predator (Fig. 2B). Moreover, predators are separated due to large φ_rep, which makes it more probable that the prey will be captured by a single predator. This pattern corresponds to the situation in which predators attempt to catch a small animal in reality. Comparing the indicators N_t(t)/N_t(0) and N_c^cluster/N_c of the 2 patterns (Fig. 2C and D), we find that the value of N_c^cluster/N_c for cooperative capture is smaller than that for separative capture. In addition, with the occurrence of capture events, the value of N_c^cluster/N_c for cooperative capture increases stably after first decreasing. Multiple predators come together from different directions to form a cooperative capture. Once a prey is captured, it is eliminated; the chase term then no longer acts, and the repulsion term is dominant, which leads to the predators dispersing in all directions and finding new prey. By contrast, the value of N_c^cluster/N_c for separative capture is almost unchanged. These results suggest that our model reproduces predation events as cooperative and separative capture.

B. INTERNAL PARAMETERS

The regions in Fig. 3A correspond to the 2 capture patterns in Fig. 2, where the large blue area at the top is cooperative capture, and the small blue area at the bottom is separative capture. In the case of cooperative capture, fixing φ_al, we find that as φ_rep increases, N_c^cluster/N_c and d_c^min increase; that is, predators become sparser, and the t_end of cooperative capture first rises and then decreases. When φ_rep is too small, the distance between predators becomes narrow, and thus they cannot surround prey, which is in line with the finding of Yang et al. that the aggregation of predators increases the survival time of prey [40]. In contrast, when φ_rep is too large, predators cannot get close enough to prey to reach the capture range even if they can surround them. When φ_rep is moderate and φ_rep < φ_CT, predators can surround and approach prey to catch them while maintaining a dispersed group, which appears as cooperative capture.
As φ_al increases, the coherence of the predators improves, and the capture time of separative capture first decreases and then increases, because prey with large inertia respond sluggishly to predators when φ_al is large and φ_CT is small. In addition, the probability of separative capture increases when predators are separated. With a balance between φ_al and φ_rep and a small φ_CT, predation in the form of separative capture achieves good performance. In summary, we observe cooperative capture when the effect of the repulsion term is moderate and smaller than that of the chase term, and our model achieves separative capture under the condition that the effects of the self-propulsion and alignment term and the repulsion term are balanced and the effect of the escape term is small.
C. EXTERNAL EMPIRICAL PARAMETERS
In nature, predation between different species corresponds to different external empirical parameters (the ratio of the number of predators to prey, N_c/N_t(0), and the radius of the interaction area, r_0), so we vary them and perform 50 experiments each to explore cooperative capture (φ_al = 0.15, φ_rep = 0.35 and φ_CT = 0.5) and separative capture (φ_al = 0.5, φ_rep = 0.45 and φ_CT = 0.05). We vary N_c/N_t(0) with r_0 = 5 and N_t(0) = 10 and vary r_0 with N_c = N_t(0) = 50. As shown in Fig. 4, we analyze the role of these parameters in the 2 patterns based on the indicators t_end and N_c^cluster/N_c. As N_c/N_t(0) increases, both cooperative and separative capture show better effects (Fig. 4A and B). In the case of cooperative capture, the larger the number of predators, the better they surround the prey, leaving it less space to escape. However, when N_c/N_t(0) increases beyond a certain value, the predators already fully surround the prey, and the effect becomes optimal; that is, further increasing the number of predators cannot reduce the time required to capture prey. When N_c/N_t(0) is small, the number of predators is insufficient, and to achieve cooperative capture, predators can capture prey only one by one. Therefore, an increase in predators in this situation means an increase in N_c and a decrease in N_c^cluster/N_c. In contrast, when N_c/N_t(0) is large, predators can capture several prey at the same time, and an increase in predators leads to an increase in N_c^cluster/N_c. Because ρ and N_t(0) are constant, the density of predators increases (Eq. 10). For this reason, separative capture performs better as N_c/N_t(0) increases because the probability of predation rises, and N_c^cluster/N_c decreases as N_c/N_t(0) increases.
As r_0 increases, the t_end of cooperative capture decreases because predators can act on remote prey, surround them and then approach to capture. Because φ_CT is greater than φ_rep, the chase term makes the predators more aggregated when multiple predators perceive the same prey at the same time. Additionally, r_0 is used to construct the interaction network of predators that defines clusters, so increasing r_0 also makes the predators appear more aggregated. For separative capture, although predators can surround prey, φ_rep is greater than φ_CT, so they disperse before the capture condition is met. Therefore, when r_0 is small, multiple predators can approach the prey without the effect of the repulsion term, and as the interaction range increases, the prey can perceive remote predators and take action earlier to avoid capture. However, when r_0 is large, prey perceive too many predators, become confused and cannot choose a good direction to escape. In this case, because the definition of a cluster is related to r_0, N_c^cluster/N_c increases. From the above analysis, cooperative capture shows a better effect with a relatively large number of predators and a large interaction range. Similarly, a relatively large number of predators is required for rapid capture in the separative case, but either a small or a large interaction range is needed, which differs from the requirements of cooperative capture.
V. CONCLUSION
Group chase and escape is an important aspect of collective behavior, particularly in predation events in species interactions. Two types of individuals, i.e., predators and prey, are the main research objects: predators attempt to catch prey as soon as possible, while prey attempt to escape to avoid being caught. On the one hand, research on this topic can help us understand biological species interactions; on the other hand, a deep understanding of the collective behavior of biological groups allows us to construct human-made swarm systems with biological characteristics in confrontation scenarios. Compared with other aspects of collective behavior, less research has been performed on this aspect, and the existing models are constructed only from the phenomenological perspective. These models focus on the decision and propulsion processes of biological groups but ignore the perception process, which is a vital link between animal flocks and human-made swarm systems. Therefore, it is necessary to study group chase and escape starting from the essence of species interactions.
To compensate for these shortcomings and to help construct human-made swarm systems with biological characteristics in confrontation scenarios, we propose an individual-based Visual Perception-Decision-Propulsion model to explore group chase and escape from the sensory neuroscience perspective [43], which offers a new direction in this field. The model considers the interactions between homogeneous individuals (the self-propulsion and alignment term and the repulsion term) and between heterogeneous individuals (the escape or chase term). On the basis of this model, we found 2 capture patterns and analyzed the internal and external parameters. We observe cooperative capture when the effect of the repulsion term is moderate and smaller than that of the chase term, and it performs better with a relatively large number of predators and a large interaction range. Our model achieves separative capture under the condition that the effects of the self-propulsion and alignment term and the repulsion term are balanced and the effect of the escape term is small. A relatively large number of predators is also required for rapid capture in the separative case, but either a small or a large interaction range is needed, which differs from the requirements of cooperative capture.
Although the model effectively reproduces the group chase and escape of biological groups, it still has the following shortcomings. The model is constructed from the sensory neuroscience perspective, yet it still contains the alignment term, which requires the velocity information of neighbors; such information is difficult to obtain in human-made swarm systems and therefore presents a considerable challenge for their construction, so we need to improve this model based purely on biological vision in future work [44]. In our model, to reduce the number of parameters, predators and prey make decisions based on the same Eq. 1; in reality, however, the strategy of predators differs from that of prey. In addition, we mainly focus on the capture patterns from the predator standpoint while ignoring the escape patterns from the prey standpoint, and thus this focus needs to be broadened as well. Generally, theoretical models of collective behavior should be critically evaluated based on their correlation with real-world biological or human-made groups, which conforms to the purpose of research on collective behavior. Hence, the analysis of actual biological data should be performed in the future. Furthermore, more work is required on theoretical models of specific biological groups and on the comparison between realistic and experimental data.
Loss of soluble guanylyl cyclase in platelets contributes to atherosclerotic plaque formation and vascular inflammation
Variants in genes encoding the soluble guanylyl cyclase (sGC) in platelets are associated with coronary artery disease (CAD) risk. Here, by using histology, flow cytometry and intravital microscopy, we show that functional loss of sGC in platelets of atherosclerosis-prone Ldlr−/− mice contributes to atherosclerotic plaque formation, particularly via increasing in vivo leukocyte adhesion to atherosclerotic lesions. In vitro experiments revealed that supernatant from activated platelets lacking sGC promotes leukocyte adhesion to endothelial cells (ECs) by activating ECs. Profiling of platelet-released cytokines indicated that reduced platelet angiopoietin-1 release by sGC-depleted platelets, which was validated in isolated human platelets from carriers of GUCY1A1 risk alleles, enhances leukocyte adhesion to ECs. Importantly, pharmacological sGC stimulation increased platelet angiopoietin-1 release in vitro and reduced leukocyte recruitment and atherosclerotic plaque formation in atherosclerosis-prone Ldlr−/− mice. Therefore, pharmacological sGC stimulation might represent a potential therapeutic strategy to prevent and treat CAD.
cGMP. From a genetic perspective, private mutations in the genes encoding sGC and rare coding and common noncoding variants in the GUCY1A1 gene, which encodes the α 1 -subunit of sGC, have all been associated with CAD or premature MI by exome sequencing and genome-wide association studies (GWAS), respectively 4,5 . These mutations and variants reduce sGC expression or activity 5-7 ; in line with this, enhanced NO-cGMP signaling has been associated with reduced risk of several cardiometabolic phenotypes including CAD and peripheral artery disease 8 . GUCY1A1 and GUCY1B1, the genes encoding sGC, are expressed at high levels in platelets and vascular smooth muscle cells. In humans, carriers of the CAD risk variant rs7692387 have reduced sGC α 1 protein levels, which might impair the effects of the natural platelet inhibitor NO 7 . Indeed, a retrospective analysis of two randomized trials revealed that inhibition of platelet activity by aspirin successfully reduced cardiovascular events in primary prevention only in homozygous carriers of the GUCY1A1 risk allele 9 . Since the overall role of platelets in atherosclerosis is controversial 10 , we decided to delete sGC specifically in mouse platelets by knocking out its β 1 -subunit to investigate the contribution of platelet sGC to atherosclerosis and vascular inflammation and to further evaluate the potential of sGC stimulation as a therapeutic strategy.
Platelet sGC and leukocyte adhesion in vitro
To follow up on the enhanced adhesion of leukocytes to atherosclerotic plaques in mice lacking platelet sGC under proatherogenic conditions, we tested whether this phenotype can be reproduced in vitro. Therefore, we isolated blood monocytes and neutrophils from WT mice and incubated these with WT aortic ECs in the presence of activated platelet releasate from Pf4-Cre + Gucy1b1 +/LoxP or Pf4-Cre + Gucy1b1 LoxP/LoxP mice. We found that incubation with the supernatant of activated platelets lacking sGC enhanced leukocyte adhesion, particularly of monocytes (33,125 ± 1,313 versus 27,039 ± 555 relative fluorescence units (RFU), P = 0.006, n = 8; Fig. 2a) and neutrophils (33,810 ± 1,139 versus 27,824 ± 758 RFU, P < 0.001, n = 8; Fig. 2b). To delineate whether leukocytes or ECs are activated by the sGC knockout platelet releasate, we preincubated neutrophils or monocytes and ECs with activated platelet supernatant from Pf4-Cre + Gucy1b1 LoxP/LoxP mice before performing the adhesion assay. We found that preincubation of ECs with supernatant from activated sGC knockout platelets increased adhesion compared to preincubation of neutrophils (31,383 ± 1,731 versus 22,254 ± 1,662 RFU, P adj < 0.001, n = 12 experiments; Fig. 2c) or monocytes (Extended Data Fig. 5). In line with this, already at this very early time point, we found enhanced expression of the adhesion molecule Vcam1 in ECs that were incubated with supernatant of activated Pf4-Cre + Gucy1b1 LoxP/LoxP platelets (Extended Data Fig. 6). These data indicate (1) that platelets from Pf4-Cre + Gucy1b1 +/LoxP and Pf4-Cre + Gucy1b1 LoxP/LoxP mice differentially release a soluble factor and (2) that this factor preferentially leads to activation of ECs.
Reduced release of angiopoietin-1 by sGC-deficient platelets
We observed the influence of a soluble factor released by platelets on leukocyte adhesion to ECs. To identify such factors, we next performed cytokine profiling of the supernatant of activated Pf4-Cre + Gucy1b1 LoxP/LoxP and Pf4-Cre + Gucy1b1 +/LoxP platelets. Signal intensity analysis revealed lower angiopoietin-1 (ANGPT1) levels in the supernatant from Pf4-Cre + Gucy1b1 LoxP/LoxP platelets (3.2 ± 0.4 (n = 7) versus 6.7 ± 0.8 (n = 8) arbitrary units (a.u.), P = 0.002; Fig. 3a and Extended Data Fig. 7). We next aimed at replicating this finding in an independent cohort of mice using an enzyme-linked immunosorbent assay (ELISA). Importantly, ANGPT1 levels were comparable in quiescent platelets (0.21 ± 0.01 versus 0.23 ± 0.01 pg 10 −3 platelets, n = 6 each, P = 0.33; Fig. 3b) as well as in platelet-poor plasma (PPP) from Pf4-Cre + Gucy1b1 LoxP/LoxP and Pf4-Cre + Gucy1b1 +/LoxP mice (0.6 ± 0.2 ng ml −1 (n = 6) versus 0.4 ± 0.1 ng ml −1 (n = 5), P = 0.46; Fig. 3c). In line with the explorative analysis displayed in Fig. 3a, we found that Pf4-Cre + Gucy1b1 LoxP/LoxP platelets release reduced amounts of ANGPT1 on activation (30.4 ± 6.4 versus 60.3 ± 4.7 ng ml −1 , n = 6 each, P = 0.004; Fig. 3d). ANGPT1 decreases, in particular, vascular endothelial growth factor (VEGF)-mediated adhesion of leukocytes to ECs 11 and binds to the Tie2 receptor on ECs 12 . We used the Tie2 inhibitor BAY-826 to investigate whether Tie2 inhibition influences leukocyte adhesion and found a 17% (±1.2%, n = 11 experiments, P = 0.04) increase in leukocyte adhesion secondary to inhibition of the ANGPT1 receptor (Fig. 3e). These data demonstrate that ANGPT1 represents a candidate for mediating the effects of platelet sGC on leukocyte recruitment. Of note, the inositol 1,4,5-trisphosphate receptor-associated cGMP-kinase substrate (IRAG), which represents a downstream effector of cGMP specifically in modulating platelet activity, is encoded by the IRAG1 (ref. 13 ) gene (previously IRAG, or MRVI1 for the human homolog) and has also been associated with CAD by GWAS 14 . To investigate whether differential ANGPT1 release is mediated via IRAG, we generated Pf4-Cre + Irag1 LoxP/LoxP mice and investigated platelet ANGPT1 release compared to the respective controls. In contrast to Pf4-Cre + Gucy1b1 LoxP/LoxP platelets, we did not detect a difference between the genotypes in this experiment, indicating that the influence of sGC on platelet ANGPT1 release is independent of IRAG (Extended Data Fig. 8). It was previously shown that the genotype of the rs7692387 CAD risk allele at the GUCY1A1 locus 4 influences GUCY1A1 expression in different tissues 8 and sGC α 1 protein levels in platelets in particular 7 . Therefore, we tested whether the rs7692387 genotype is associated with ANGPT1 release from platelets in healthy human individuals (n = 5 each; for characteristics see Supplementary Table 3) and found that homozygous carriers of the CAD risk allele G display lower ANGPT1 release compared to heterozygous or homozygous carriers of the non-risk allele (4.5 ± 0.7 versus 8.3 ± 1.4 ng ml −1 , P = 0.04; Fig. 3f).
To explore the role of ANGPT1 in relation to the genes encoding sGC in humans, we queried the STARNET database, which contains bulk RNA sequencing (RNA-seq) data from seven cardiometabolic tissues from patients undergoing coronary artery bypass graft surgery 15,16 . ANGPT1 was detected per tissue in seven distinct coexpression modules (Fig. 3g). Of note, ANGPT1 was coexpressed with both GUCY1A1 and GUCY1B1 (encoding sGC α 1 and sGC β 1 ) in all tissues except the mammary artery, represented by coexpression module 63 (Fig. 3g). These data suggest a ubiquitous presence of platelets across these tissues and clinical variation in their amounts. To investigate circulatory rather than tissue-resident platelets, we further analyzed coexpression module 11 from whole-blood samples; this module consisted of 1,016 genes and was estimated to account for 4.9% of CAD heritability by considering expression quantitative trait locus (eQTL) genes in a meta-analysis of nine GWAS using the restricted maximum likelihood method 17 . Coexpression analysis revealed a positive correlation between GUCY1A1/GUCY1B1 and ANGPT1 expression and enrichment for genes involved in the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway 'platelet activation' (Fig. 3h). Altogether, this suggests that a substantial proportion of CAD heritability could be mediated by platelets. Furthermore, Gene Ontology (GO) enrichment analysis of this module (Supplementary Table 4) revealed, among others, 'response to wounding' (P = 4.71 × 10 −63 ), 'wound healing' (P = 1.84 × 10 −60 ), 'blood coagulation' (P = 1.99 × 10 −48 ) and 'regulation of locomotion' (P = 6.64 × 10 −44 ). These findings support the role of platelets and the interaction of ANGPT1 with sGC in human CAD.
Modulation of ANGPT1 release by sGC stimulation
The sGC represents a druggable target, and sGC stimulators are used in different clinical scenarios, for example, riociguat in pulmonary arterial hypertension and chronic thromboembolic pulmonary hypertension 18 or vericiguat in chronic heart failure 19 . We next aimed at investigating whether modulation of sGC using a vericiguat-like sGC stimulator (BAY-747) can influence the release of ANGPT1 and vascular inflammation. To this end, we first incubated platelets from WT mice with BAY-747 or vehicle and analyzed ANGPT1 release and leukocyte adhesion to ECs. Of note, the platelets of Pf4-Cre + Gucy1b1 LoxP/LoxP mice are not responsive to BAY-747 regarding, for example, platelet aggregation (Extended Data Fig. 9). Stimulation of sGC with BAY-747 in WT platelets doubled ANGPT1 release (133.8 ± 6.5 ng ml −1 versus 64.0 ± 4.3 ng ml −1 , n = 4 each, P = 0.002; Fig. 4a) and reduced neutrophil adhesion to ECs by 15.6% (±6.5%, n = 7 each, P = 0.02; Fig. 4b). To determine which downstream cGMP pathways other than IRAG could influence ANGPT1 release on platelet activation by shaking, we performed serine/threonine kinase profiling in WT platelets that were preincubated with BAY-747 and then activated by shaking. As displayed in Supplementary Table 5, sGC stimulation led to a marked increase in the activation of kinases from the AGC group, which includes protein kinase C (PKC) and protein kinase G (PKG). To determine whether the PKC pathway is involved in ANGPT1 release and whether canonical cGMP signaling is involved in its regulation, we next inhibited the inositol 1,4,5-trisphosphate receptor (IP 3 -R), PKC and PKG directly and measured ANGPT1 release secondary to activation by shaking (Fig. 4c). Compared to vehicle, inhibition of IP 3 -R (−23.8 ± 2.8 ng ml −1 , n = 5, P adj = 0.01), PKC (−23.9 ± 1.4 ng ml −1 , n = 6, P adj < 0.001) and PKG (−21.3 ± 3.5 ng ml −1 , n = 6, P adj = 0.004) reduced ANGPT1 release. In contrast, inhibition of further downstream pathways did not result in altered ANGPT1 release (Fig. 4d). In an analysis of phosphorylated peptides, main targets of NO-cGMP signaling were detected, with endothelial NO synthase and vasodilator-stimulated phosphoprotein being among the top ten phosphorylated targets (Supplementary Table 6). These data indicate that ANGPT1 release by platelets induced by shaking is mediated via the PKC pathway and modulated by cGMP via canonical cGMP signaling.
Therapeutic potential of stimulating sGC in atherosclerosis
To investigate whether sGC stimulation in vivo can reduce leukocyte recruitment from blood to the vascular wall, we fed Ldlr −/− mice a Western diet containing 0 or 150 parts per million (ppm) BAY-747 and adoptively transferred green fluorescent protein (GFP) + myeloid cells after 6 weeks. We retrieved GFP + monocytes admixed with neutrophils from naïve transgenic Ubc-GFP mice (in which all leukocytes express GFP) and injected the cells intravenously into Ldlr −/− mice (whose cells are GFP − ). After a further 24 h, mice were killed and the numbers of GFP + cells were determined using flow cytometry of aortic cell suspensions. Mice that received the Western diet containing the sGC stimulator displayed reduced numbers of aortic GFP + cells (21.4 ± 3.3 versus 42.8 ± 6.9 cells, n = 8 each, P = 0.01; Fig. 5a). Together with the data displayed in Fig. 4, these results indicate that pharmacological sGC stimulation can modulate platelet ANGPT1 release, in vitro leukocyte adhesion and in vivo leukocyte recruitment.
To determine whether such treatment can reduce atherosclerotic plaque formation, we again fed atherosclerosis-prone Ldlr −/− mice a Western diet for ten weeks. In one group, the diet contained 150 ppm BAY-747 (treatment group) while the other group received a Western diet without (0 ppm) BAY-747 (control group). Both groups had elevated serum cholesterol levels without significant difference between the two groups; similarly, platelet count, blood leukocytes and body weight after the diet were comparable (Extended Data Fig. 10a-d). Under steady-state conditions, BAY-747 plasma levels in male and female mice receiving BAY-747 were 61.7 ± 1.4 μg l −1 and 40.3 ± 1.7 μg l −1 (n = 6, each), respectively; in the control group, BAY-747 was not detectable in plasma as expected (Extended Data Fig. 10e). In the aortic root, we found significantly reduced atherosclerotic plaque formation in mice of the treatment group (62.5 ± 16.1 μm 2 (n = 7) versus 123.1 ± 18.6 μm 2 (n = 9), P = 0.03; Fig. 5b). Furthermore, we detected fewer leukocytes in the aortic roots of those animals (30.3 ± 5.8% (n = 8) versus 50.5 ± 6.3% (n = 11) of plaque area, P = 0.04; Fig. 5c).
Discussion
NO-sGC-cGMP signaling has important functions in several cell types. For instance, increasing intracellular cGMP levels inhibits the migration of vascular smooth muscle cells and the aggregation of platelets, respectively 20,21 . The observation that genes encoding key proteins in this pathway were associated with CAD and premature MI by GWAS 4,14,22,23 and exome sequencing studies 5,8,24 makes an important role in the pathophysiology of coronary atherosclerosis likely. In this study, we sought to specifically investigate the role of platelet sGC in atherosclerosis because we found that carriers of the common, noncoding risk variant rs7692387 identified by GWAS 4 displayed reduced expression of sGC in platelets and, as a consequence, reduced inhibition of platelet aggregation on NO stimulation 7 . In addition, we observed that, in contrast to the general population, individuals who are homozygous for this variant benefited from aspirin treatment in primary prevention of cardiovascular events 9 . In a series of in vivo and in vitro experiments, we observed larger atherosclerotic plaques in the aortic roots of mice lacking platelet sGC compared to sGC WT mice. Platelet sGC has recently been described to act as an endogenous brake on platelet aggregation 25 . Indeed, it has been hypothesized that platelets adhering to endothelial or plaque erosions may be activated more or less depending on the availability of sGC. This may stimulate inflammation locally and, subsequently, facilitate atherosclerotic plaque formation. While this is a hypothesis, we report evidence that platelet sGC influences the release of the soluble factor ANGPT1, with reduced amounts released by platelets lacking sGC. The role of ANGPT1 in atherosclerosis is controversial since there are reports describing a protective 11,12,26 and a deleterious role 27 . However, given the cellular findings describing the inhibitory effect of ANGPT1 on leukocyte adhesion and vascular atherosclerosis reported in the literature 11 , which were confirmed in this study, and the beneficial role of ANGPT1-Tie2 signaling in inflammatory diseases in general 28 , it represents an interesting candidate for mediating the downstream effects of platelet sGC signaling. In addition, its effects might depend on co-released factors. Importantly, the role of sGC in platelets exceeds that of a brake on aggregation: while sGC passivates platelet aggregation, it enhances the release of ANGPT1 when platelets are modestly activated. The notion that this effect is independent of aggregation is further supported by the findings (1) that IRAG, the mediator of cGMP-dependent inhibition of platelet aggregation 29 , was not significantly involved in ANGPT1 release and (2) that ANGPT1 release was induced by shaking but not by, for example, ADP or arachidonic acid. ANGPT1 release rather seems to be mediated by PKC activity and modulated via canonical cGMP signaling. It is important to acknowledge that the content of ANGPT1 in WT and knockout platelets was similar, further indicating a direct link between sGC availability and ANGPT1 release. This is further emphasized by the finding that a genetically determined reduction of, but not lack of, sGC in the platelets of carriers of the CAD-associated risk variant rs7692387 was associated with a reduction in ANGPT1 release.
This, on the one hand, supports the translational relevance of these inherently artificial in vitro and animal studies; on the other hand, it raises the question of whether modulators of sGC function could be used to prevent and treat coronary atherosclerosis. Indeed, stimulators of sGC are emerging as therapeutic compounds for different cardiovascular diseases (for an overview see Sandner et al. 30 ). Recently, the sGC stimulator vericiguat was found to lower the risk of death from cardiovascular causes or hospitalization for heart failure in patients with heart failure with reduced ejection fraction 19 . However, data on atherosclerotic plaque formation and ischemic cardiovascular events are lacking thus far.
To this end, we investigated whether sGC stimulation using a vericiguat-like stimulator, BAY-747, can modulate the reported cellular and phenotypic effects. First, we found that sGC stimulation can increase ANGPT1 release and subsequently reduce leukocyte adhesion to ECs in vitro. Since we postulate that sGC contributes to reducing vascular inflammation, we next studied whether sGC stimulation can alter the recruitment of inflammatory leukocytes from blood to plaque. In an adoptive transfer experiment, we observed a reduction in leukocyte recruitment, which is in line with previous reports showing anti-inflammatory effects of cGMP-increasing pharmacological compounds 31,32 . The finding of multiple genome-wide significant hits in genes that encode proteins tightly involved in the formation (NOS3 (ref. 22 ), GUCY1A1 (ref. 4 )) and fate (PDE5A 23,24 ) of cGMP or in mediating its downstream effects (IRAG1 (ref. 14 ), PDE3A 33 ), together with the observation that sGC levels are reduced in atherosclerotic tissues 34 , increases the likelihood that targeting sGC might be beneficial in CAD. In atherosclerosis-prone mice on a Western diet, stimulating sGC with BAY-747 led to a reduction in atherosclerotic plaque formation and vascular inflammation, warranting further investigation into this promising pharmacological treatment strategy. Further evidence might be taken from a recent study that compared two pharmacological strategies used to treat erectile dysfunction: compared to alprostadil, that is, prostaglandin E1, treatment with the phosphodiesterase 5A inhibitor sildenafil was associated with a reduced risk of all-cause mortality and MI in men suffering from CAD 35 . However, to definitively prove a beneficial role in CAD, prospective clinical trial data are needed.
Taken together, we provide further evidence for a crucial role of platelets in atherosclerosis in general, and of platelet sGC in particular. As shown by our in vitro studies using human and murine biospecimens and our in vivo studies, we postulate an endogenous inhibitory role of platelet sGC on EC-mediated leukocyte recruitment (Fig. 6). We are aware that platelets are not the only cell type in which sGC activity and genetic variants modulating its availability influence CAD risk. Modulating sGC activity, especially using stimulators, might nevertheless be a promising therapeutic strategy whose effects exceed those on platelet sGC.
Our study has several limitations. First, this is an in vitro and mouse in vivo study that cannot fully recapitulate human physiology and pathophysiology. However, the finding that lack of platelet sGC in mice and genetically determined reduced platelet sGC α 1 in humans both reduce the release of platelet ANGPT1 supports a possible translation. Furthermore, similar to humans, a genetically determined reduction in Gucy1a1 messenger RNA was associated with increased atherosclerotic plaque formation in the hybrid mouse diversity panel 7,36 . Second, although we have shown that reduced ANGPT1 release and enhanced leukocyte adhesion to ECs are linked to reduced or lacking platelet sGC availability and that stimulation of sGC can modulate these downstream effects, the exact molecular mechanism linking sGC and ANGPT1 release remains to be explored. While it is well known that platelets contain at least three types of secretory granules, with α-granules, which also harbor ANGPT1 (ref. 37 ), being the most abundant type, there is also evidence for the existence of functionally distinct subpopulations within α-granules 38 , which may allow selective release of their contents by different stimuli 39,40 . A similar observation has been reported in neutrophils regarding the storage of ANGPT1 and VEGF 41 . We further provide evidence that this release is independent of IRAG but mediated via canonical cGMP signaling. Third, we know that ANGPT1 is likely not the only mediator of sGC effects on atherosclerosis. Rather, ANGPT1 might modulate the effect of other cytokines and mediators that are released by platelets on modest activation. Fourth, although we show that platelet sGC activity influences leukocyte recruitment and atherosclerotic plaque formation, the benefit of systemic sGC stimulators might be influenced by effects on sGC in other cell types, for example, vascular smooth muscle cells. Lastly, we cannot generalize a benefit of sGC stimulation to human platelets. Yet our study can, in addition to its biological implications, be regarded as hypothesis-generating for future clinical trials investigating whether modulating cGMP formation is useful in addition to reducing, for example, low-density lipoprotein cholesterol levels and residual inflammatory risk.
Human samples
The study protocol was approved by the local ethics committee of the Technical University of Munich (no. 387/17S). Blood was collected from healthy volunteers after they provided written informed consent. To determine the genotype of the individuals, DNA was isolated from whole blood using the Puregene Blood Kit (catalog no. 158489; QIAGEN) according to the manufacturer's protocol. Samples were genotyped for the GUCY1A1 risk variant using an rs7692387 TaqMan Genotyping Assay (C__29125113_10) on a ViiA 7 system (both Thermo Fisher Scientific).
Histology, immunohistochemistry and en face staining
Aortic roots were embedded in optimal cutting temperature compound (catalog no. 62550; Sakura Finetek) and snap-frozen at −80 °C. Frozen samples were cut into 5-μm sections and applied to microscope slides. Starting at the onset of the aortic valves, every fifth slide was subjected to tissue staining. For the Masson's trichrome stain, the procedure of Masson as modified by Lillie was applied according to the manufacturer's instructions (Sigma-Aldrich). In brief, frozen sections were hydrated and fixed in 4% paraformaldehyde before mordanting in Bouin's solution (catalog no. HT10132; Sigma-Aldrich) at 56 °C for 15 min. Afterwards, cell nuclei were stained in Weigert's iron hematoxylin solution (catalog no. HT1079; Sigma-Aldrich) and darkened in running tap water. Specimens were successively subjected to Biebrich scarlet-acid fuchsin solution, phosphotungstic/phosphomolybdic acid solution and aniline blue solution (catalog no. HT15; Sigma-Aldrich) for staining of cytoplasm, muscle and collagen structures, respectively. After rinsing in 1% acetic acid, slides were dehydrated in an ascending ethanol series followed by xylene and covered with mounting medium. Mean total plaque size (in μm 2 ) was evaluated for sections showing at least two complete cusps by manually selecting the plaque area in ImageJ2 (version 2.3.0). For immunohistochemistry, specimens were fixed in ice-cold acetone, blocked in 10% rabbit serum (catalog no. S5000; Vector Laboratories) in PBS with Tween-20 (0.2%) and stained with anti-CD11b antibody (1:200, catalog no. 101202; BioLegend) or anti-monocyte + macrophage (MOMA) antibody (1:50, catalog no. ab33451; Abcam) overnight, followed by incubations in horseradish peroxidase (HRP)-conjugated rabbit anti-rat secondary antibody (1:200, catalog no. ab6734; Abcam) and AEC substrate (catalog no. ab64252; Abcam). The primary antibodies have been validated by the manufacturers for use in immunohistochemistry on frozen mouse samples. Subsequently, cell nuclei were counterstained with Gill's hematoxylin solution II (catalog no. 1051752500; Merck Millipore). CD11b or monocyte and macrophage content was quantified as the CD11b- or MOMA2-positive area, respectively, per total plaque area by means of automated color thresholding in ImageJ2. For en face analyses, aortae were dissected from the heart to the iliac bifurcation, cleaned of surrounding tissue and fixed for 24 h at 4 °C in a 4% solution of paraformaldehyde in PBS. Samples were washed in 60% isopropanol and incubated for 30 min at 37 °C in a freshly filtered solution of 3 mg ml −1 Oil Red O (ORO) (catalog no. O0625; Sigma-Aldrich) in 60% isopropanol. After washing off excess dye in 60% isopropanol, aortae were opened longitudinally, pinned on a black pad and imaged on a Stemi 2000-C microscope with an Axiocam ERc 5s camera using the ZEN 2.3 blue software (version 2.3.69.1000, Carl Zeiss). The percentage of the lesion area was determined manually as the ORO-positive area of the total en face aortic area from the aortic root to the branch of the right renal artery using ImageJ2. All staining and analyses were performed in a blinded fashion.
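The quantification itself was done with manual selection and automated color thresholding in ImageJ2; purely to illustrate the arithmetic (positive-pixel fraction within a plaque mask), here is a hypothetical Python analogue using scikit-image. The hue window for the red AEC chromogen is an arbitrary placeholder, not a validated threshold, and the mask would be rasterized from the manually drawn plaque outline.

```python
import numpy as np
from skimage import io, color

def positive_area_fraction(image_path, plaque_mask, hue_range=(0.95, 0.08)):
    """Fraction of the plaque mask whose hue falls in a reddish window;
    a schematic stand-in for the ImageJ2 color-threshold measurement."""
    rgb = io.imread(image_path)[..., :3] / 255.0   # assumes 8-bit RGB input
    hue = color.rgb2hsv(rgb)[..., 0]
    lo, hi = hue_range
    is_red = (hue >= lo) | (hue <= hi)             # red hues wrap around 0
    return (is_red & plaque_mask).sum() / plaque_mask.sum()
```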
Megakaryocytes were collected by performing two rounds of bovine serum albumin (BSA) density gradient filtration 48 . Briefly, cells were resuspended in PBS, placed on top of two layers of a 1.5 and 3% BSA solution and incubated for 40 min. Sedimented cells were subjected to a second round of density gradient filtration, and the purified megakaryocytes obtained were resuspended in 500 μl of TRIzol (catalog no. 15596026; Thermo Fisher Scientific) and stored at −80 °C for further processing.
Isolation of nucleic acids and (quantitative) PCR
After adding chloroform, samples were shaken vigorously and centrifuged at 12,000g for 15 min and 4 °C. The upper phase containing RNA was further processed using the RNeasy Mini Kit (catalog no. 74139; QIAGEN) according to the manufacturer's recommendations. RNA was quantified using a NanoQuant Plate on an Infinite M200 PRO plate reader (TECAN) and RNA integrity was measured on a 2100 Bioanalyzer (Agilent Technologies).
After the RNA was transcribed into complementary DNA using the High-Capacity RNA-to-cDNA kit (catalog no. 4388950; Applied Biosystems), real-time quantitative PCR (qPCR) was performed using the TaqMan Fast Universal PCR Master Mix (catalog no. 4366072) and TaqMan probes (Gucy1a1, Mm01220285_m1; Gucy1b1, Mm00516926_m1; Gucy1a2, Mm01253540_m1; Gucy1b2, Mm00555742_m1; Vcam1, Mm01320970_m1; Gapdh, Mm99999915_g1; all Thermo Fisher Scientific). Reactions were performed in a total volume of 10 μl on a ViiA 7 system (Thermo Fisher Scientific). Gapdh was used as a housekeeping gene and data were evaluated by conversion to ΔCt values.
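Evaluation by ΔCt is a subtraction against the Gapdh Ct of the same sample; a minimal sketch (the 2^(−ΔCt) transform is shown only as an optional convention, not a result format used here):

```python
import numpy as np

def delta_ct(ct_target, ct_gapdh):
    """Delta-Ct against the housekeeping gene; lower values mean higher
    relative expression of the target."""
    return np.asarray(ct_target, float) - np.asarray(ct_gapdh, float)

def relative_expression(ct_target, ct_gapdh):
    """Optional 2^(-deltaCt) fold-expression transform."""
    return 2.0 ** -delta_ct(ct_target, ct_gapdh)

# Example with made-up Ct values for Vcam1 and Gapdh from two wells
print(delta_ct([26.1, 25.4], [18.0, 18.2]))   # approx. [8.1 7.2]
```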
Protein extraction
Lung and aorta were collected from Pf4-Cre + Gucy1b1 +/LoxP and Pf4-Cre + Gucy1b1 LoxP/LoxP mice after perfusion of the organs with PBS and snap-frozen in liquid nitrogen. For protein isolation, specimens were placed in ice-cold radioimmunoprecipitation assay (RIPA) buffer (catalog no. 9806S; Cell Signaling Technology) supplemented with 1:100 protease inhibitor cocktail (catalog no. 1861278; Thermo Fisher Scientific) and disrupted using an electric tissue homogenizer on ice. To isolate peripheral blood mononuclear cells (PBMCs), heparinized full blood was applied onto a layer of Ficoll-Paque Premium (catalog no. 17-5442-02; GE Healthcare) and centrifuged at 400g for 30 min without brake. The interface containing PBMCs was transferred to new tubes, washed and incubated in RBC lysis buffer (BioLegend) for 5 min. Cells were resuspended in RIPA buffer supplemented with 1:100 protease inhibitor cocktail. For the generation of thrombocyte lysates, platelets were collected from heparinized full blood as stated previously and resuspended in RIPA buffer supplemented with protease inhibitor at 2 × 10 8 cells per ml. Cells were disrupted by intermittent sonication 3 times for 30 s in an ice bath. Protein concentrations were determined using a bicinchoninic acid assay (catalog no. 23227; Thermo Fisher Scientific) according to the manufacturer's protocol.
Generation of supernatant from activated platelets
A total of 800 μl each of blood from Pf4-Cre + Gucy1b1 +/LoxP mice and Pf4-Cre + Gucy1b1 LoxP/LoxP mice was collected in heparinized tubes, gently diluted in PBS and centrifuged for 10 min at 100g and room temperature without active deceleration. The platelet-rich plasma (PRP) was subjected to a second centrifugation step at 700g to obtain platelets, which were resuspended in Roswell Park Memorial Institute (RPMI) 1640 medium (catalog no. A1049101; Thermo Fisher Scientific) and activated by orbital shaking for 30 min at 1,000 rpm. Samples were centrifuged at 12,000g for 10 min and the supernatant was directly used in subsequent experiments. Thrombocyte counts were analyzed simultaneously with an automated hematology analyzer (Sysmex Corporation).
For sGC stimulation, the platelet suspension of WT mice was split into two vials containing either BAY-747 (150 ppm (150 mg l −1 ) final concentration) or vehicle (DMSO, 0.4%), mixed gently and incubated for 30 min before shaking.
Blood from healthy volunteers was collected in hirudin-coated tubes (Sarstedt) and centrifuged for 13 min at 170g and room temperature without active deceleration. PRP was transferred into new tubes and activated by orbital shaking for 30 min at 1,000 rpm. Afterwards, samples were centrifuged at 12,000g for 10 min and the supernatant was stored at −80 °C before conducting subsequent experiments. Thrombocyte counts were determined from PRP as stated above.
Primary murine aortic ECs (catalog no. C57-6052, mAoEC; Cell Biologics) were cultured in complete EC medium (catalog no. PB-M1168; PeloBiotech) in a humidified incubator with 5% CO 2 at 37 °C and grown to confluency in 96-well plates for experiments.
Adhesion assays were performed using the CytoSelect Leukocyte-Endothelium Adhesion Assay (catalog no. CBA-210; Cell Biolabs) according to the manufacturer's instructions. Briefly, leukocytes were fluorescently labeled with LeukoTracker solution, resuspended in RPMI 1640 and added to mAoECs at 2.5 × 10 5 cells per well. Cells were incubated for 1 h at 37 °C in the presence of 50 μl of activated platelet supernatant. Plates were washed three times to remove nonadherent cells, cells were lysed in 1× lysis buffer and fluorescence was measured on an Infinite M200 PRO plate reader (Exc = 485 nm and Em = 535 nm; TECAN). Experiments were conducted in triplicate.
For the preincubation experiments, either neutrophils/monocytes or ECs were exclusively incubated with activated platelet supernatant of WT mice for 30 min before performing the adhesion assay, omitting the additional administration of platelet supernatant in this step.
Incubation of ECs with activated platelet supernatant
mAoECs were grown to confluence in 12-well dishes and stimulated with activated platelet supernatant from either Pf4-Cre + Gucy1b1 +/LoxP or Pf4-Cre + Gucy1b1 LoxP/LoxP mice in RPMI 1640 for 1 h at 37 °C. Cells were lysed by adding 500 μl TRIzol and stored at −80 °C.
Cytokine profiling and enzyme-linked immunosorbent assays
Cytokine profiling assays were performed using the Proteome Profiler Mouse XL Cytokine Array (catalog no. ARY028; R&D Systems) according to the manufacturer's protocol. Briefly, after isolating and activating the platelets of Pf4-Cre + Gucy1b1 +/LoxP and Pf4-Cre + Gucy1b1 LoxP/LoxP mice by shaking in RPMI as described above, samples were added to the supplied antibody-spotted nitrocellulose membrane and incubated at 4 °C overnight. Captured proteins were detected using a mixture of biotinylated detection antibodies followed by streptavidin-HRP and visualized using chemiluminescent detection reagents. Signal intensities were detected with an ImageQuant LAS 4000 imaging system and analyzed using the appropriate image analysis software (ImageQuant LAS TL, version 8.1; GE Healthcare Life Sciences). The signal intensities of target proteins were normalized to the signal intensities of the reference spots in each sample. The procedure was repeated for a total of six samples per group.
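The reference-spot normalization amounts to a simple ratio per membrane; a minimal sketch with hypothetical intensities:

```python
import numpy as np

def normalize_to_reference(spot_intensities, reference_intensities):
    """Scale analyte spot intensities by the mean reference-spot signal so
    that membranes processed separately become comparable."""
    return np.asarray(spot_intensities, float) / np.mean(reference_intensities)

# Hypothetical duplicate ANGPT1 spots and three reference spots
print(normalize_to_reference([1520.0, 1485.0], [4890.0, 5010.0, 4950.0]))
```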
ANGPT1 ELISAs were performed to determine murine (catalog no. EK1296; Boster Biological Technology) and human (catalog no. DANG10; R&D Systems) protein levels according to the manufacturer's recommendations.
Coexpression analyses in STARNET
Aligned multitissue RNA-seq samples from STARNET 15 were pseudolog transformed and normalized using L2 penalized regression with penalty term 1.0, adjusting for the covariates: sequencing laboratory; read length; RNA extraction protocol (PolyA and Ribo-Zero); age; and sex. Additional adjustments included the first four surrogate variables detected by surrogate variable analysis 50 and flow cell information after singular value decomposition retaining components with eigenvalues >4. Coexpression modules were inferred using weighted gene coexpression network analysis 51 with β = 5.2 for tissue-specific and β = 2.7 for cross-tissue correlations, resulting in both tissue-specific and cross-tissue coexpression networks as described previously 52 . These data and analyses were accessed through the STARNET browser (http://starnet.mssm.edu) by querying coexpression modules containing ANGPT1. KEGG pathway and GO enrichment was carried out on coexpression module 11 for transcripts derived from whole blood (1,012 out of 1,016 transcripts) using Enrichr 53 .
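The weighted-coexpression step can be summarized by the WGCNA soft-threshold adjacency, a_ij = |cor(x_i, x_j)|^β, with the β values quoted above. The sketch below outlines the idea under those assumptions; it is not the STARNET pipeline, and the toy expression matrix is invented.

```python
import numpy as np

def wgcna_adjacency(expr, beta):
    """Unsigned WGCNA adjacency: a_ij = |Pearson cor(gene_i, gene_j)|**beta.
    expr: genes x samples matrix of normalized expression."""
    corr = np.corrcoef(expr)
    np.fill_diagonal(corr, 0.0)          # drop self-edges
    return np.abs(corr) ** beta

rng = np.random.default_rng(0)
expr = rng.normal(size=(200, 60))        # toy: 200 genes x 60 samples
adj_tissue = wgcna_adjacency(expr, beta=5.2)  # tissue-specific networks
adj_cross = wgcna_adjacency(expr, beta=2.7)   # cross-tissue networks
print(adj_tissue.shape)
```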
Flow cytometry
Mice were killed under isoflurane anesthesia and blood was collected in EDTA-coated microvettes (Sarstedt). For in vivo staining of circulating blood leukocytes, an antibody directed against CD45-BV605 (1:10 in 100 μl PBS, clone 30-F11, catalog no. 103140; BioLegend) was injected intravenously 5 min before killing the animals. After lysing RBCs in 1× RBC lysis buffer, samples were washed and resuspended in fluorescence-activated cell sorting (FACS) buffer (PBS containing 0.5% BSA). Aortas were perfused through the left ventricle with PBS and excised from the root to the common iliac artery bifurcation after removing perivascular fat and other surrounding tissue, minced using fine scissors and digested in 450 U ml⁻¹ collagenase I (catalog no. C0130), 125 U ml⁻¹ collagenase XI (catalog no. C7657), 60 U ml⁻¹ DNase I (catalog no. D5319) and 60 U ml⁻¹ hyaluronidase (catalog no. H3506; all Sigma-Aldrich) for 1 h at 37 °C under agitation. Cell suspensions were filtered through 40-μm nylon cell strainers (BD Biosciences), washed and resuspended in FACS buffer.
PamGene serine/threonine kinase array
Washed platelets from WT mice were incubated with vehicle or BAY-747 (150 ppm) and activated by shaking as described above. Cells were centrifuged at 1,200g for 5 min and resuspended in ice-cold M-PER lysis buffer (catalog no. 78503) supplemented with protease inhibitor (1:100) and phosphatase inhibitor (1:100, catalog no. 78428; all Thermo Fisher Scientific) to a concentration of 200 × 10⁶ platelets per 100 μl. Platelets were lysed for 15 min on ice. Lysates were centrifuged at 20,000g for 15 min and stored at −80 °C. Serine/threonine kinase profiles were determined using the PamChip serine/threonine kinase assay (PamGene International) as described previously 54 .
Inhibition of platelet signaling pathways
Washed platelets were isolated from WT mice as described above and incubated with different inhibitors of downstream cGMP signaling (Supplementary Table 1) compared to vehicle for 30 min at room temperature. Subsequently, platelets were activated by shaking for 30 min, centrifuged, and supernatant was collected for determination of ANGPT1 release as described above.
Determination of BAY-747 serum/plasma concentration
BAY-747 exposure was quantified in plasma using a liquid chromatography (LC) system for mass separation (Kinetex 5 μm C18 100 Å LC Column, 150 × 4.6 mm) coupled to a Triple Quad 4500 LC-mass spectrometry analyzer (positive mode; AB Sciex). A generic internal standard was added to the samples. A five-point calibration curve and quality control samples were used for relative quantification. Plasma was obtained from six mice per group. Data are the mean + s.e.m.
Adoptive transfer of leukocytes
Bone marrow monocytes and neutrophils were isolated simultaneously from Ubc-GFP mice using anti-Ly6G-PE and anti-CD115-biotin antibodies followed by PE- and streptavidin-coated microbeads as stated above and resuspended in PBS. We intravenously injected equal amounts of isolated cells into Ldlr −/− mice fed a Western diet containing 0 or 150 ppm BAY-747 for 6 weeks and collected blood and aortae 24 h later as stated above. The number of CD45.2 high /CD11b high /GFP high cells within the aorta normalized to the exact number of injected cells was quantified by flow cytometry.
Statistical analysis
Normality of the data was assessed using the Kolmogorov-Smirnov test or, for sample sizes n < 5, the Shapiro-Wilk test. Test results and the subsequently used statistical tests are displayed in Supplementary Table 2. Data were analyzed using a two-tailed Student's unpaired or paired t-test (for normally distributed data) or the Mann-Whitney U-test (for non-normally distributed data), as appropriate and as indicated in the respective figure legend and Supplementary Table 2. When comparing more than two groups, a repeated measures one-way analysis of variance (ANOVA) followed by a Tukey test for multiple comparisons, or mixed-effects analyses, were performed, as appropriate, when data were normally distributed. To determine statistical outliers, the two-sided ROUT test was used; if outliers were removed from an analysis, this is indicated in the respective figure legend. Sample sizes/numbers of replicates are indicated in the figure legends and visualized in the figures (each symbol represents one animal or biological replicate), and data are displayed as the mean + s.e.m. P values <0.05 (after adjustment for multiple testing when more than two groups were investigated) were regarded as statistically significant. Statistical analyses were performed with Prism v.9 for macOS (GraphPad Software).
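A minimal sketch of the decision rule described above (normality check, then parametric or nonparametric test), assuming SciPy. It is an illustration of the logic, not the authors' Prism workflow; the n < 5 cutoff for switching to Shapiro-Wilk follows the text.

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Pick the two-sample test as in the text: t-test if both groups
    pass a normality check, otherwise Mann-Whitney U."""
    def looks_normal(x):
        if len(x) < 5:                       # small samples: Shapiro-Wilk
            return stats.shapiro(x).pvalue > alpha
        z = stats.zscore(x)                  # larger samples: K-S vs N(0,1)
        return stats.kstest(z, "norm").pvalue > alpha
    if looks_normal(a) and looks_normal(b):
        return "unpaired t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

print(compare_groups([1.2, 1.5, 1.1, 1.4, 1.3, 1.6],
                     [2.0, 2.4, 1.9, 2.2, 2.1, 2.5]))
```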
Extended Data

Extended Data Fig. 8 | Angiopoietin-1 (ANGPT1) release by control or Pf4-Cre + Irag1 LoxP/LoxP platelets after activation by shaking. Each symbol represents one independent animal (n = 3). Two-sided unpaired t-test. Data are the mean and s.e.m. Abbreviation: ADP, adenosine diphosphate.

a, Serum cholesterol levels (n = 12). b, Platelet count (n = 9 in 0 ppm, n = 8 in 150 ppm group). c, Blood leukocytes (n = 9 in 0 ppm, n = 8 in 150 ppm group). d, Body weight after (n = 12). e, Plasma concentrations of the soluble guanylyl cyclase stimulator BAY-747 in treated mice. f, Blood leukocyte numbers and subsets. Each symbol represents one independent animal (n = 12). Two-sided unpaired t-test. Two outliers were removed in the analysis of f, neutrophils (150 ppm group; n = 10) and Ly6C high monocytes (150 ppm group; n = 11), according to the ROUT method. Data are the mean and s.e.m.

a-c, Atherosclerotic plaque formation as assessed by aortic root histology (a), aortic en face ORO staining (b), and monocyte and macrophage content (c) in 12 (b: 8) Pf4-Cre + Gucy1b1 LoxP/LoxP Ldlr −/− mice compared to 14 (b: 9) Pf4-Cre + Gucy1b1 +/LoxP Ldlr −/− mice that were fed a Western diet for 10 weeks. Two-sided unpaired t-test. d, Leukocyte adhesion to atherosclerotic plaques in n = 13 Pf4-Cre + Gucy1b1 LoxP/LoxP Ldlr −/− mice compared to n = 11 Pf4-Cre + Gucy1b1 +/LoxP Ldlr −/− mice that were fed a Western diet for 6 weeks to induce atherosclerotic plaque formation. Two-sided unpaired t-test. e, Quantification of vascular inflammation by flow cytometry analysis of aortic cell suspensions of Pf4-Cre + Gucy1b1 LoxP/LoxP Ldlr −/− mice compared to Pf4-Cre + Gucy1b1 +/LoxP Ldlr −/− mice (n = 11 per group). Two-sided Mann-Whitney U-test. Each symbol represents one independent animal. Data are the mean ± s.e.m.

a,b, WT monocyte (a) and neutrophil (b) adhesion to WT ECs after incubation with supernatant of activated platelets isolated from either Pf4-Cre + Gucy1b1 LoxP/LoxP or Pf4-Cre + Gucy1b1 +/LoxP mice. Each symbol represents 1 independent animal (n = 8 per group). Two-sided unpaired t-test. Data are the mean ± s.e.m. c, Quantification of neutrophil adhesion after preincubation of either ECs or neutrophils with supernatant of activated platelets from Pf4-Cre + Gucy1b1 LoxP/LoxP mice in comparison to non-preincubation conditions. Each symbol represents 1 paired sample (each derived from n = 8 independent animals). Repeated measures one-way ANOVA with Tukey test for multiple testing. Data are the mean ± s.e.m.

a, Left, identification of ANGPT1 (encircled) as differentially released protein from activated Pf4-Cre + Gucy1b1 +/LoxP and Pf4-Cre + Gucy1b1 LoxP/LoxP platelets. Right, quantification of ANGPT1 signal in 8 Pf4-Cre + Gucy1b1 +/LoxP and 7 Pf4-Cre + Gucy1b1 LoxP/LoxP mice. Two-sided unpaired t-test. b-d, Quantification of platelet ANGPT1 content (n = 6 independent animals) (b), PPP ANGPT1 (n = 5 and 6 independent animals, respectively) (c) and released ANGPT1 as determined in 6 independent animals per group by ELISA (d). Two-sided unpaired t-test. e, WT neutrophil adhesion to WT ECs after incubation with supernatant of activated WT platelets in the absence and presence of the Tie2 inhibitor BAY-826 (0.5 μM). Two-sided paired t-test on n = 11 sample pairs derived from independent animals. f, Platelet ANGPT1 release in five humans carrying the GUCY1A1 (rs7692387) non-risk (AA, AG genotype) allele and five homozygous carriers of the risk allele (GG genotype). Each symbol represents one individual. Two-sided unpaired t-test. Data are the mean ± s.e.m. g, STARNET coexpression modules containing ANGPT1 from multitissue RNA-seq sampling of approximately 600 patients with CAD. The arrow denotes coexpression module 11 from whole-blood samples (BLOOD). h, Heatmap of Pearson's correlation coefficients of genes in coexpression module 11, showing positive correlation of ANGPT1, GUCY1A1 and GUCY1B1 along with enrichment for platelet activation genes (false discovery rate = 6.863 × 10 −13 , Enrichr, KEGG pathway). AOR, aorta; LIV, liver; MAM, mammary artery; SF, subcutaneous fat; SKLM, skeletal muscle; VAF, visceral fat.

a, ANGPT1 release by WT platelets incubated with 150 ppm BAY-747 or vehicle (n = 4 sample pairs from independent animals). b, WT neutrophil adhesion to WT ECs after incubation with supernatant from n = 7 activated WT platelets from independent animals that were preincubated with either vehicle or 150 ppm BAY-747. c, Inhibition of IP 3 -R (10 μM 2-APB), PKC (5 μM Ro 32-0432), PKG (10 μM KT-5823) and measurement of ANGPT1 release from platelets (n = 6). One outlier was removed from the PKG group (n = 5) according to the ROUT test. d, Inhibition of mitogen-activated protein kinase (MAPK) (10 μM VX-702), cyclin-dependent kinase (CDK) 2/5/9 (100 nM dinaciclib), extracellular signal-regulated kinase (ERK) (10 μM ravoxertinib), and Akt kinase (1 μM MK-2206) and measurement of ANGPT1 release from platelets (n = 6). Each symbol represents paired samples derived from independent animals. a,b, Two-sided paired t-test. c,d, Mixed-effects analysis (ANOVA with Dunnett multiple comparison test). Data are the mean ± s.e.m.

a, Adoptive transfer of GFP + leukocytes: study scheme, flow cytometry plot (left) and quantification (right) of GFP + cells (n = 8 independent animals). b, Aortic root atherosclerotic plaques in Ldlr −/− mice that were fed a Western diet for 10 weeks containing 0 (control group, n = 9) or 150 ppm BAY-747 (n = 7). c, CD11b + area of aortic roots in mice from the control (n = 11) and treatment groups (n = 8). d, Quantification of vascular inflammation by flow cytometry analysis of aortic cell suspensions of mice in the control (n = 12) and treatment groups (n = 12). Each symbol represents one independent mouse. Two-sided unpaired t-test. Data are the mean ± s.e.m. One outlier was removed in d in the analysis of neutrophils (150 ppm; n = 11) according to the ROUT test.

If sGC levels are reduced, for example, in the mouse model used in this study or in platelets of homozygous carriers of the CAD-associated risk variant, less ANGPT1 is released. Subsequently enhanced EC activation and leukocyte recruitment contribute to atherosclerotic plaque formation. This figure contains modified image material available at Servier Medical Art under a Creative Commons Attribution 3.0 Unported License.

Supplementary Material

Refer to Web version on PubMed Central for supplementary material.
The Effect of Shorter Treatment Regimens for Hepatitis C on Population Health and Under Fixed Budgets
Abstract. Background: Direct acting antiviral hepatitis C virus (HCV) therapies are highly effective but costly. Wider adoption of an 8-week ledipasvir/sofosbuvir treatment regimen could result in significant savings but may be less efficacious compared with a 12-week regimen. We evaluated outcomes under a constrained budget and the cost-effectiveness of 8 vs 12 weeks of therapy in treatment-naïve, noncirrhotic, genotype 1 HCV-infected black and nonblack individuals and considered scenarios of IL28B and NS5A resistance testing to determine treatment duration in sensitivity analyses. Methods: We developed a decision tree to use in conjunction with Monte Carlo simulation to investigate the cost-effectiveness of recommended treatment durations and the population health effect of these strategies given a constrained budget. Outcomes included the total number of individuals treated and attaining sustained virologic response (SVR) given a constrained budget and incremental cost-effectiveness ratios. Results: We found that treating eligible (treatment-naïve, noncirrhotic, HCV-RNA <6 million copies) individuals with 8 weeks rather than 12 weeks of therapy was cost-effective and allowed 50% more individuals to attain SVR given a constrained budget among both black and nonblack individuals, and our results suggested that NS5A resistance testing is cost-effective. Conclusions: Eight-week therapy provides good value, and wider adoption of shorter treatment could allow more individuals to attain SVR on the population level given a constrained budget. This analysis provides an evidence base to justify movement of the 8-week regimen to the preferred regimen list for appropriate patients in the HCV treatment guidelines and suggests expanding that recommendation to black patients in settings where cost and relapse trade-offs are considered.
We examined the economic value associated with 8- and 12-week treatment regimens. We considered treatment for black patients and nonblack patients and considered strategies for identifying patients best treated with 12 weeks of LDV/SOF, including testing for host-related factors (interleukin-28B [IL28B] genotype) or virus-related factors (NS5A resistance). We identified threshold treatment efficacies and costs that changed conclusions, and we considered the decision assuming both an open treatment budget and a fixed-capacity system.
Model Structure
We first built a decision tree to describe the effectiveness of the 8- and 12-week strategies in black and nonblack patients (Figure 1). The model begins with treatment-eligible patients presenting for treatment. The efficacy of either the 8- or 12-week course of LDV/SOF determines the sustained virologic response (SVR) rate after the first round of therapy. Those who fail first-line therapy are either retained in care with salvage therapy of sofosbuvir/velpatasvir/voxilaprevir for 12 weeks or are lost to follow-up [13]. Those retained either attain SVR or fail therapy and never attain SVR. The decision tree estimates per-person therapy costs of first- and second-line therapy and estimates the proportion of the population achieving SVR.
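The decision-tree arithmetic for the proportion attaining SVR reduces to one line: first-line cures plus the share who fail, remain in care, and are cured by salvage. A sketch with illustrative inputs (the paper's actual parameters are in Table 1; the 24% retention and 97.3% salvage SVR echo figures reported later):

```python
def overall_svr(svr_first, p_retained, svr_salvage):
    """Overall SVR = first-line cures + (failures retained in care
    who are then cured by 12 weeks of salvage therapy)."""
    return svr_first + (1.0 - svr_first) * p_retained * svr_salvage

# Illustrative: 8-week first-line SVR 96%, 24% retention, 97.3% salvage SVR.
print(overall_svr(0.96, 0.24, 0.973))   # ~0.969
```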
Next, we used the HEP-CE model to estimate the lifetime medical costs and quality-adjusted life-years (QALYs) of each strategy, discounted 3% annually [14]. The HEP-CE Model is a Monte Carlo lifetime simulation of HCV infection, screening, and treatment, summarized in greater detail in the published literature [15][16][17]. The model inputs are summarized in Table 1. Where possible, inputs were informed by the relevant clinical trials.
Cost-effectiveness Analysis
In the cost-effectiveness analysis, we simulated a cohort without constrained treatment capacity, evaluating the effect of treating all individuals. The model estimates the lifetime cost and QALYs per person. We modeled the effect of no treatment, treatment with an 8-week LDV/SOF regimen, and treatment with a 12-week LDV/SOF regimen. We sorted regimens in order of increasing lifetime cost and then calculated the incremental cost and QALYs associated with increasingly expensive strategies compared with the next least costly strategy. We calculated incremental cost-effectiveness ratios (ICERs) by dividing the incremental cost by the incremental QALYs. We used the commonly cited US willingness-to-pay threshold of $100 000 per QALY gained to interpret ICERs [14].
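A minimal sketch of the ICER bookkeeping just described: sort strategies by lifetime cost and divide incremental cost by incremental QALYs against the next least costly option. The no-treatment numbers below are invented placeholders; the 8- and 12-week figures echo the rounded values reported later in the Results.

```python
def icers(strategies):
    """strategies: list of (name, lifetime_cost, qalys).
    Returns (name, ICER vs next least costly strategy) pairs."""
    s = sorted(strategies, key=lambda t: t[1])
    out = []
    for (_, c0, q0), (name, c1, q1) in zip(s, s[1:]):
        dq = q1 - q0
        out.append((name, (c1 - c0) / dq if dq > 0 else float("inf")))
    return out

strategies = [("no treatment", 208_000, 13.5),      # placeholder values
              ("8-week LDV/SOF", 226_000, 15.2),
              ("12-week LDV/SOF", 244_000, 15.21)]
for name, icer in icers(strategies):
    print(f"{name}: ${icer:,.0f}/QALY")
```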
Budget Constrained Analysis
We evaluated the effect of each treatment strategy by assuming a fixed budget constraint. This analysis assumed the budgetary perspective of a public payer, department of health, or department of corrections with a fixed pharmacy budget and therefore considered only the costs of first- and second-line therapy.
As an example, we chose a $10 000 000 fixed budget. Using the decision tree from the cost-effectiveness analysis, we found the maximum number of individuals who could be treated while keeping the budget at or below the constraint.
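Under a fixed pharmacy budget, the maximum cohort size follows directly from the decision tree's expected per-person drug spend (first-line course plus salvage for failures retained in care). The sketch below uses hypothetical prices; the failure and retention probabilities echo figures reported elsewhere in the paper, and the actual costs come from the Federal Supply Schedule.

```python
import math

def max_treated(budget, cost_first, p_fail, p_retained, cost_salvage):
    """Largest cohort whose expected first- plus second-line drug
    spend stays at or below the fixed budget."""
    expected = cost_first + p_fail * p_retained * cost_salvage
    return math.floor(budget / expected)

WEEKLY = 4_725                 # hypothetical $/week of LDV/SOF
SALVAGE = 75_000               # hypothetical salvage course cost
print(max_treated(10_000_000, 8 * WEEKLY, p_fail=0.04,
                  p_retained=0.24, cost_salvage=SALVAGE))   # 8-week arm
print(max_treated(10_000_000, 12 * WEEKLY, p_fail=0.011,
                  p_retained=0.24, cost_salvage=SALVAGE))   # 12-week arm
```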
Scenario Analyses
We also explored 2 potential testing strategies that could evaluate who would have success using an 8-week regimen and who may benefit from longer therapy: detecting interleukin-28B (IL28B) polymorphisms and testing for the prevalence of NS5A RAS [18]. The difference in efficacy observed between black and nonblack patients has been in part attributed to differences in IL28B polymorphisms [19]. While the role of IL28B in pegylated interferon-alpha treatment was well established, the effect of IL28B polymorphisms on treatment with direct acting antivirals (DAAs) such as LDV/SOF has been attenuated; however, differences in SVR rates among IL28B genotypes persist [19,20]. Some studies suggest that certain NS5A RAS can affect SVR achievement, although NS5A resistance testing is rare [18]. We evaluated the effect of IL28B testing and treating individuals with either an 8-week (CC genotypes) or 12-week (non-CC genotypes) course of therapy. Next, we considered a scenario in which all patients received HCV viral genotyping for NS5A ledipasvir-specific RAS, including substitutions at the following positions: K24G/N/R, M28A/G/T, Q30E/G/H/L/K/R/T, L31I/F/M/V, P32L, S38F, H58D, A92K/T, and Y93C/F/H/N/S in genotype 1a patients and L31F/I/M/V, P32L, P58D, A92K, and Y93C/H/N/S in genotype 1b patients [18]. Our approach to parameterizing these analyses is covered in the Supplementary Appendix.

Figure 1 is a schematic of our model. All individuals begin the analysis ready for treatment. Following first-line therapy, individuals' chance of attaining SVR is based on the efficacy of the first-line regimen. Those failing to achieve SVR are either retained in care or lost to follow-up. Those lost to follow-up never achieve SVR.

To further evaluate the robustness of our results, we conducted several sensitivity analyses. We varied the efficacy and cost of therapy to determine the effect of price reductions and evaluated the effects of retention, the age of the cohort, and the availability of salvage treatment.
Cost-effectiveness Analysis
The cost-effectiveness was similar among black and nonblack patients as regimen costs were the same and efficacy rates were similar (Table 2). Among all patients, an 8-week course of LDV/SOF resulted in a discounted lifetime medical cost of $226 000 and a quality-adjusted lifetime expectancy of 15.2 QALYs, yielding ICERs of less than $11 000/QALY (Table 2). When employing the 8-week regimen, 97% of black patients ultimately attained SVR compared with 98% of nonblack patients. Four percent of black patients and 3% of nonblack patients needed a second course of therapy because they failed the 8-week regimen, 0.03% of black patients and 0.02% of nonblack patients were left without treatment options after failing both HCV treatment regimens, and 2.8% of black patients and 2.4% of nonblack patients were lost to follow-up between the first and second lines of therapy. A 12-week regimen resulted in fewer patients requiring retreatment (1.1% vs 3.7% for black patients and 2.9% vs 3.1% for nonblack patients), fewer patients being left without treatment options (0.01% vs 0.03% for black patients and 0.020% vs 0.022% for nonblack patients), and fewer patients lost to follow-up (0.8% vs 2.8% for black patients and 2.2% vs 2.4% for nonblack patients). However, the 12-week regimen increased costs by $18 000 per black patient and $19 000 per nonblack patient, with a commensurate increase in QALYs of less than 0.1 per black patient and less than 0.01 per nonblack patient, yielding an ICER compared with the 8-week regimen of $212 000/QALY for black patients and $2 850 000 for nonblack patients (Table 2).
Budget Constrained Analysis
In the presence of a fixed pharmacy budget of $10 000 000, 261 patients could be treated with an 8-week regimen, with 254 black and 255 nonblack patients attaining SVR, while 175 could be treated under the 12-week regimen, with 174 black and 171 nonblack patients attaining SVR. While the 12-week strategy yielded a higher probability of SVR among those who were treated, using an 8-week regimen allowed almost 50% more individuals to be cured (Table 2).
IL28B Testing Scenario Analyses
In black patients, 8-week therapy had a lifetime cost of $227 000 and 15.0 QALYs for an ICER of $11 000 compared with no treatment, and 93.9% of patients reached SVR (Table 3). Treating based on the IL28B polymorphism increased costs by $16 000 and quality of life by less than 0.1 QALYs, resulting in an ICER of $190 000 compared with an 8-week regimen for all patients without IL28B testing (Table 3). Treating all patients with 12-week therapy was slightly more expensive, increasing costs by $2000 and quality of life by just under 0.01 QALYs, producing an ICER of $267 000 compared with the IL28B testing strategy (Table 3). Among nonblack patients, where the prevalence of IL28B non-CC polymorphisms is low relative to black patients, IL28B testing was a dominated strategy.

Table 1 (excerpt). Model inputs: parameter — base case (range) [source]
Proportion male, % — 60 (0-100) [12]
Average age at HCV infection, y — 26 (16-36) [27]
HCV disease progression:
Median y to cirrhosis from infection — 25 (15-35) [28,29]
Median y to first liver-related event after cirrhosis — 11 (6-17) [30]
Liver-related mortality with compensated cirrhosis, deaths/100 PY — [38,39]
In this cohort, treating all patients with an 8-week course had an ICER of $11 000 compared with no treatment, while the 12-week regimen had an ICER of $212 000 compared with the 8-week regimen (Table 3). In both black and nonblack patients, an 8-week treatment course was preferred to treating patients based on the results of an IL28B test.
NS5A Testing Scenario
We found that 8-week therapy cost $227 000 with 15.1 QALYs, yielding an ICER of $10 900 compared with no treatment (Table 3). Treating patients based on NS5A RAS increased costs by $3230 and quality of life by 0.06 QALYs, resulting in an ICER of $56 500. Treating all patients with a 12-week regimen increased costs by $14 400 over the NS5A testing strategy and increased QALYs by 0.09, producing an ICER of $164 000. With an ICER of less than $100 000 per QALY, administering an NS5A test and treating based on RASs was preferred to treating all patients with either an 8-or 12-week treatment course regardless of RAS. We found that NS5A testing was cost-effective as long as the SVR rate for 8 weeks of therapy in patients with RAS conferring more than 100-fold resistance to ledipasvir was 88% or less.
Sensitivity Analyses
Two-way sensitivity analyses identified thresholds of 8-week treatment efficacy and salvage therapy efficacy where 12-week LDV/SOF therapy is preferred (Figure 2). Assuming that salvage therapy cures 97.3% of those who fail an 8-week course of LDV/SOF, the 8-week treatment regimen was preferred from a cost-effectiveness perspective unless 8-week treatment efficacy was <93.4% for black patients or <91.6% for nonblack patients. In the extreme case of a completely ineffective salvage therapy (SVR = 0%), we found that 8-week therapy remained preferred from a cost-effectiveness perspective as long as the 8-week regimen resulted in an SVR greater than 94.5% for black patients and 92.7% for nonblack patients. With a constrained budget, 8-week treatment resulted in more individuals cured unless the efficacy of 8-week therapy was <65.9% for black patients and <64.7% for nonblack patients. Next, we found that when the monthly cost of LDV/SOF was $8883 (47% of the current Federal Supply Schedule cost of $18 900), 12-week therapy was the preferred strategy for black patients. Because the 8-week and 12-week efficacies were similar for nonblack patients, the 12-week regimen was not preferred unless the monthly price of LDV/SOF fell to less than 4% ($750) of the Federal Supply Schedule cost. When we varied retention in care after failing an 8-week regimen, we found that at any level of follow-up for salvage therapy (0%-100%) the 8-week regimen remained preferred, likely because first-line therapy is so efficacious.
The findings were robust in all other deterministic sensitivity analyses, including changing the efficacy and cost of therapy, retention, the age of the cohort, and the availability of salvage treatment (Supplementary Appendix).
DISCUSSION
This cost-effectiveness analysis, both with and without a fixed budget constraint, demonstrates that among treatment-naïve, genotype 1 HCV-infected individuals without cirrhosis, an 8-week treatment regimen provides good value for the money and is preferred to a 12-week regimen in both black and nonblack patients. While 8-week treatment results in more treatment failures, resources invested in extending therapy to 12 weeks would likely be more productively invested in other HCV-related health care interventions, such as expanding HCV screening or improving HCV linkage to care. We found that 8 weeks of therapy was preferred even though our rate of retreatment was low (24%) [13]; additional investments in linkage to care would likely increase the attractiveness of 8 weeks of treatment. Furthermore, when presented with a fixed budget constraint, the 8-week regimen results in nearly 50% more individuals attaining SVR than the 12-week regimen, yielding better population outcomes. This finding is particularly relevant for health systems faced with a fixed budget, such as correctional systems or Medicaid, and this type of analysis could be useful to settings outside of the United States grappling with similar cost/efficacy trade-offs. In scenario analyses, however, we demonstrate that NS5A testing might be a good strategy for both controlling cost and minimizing poor outcomes. Guidelines should consider the value of NS5A testing, and future research should evaluate the real-world performance of such an individualized approach.
The American Association for the Study of Liver Diseases (AASLD)/Infectious Diseases Society of America (IDSA) HCV treatment guidance recently added the 8-week LDV/SOF regimen to the recommended regimen list for treatment-naïve, genotype 1 HCV-infected individuals without cirrhosis who have a baseline HCV RNA <6 million IU/mL. However, there are caveats regarding which individuals are best suited for this shortened course of therapy; thus many clinicians remain concerned about mandating shortened treatment courses that can increase the risk of relapse for their individual patients. However, early trials showing decreased efficacy did not limit 8-week treatment to patients with HCV RNA <6 million copies, as would later be recommended. When we considered efficacy stratified by RNA, the relative efficacy rates of 8-and 12-week therapy in both black and nonblack patients were similar. Furthermore, we provide additional strategies here that may improve provider comfort with patient-tailored approaches.
While differences in treatment outcomes by race persist in the era of DAA treatment, these differences are less dependent on the IL28B polymorphism [21]. In a scenario considering the usefulness of IL28B testing to prioritize black patients for 8 vs 12 weeks of LDV/SOF, we found that treating based on IL28B polymorphism is not preferred from a cost-effectiveness standpoint, likely because the test does not provide adequate information to risk stratify. It is possible that the linked polymorphism IFNL4-ΔG/TT (rs368234815) may provide more resolution, especially in black patients [22]; however, commercial testing is not yet available. In contrast, our results suggest that baseline testing for NS5A RAS that convey >100-fold ledipasvir phenotypic resistance is part of a potentially attractive treatment strategy. Work from Sarrazin et al. demonstrated that treating patients who are infected with a virus with baseline NS5A RAS with a 12-week regimen increases SVR by nearly 13 percentage points (from 82.8% with 8 weeks to 95.7% with 12 weeks) [18]. Our model results suggest that this large gain in SVR at a modest test cost provides good economic value and might be the ideal strategy to reduce cost and avoid higher relapse.

Figure 2 depicts the results of our sensitivity analysis of the effect of 8-week regimen SVR and salvage therapy SVR. The x-axis displays the SVR range of the salvage regimen, and the y-axis depicts the SVR of the 8-week regimen. Holding constant the efficacy of the 12-week regimen, we vary the salvage SVR from 0% to 100% and find the corresponding 8-week regimen efficacy threshold that results in 12-week therapy being preferred. In the figure, the downward sloping line is that threshold, with the shaded region underneath representing where the 12-week regimen is preferred. Areas above each threshold shaded region indicate where the 8-week regimen is preferred. The threshold for black patients is higher compared with nonblack patients because in our primary data source [18] the 12-week efficacy of LDV/SOF was higher among black patients (98.9%) than among nonblack patients (97.1%), which makes 12 weeks of therapy more attractive in general. Abbreviations: LDV, ledipasvir; SOF, sofosbuvir; SVR, sustained virologic response.
While in the current environment we demonstrate that overall 8-week treatment is preferred, there are important caveats. It is possible that future price negotiations and market competition will result in the price of LDV/SOF falling to the point that an additional month of therapy for all patients provides good value and is cost-effective. In our analysis, we find that this occurs at around $8900 for a month of therapy in black patients, approximately 50% of the Federal Supply Schedule price and 25% of the average wholesale price of LDV/SOF [1,23]. It is possible that some insurers or health systems have already crossed this threshold, or may do so with the downward pressure on prices due to competition with the release of the new 8-week regimen of glecaprevir/pibrentasvir. If so, those systems would secure the best possible outcomes by treating all black patients for 12 weeks. Among nonblack patients, the threshold price that results in 12-week therapy being cost-effective is very low and likely not realistic in the near future ($750 per month of therapy).
These data support the decision by the AASLD/IDSA Guidance Panel to recommend the 8-week regimen, regardless of price points, for nonblack patients. While this analysis also supports 8 weeks in black patients, it acknowledges a higher relapse rate with the 8-week regimen and a need for salvage with a second course of approved therapies. Furthermore, in the setting where the cost of LDV/SOF is less than $8900 per month (or $17 800 per treatment), the trade-off of higher relapse and cost is not needed. The guidance panel makes recommendations based on safety and efficacy and does not consider cost per se [24]. Thus, these data are more likely to support population-, health system-, and health insurer-level decisions, where fixed budgets imply that using an 8-week regimen could allow more black patients to be treated.

This analysis has several limitations. First, the price of HCV treatment varies significantly among payers, and there is evidence of large price reductions following negotiations for exclusivity [25]. We attempted to capture this lower cost by using the Federal Supply Schedule, but it is possible this is not the appropriate metric. Next, due to data availability, we had to use heterogeneous data sources for the base case and the 2 scenarios. Although the absolute efficacy values do not always match perfectly among the 2 scenarios and the base case, the relative efficacies are internally consistent. While more research is needed to explore combinations of HCV viral load, IL28B genotype, RAS presence, and fibrosis in depth, we believe our results are a valuable first step in understanding the potential value of different testing and treatment strategies. Finally, while there are a number of treatments available, we focused on LDV/SOF alone as a first-line regimen. While the approval of glecaprevir/pibrentasvir provides another 8-week treatment option primarily in treatment-naïve patients, we believe that LDV/SOF will have continued relevance in the clinic. Price negotiations leading to steep discounts for LDV/SOF make prices difficult to compare, even given the lower published wholesale acquisition cost of glecaprevir/pibrentasvir ($13 200/4 weeks) compared with LDV/SOF ($31 500/4 weeks) [1,25]. LDV/SOF has been available since 2014, and many providers have experience with that regimen. Given the similarity in efficacy between LDV/SOF and glecaprevir/pibrentasvir and the recommendation of LDV/SOF in the AASLD/IDSA treatment guidelines, the 2 regimens will likely continue as competitors. As such, our findings likely apply to glecaprevir/pibrentasvir as well. In particular, there are questions around the role of NS5A resistance in glecaprevir/pibrentasvir that clinical trials were unable to answer [26]. Our finding that NS5A resistance testing is likely cost-effective represents an important research space for maximizing the efficacy of glecaprevir/pibrentasvir in particular populations.
While highly efficacious therapies can cure HCV with few side effects in as little as 8 weeks, many individuals and payers are struggling with the cost. For LDV/SOF, our results indicate that 8-week therapy is cost-effective and can result in better population outcomes in both black and nonblack patients compared with 12-week therapy, even with lower rates of SVR. Future research demonstrating the real-world effectiveness of NS5A testing could improve outcomes still further, while controlling cost. This analysis provides an evidence base supporting the movement of the 8-week regimen to the preferred regimen list for appropriate patients in the HCV treatment guidelines. Wider use of the similarly effective, significantly less expensive 8-week regimen could result in the ability to treat more individuals and improve population health.
Supplementary Data
Supplementary materials are available at Open Forum Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author.
A Comparative Performance Evaluation of Routing Protocols for Mobile Ad-hoc Networks
—Mobile Ad Hoc Network (MANET) is a group of wireless mobile nodes that can connect with each other over a number of hops without the need for centralized management or pre-existing infrastructure. MANET has been used in several commercial areas such as intelligent shipping systems, ad hoc gaming, and smart agriculture, and in non-commercial areas such as military applications, disaster rescue, and wildlife monitoring. One of the main challenges in MANET is routing and mobility management, which seriously affect MANET performance. Routing protocols have been functionally classified into proactive routing protocols, reactive routing protocols, and hybrid routing protocols. The objective of this paper is to draw observations about the advantages and disadvantages of these protocols. To this end, the paper conducts a comparative analysis of the three groups of MANET routing protocols by comparing their features and methods in terms of routing overhead, scalability, delay, and other factors. It is shown that proactive protocols guarantee the availability of routes; however, they suffer from scalability and overhead problems. Reactive protocols initiate route discovery only when data needs to be sent, but they introduce an undesirable delay due to route establishment, which affects network performance. Hybrid protocols attempt to utilize the beneficial features of both reactive and proactive protocols; they are suitable for large networks and keep information up to date, but they increase operational complexity. It is concluded that MANET routing needs enhancement in order to meet the required performance.
The remainder of this paper is structured as follows: Section II describes the general issues and challenges in MANET, routing structure and protocols in MANET are covered in Section III, and Section IV briefly describes the functional classification of routing protocols which includes table-driven, on-demand, and hybrid routing protocols. The structural classification of routing protocols is covered in Section V. The routing protocols are discussed in Section VI with their limitations. The discussion and studies are presented in Section VII. Finally, the conclusion and future works are discussed in Section VIII.
II. GENERAL ISSUES AND CHALLENGES IN MANET
Routing and mobility management are the two key issues with MANET. Routing becomes more challenging due to mobility in MANET, which generally consists of a group of decentralized mobile nodes that move randomly and frequently, causing topology changes [2], [3], [4]. The following subsections summarize the major challenges and issues in MANET.
A. Controlling Flooding
The network can stay operational by constructing new routes through flooding to deliver data using a multi-hopping mechanism. In the flooding procedure, control packets propagate throughout the entire network. As a result, flooding consumes a large share of the network's energy resources when it is used for data transmission. Thus, controlling flooding is one of the major challenges in such networks.
B. Node Mobility Management
Under normal conditions, any two neighboring nodes can exchange data between them (see Fig. 1). Nevertheless, their connection will vanish if either of them leaves the transmission range of the other. Thus, in a MANET with highly mobile nodes, the probability of a link breaking between any two network neighbors is considerable. This is another significant obstacle in such networks. Due to MANET's dynamic nature, the network topology frequently changes, which results in frequent connection failures [8], [9], [10]. The network must create new routes, just as in the case of broken links, to ensure data transmission [11], [12]. A dynamic routing system is required to maintain routes between a source and its destination because of the frequent topology changes. Therefore, the reliability and success of MANET depend on the effectiveness of the routing protocol and the quality and usefulness of the collected data [13].
C. Scalability and No Fixed Boundaries
MANET is subject to several challenges such as scalability and no fixed boundaries [14]. MANET is naturally dynamic where mobile nodes arrive and exit arbitrarily without control from a base station (BS) or other central points. Furthermore, as nodes in MANET join and leave arbitrarily, the number of nodes and the size of the network can grow erratically which introduces a heavy burden on the routing mechanism. Consequently, scalability becomes a major issue in MANET [15].
D. Node Density
The density of nodes in regions such as a national or urban park, where high density is present, compared with highways, where the density varies from high to low depending on rush hour times, should be considered [16]. Modeling the mobile nodes and communication links is one of the problems in MANET. Such modeling can provide valuable information regarding the pattern or behavior of wireless transmission under different situations, as wireless transmissions in a MANET operating in a flat open environment can differ from such transmissions in an ad hoc network of nodes placed in a building [17].
The scatter or the distribution of nodes in a geographical area affects the efficiency of routing, especially when there are a lot of middle nodes between the source and the destination. In Fig. 2, where S and D denote the source node and the destination node correspondingly, the light gray area shows the potential flooding and the dark area shows the potential intermediate nodes involved in routing.
E. Security Concerns
Besides the technical problems mentioned above, security is a significant problem in MANET, where trust relationships must be established [18], [19]. It is crucial to note that using several hops can cause a problem because it enables unauthorized individuals to intercept data illegally. In addition, there is intentional electronic interference or unintentional interference occurring while many nodes share the same air interface domain. The major challenges and issues in MANET are summarized in Table I.

TABLE I. Major Challenges and Issues in MANET
Scalability and No Fixed Boundaries — As nodes in MANET join and leave arbitrarily, the size of the network can grow erratically, which introduces a heavy burden on the routing mechanism.
Node Density — The scatter or distribution of nodes in a geographical area affects the efficiency of routing.
Security Concerns — The use of multiple hops can be problematic since it makes it easier for an unauthorized person to intercept data.
III. ROUTING STRUCTURE AND PROTOCOLS IN MANET
The routing process in MANET is responsible for discovering, establishing, and maintaining a route between two mobile nodes. Routing of packets can be performed using either a single-hop or a multi-hop paradigm. In the single-hop paradigm, the destination node is assumed to be within the communication range of the source node; thus, the source can communicate with its destination directly. In the multi-hop paradigm, the source node communicates with its destination through intermediate nodes, as the destination is out of the communication range of the source node [20].

MANET is considered a multi-hop network where mobile nodes in the network collaboratively help in forwarding the data or control packets between the source node and its destination. The mobile nodes are involved in the discovery of routes, and once a route is found, the intermediate mobile nodes on it have key roles in maintaining it. Therefore, routing protocols should be capable of managing routing in MANET efficiently. There are several difficulties in establishing a route between source and destination nodes through intermediate nodes, including low bandwidth, limited coverage and connectivity due to limited transmission range, higher error rates, a high possibility of interference, power consumption, the absence of a centralized mechanism for routing, and frequent network topology changes due to mobility.
A lot of protocols have been developed for routing in MANET. These routing protocols can be classified functionally and structurally according to their routing processes and structures.
IV. FUNCTIONAL CLASSIFICATION OF ROUTING PROTOCOLS
According to the methods that are used in discovering and maintaining routes, routing protocols in MANET are categorized into three groups: table-driven (proactive) routing protocols, on-demand (reactive) routing protocols, and hybrid routing protocols [4], [14], [21], [22].
A. Table Driven Routing Protocols
Table-driven protocols, also called proactive protocols, are developed based on the link-state and distance-vector routing techniques that are traditionally used on the Internet. The main characteristic of this type of protocol is that they are proactive in the sense that every mobile node maintains an updated routing table to any other node in the network. Therefore, each node should periodically exchange routing information with other nodes in order to keep its routing table up-to-date, whether the routes are in use or not [23], [24]. The frequency of updating the routing tables is crucial. Frequent updates can reflect the state of the network accurately and make the routing process robust to dynamic changes in the network; however, the bandwidth usage for exchanging routing information will be high. This leaves little bandwidth for delivering data packets, which considerably affects throughput at the destination nodes. Furthermore, it causes the Broadcast Storm Problem (BSP) [25], [26], as the network will be flooded with routing information updates. Hence, the bandwidth for sending data packets will be reduced significantly, especially in MANET with high node density. On the other hand, as table-driven protocols ensure that routes to destinations are always available, they reduce the delay in sending data packets once required. In reaction to network topology changes, each proactive protocol reacts differently according to its routing structure, the size of the routing table, and the frequency of routing information updates.
B. On-demand Routing Protocols
On-demand routing protocols, also called reactive routing protocols, were developed to address the scalability and overhead problems presented by table-driven routing protocols. The aim is to save bandwidth by reducing the number of control messages sent across the network. Therefore, a route to a destination is only looked up when the higher protocol levels demand it, in contrast to the periodic search for routes and their updating in proactive protocols. Consequently, the routing overhead is decreased significantly, which makes reactive protocols more suitable for mobile network environments [15]. There are two main processes in reactive routing: route discovery and route maintenance. When a source node (S) needs to forward data, it first searches its routing table to examine whether it has a route to the desired destination (D). If no route is found, a route discovery procedure is initiated in order to discover a route to the destination. In route discovery, the source node floods the network by broadcasting Route Request (RREQ) packets as shown in Fig. 3 [27]. When the destination, or an intermediate node that has an active path to the destination, receives the RREQ packet, it broadcasts or unicasts a Route Reply (RREP) back to the source node. The route maintenance process starts when the route that is currently used to transport data is disconnected. The node that detects the route failure may repair the route using its local repair process, or otherwise forward a Route Error (RERR) packet to the source node, which will initiate a new route discovery attempt. The main differences between proactive and reactive routing methods in MANET are shown in Table II [28]. Reactive routing protocols can be classified into two classes: hop-by-hop and source-based routing protocols. Source-based routing methods convey the whole path to the destination, while hop-by-hop routing protocols hold only the destination and next-hop addresses in their data packet headers.
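The flood-and-reply idea can be shown in a few lines: RREQ propagation is effectively a breadth-first traversal in which each node remembers the neighbor it first heard the request from (the reverse path), and the RREP retraces that path. The sketch below is a toy model of the mechanism, not an implementation of any particular protocol.

```python
from collections import deque

def route_discovery(adj, src, dst):
    """Toy RREQ flood: BFS over the current topology; each node records
    the neighbor it first received the RREQ from (reverse path), and
    the destination's RREP is routed back along that path."""
    prev = {src: None}
    q = deque([src])
    while q:
        node = q.popleft()
        if node == dst:                      # destination: build RREP path
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]                # route carried by the RREP
        for nbr in adj[node]:                # neighbors rebroadcast the RREQ
            if nbr not in prev:
                prev[nbr] = node
                q.append(nbr)
    return None                              # no route: source would retry

topology = {"S": ["A", "B"], "A": ["S", "C"], "B": ["S", "C"],
            "C": ["A", "B", "D"], "D": ["C"]}
print(route_discovery(topology, "S", "D"))   # ['S', 'A', 'C', 'D']
```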
C. Hybrid Routing Protocols
Hybrid routing is a combination of distance-vector routing and link-state routing. Thus, hybrid routing protocols share the properties and useful features of both reactive and proactive protocols. These protocols are developed to increase scalability and improve routing in MANET by determining the optimal routes to a destination and reporting network topology changes only when they occur [29]. In cases where connectivity to nearby nodes should be maintained, proactive routing is used, while reactive routing can be used when routes to remote nodes are required. This minimizes the periodic propagation of routing information and may provide accurate and reliable routes for transmitting data packets to their intended destination. Moreover, these protocols are able to reduce the number of rebroadcasting nodes in the network using different hierarchical strategies [30]. These strategies enable the nodes to organize themselves to provide effective routing where only selected nodes are used to perform route discovery. Nevertheless, the disadvantage of these protocols is that their efficiency depends on the number of nodes activated in the network. In addition, the gradient of traffic volume plays an important role in reacting to traffic demand. Compared with reactive or proactive protocols, hybrid routing protocols are naturally more complex and require a high level of computation to investigate their performance in large MANET.
V. STRUCTURAL CLASSIFICATION OF ROUTING PROTOCOLS
Based on their routing structures, routing protocols in MANET can be classified into three categories: flat, hierarchical, and geographic position routing protocols [31]. Every protocol in these categories performs routing proactively, reactively, or both. For example, flat routing protocols can be reactive, such as AODV and DSR, or proactive, such as DSDV and OLSR. In hierarchical routing protocols such as ZRP, nodes are grouped into zones (cluster-based) or trees, which helps in limiting the flooding area during the route discovery process. In hierarchical routing, the group leader is responsible for routing management within its group, which can reduce the global exchange of routing information (overhead) and the size of routing tables [32]. In addition, hierarchical routing protocols scale better than flat routing protocols in large MANET. Nonetheless, these protocols cause high overhead in highly dynamic MANET due to the frequent reconstruction of zones and cluster head elections [33]. Geographic position routing protocols such as ZHLS require each mobile node to be equipped with GPS in order to acquire its location information when needed. In geographic position routing protocols, data are sent to all mobile nodes in a particular region using geographical information and routing. Hence, the propagation of routing information to the whole network is obviated. The use of geographical information allows these protocols to adjust to topology changes quickly. However, high overhead is introduced due to the address-to-location mapping procedure.
VI. LITERATURE REVIEW
This section discusses the previously mentioned routing methods along with their limitations in terms of the route discovery process.
Some of the table-driven routing protocols, like Optimized Link State Routing (OLSR) [34], [35], Mobility based OLSR (Mob-OLSR) [36], [37], and Fisheye State Routing (FSR) [38], [39], are developed based on the link-state routing algorithm, where nodes maintain the link-state cost to their neighbouring nodes [40]. Other routing protocols in this category, such as Destination Sequenced Distance Vector (DSDV) [41] and Wireless Routing Protocol (WRP) [42], were developed based on distance-vector routing, where the shortest paths to a destination are checked and maintained periodically by every node. The DSDV routing protocol [41], which is a table-driven routing mechanism based on the Bellman-Ford algorithm [43], was developed to overcome routing loop problems based on the sequence number of each route stored in the routing table, which is announced by the destination. Hence, the data packets are routed through the route with the most recent sequence number. DSDV requires consistent updating of the routing tables [44], which utilizes some of the bandwidth even when the network is not used and leads to fast depletion of battery power. DSDV is not appropriate for a very dynamic or large-scale MANET [45] as it needs a new sequence number whenever the network topology changes.
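The DSDV selection rule just described (the freshest destination sequence number wins; ties are broken by the Bellman-Ford hop count) can be written compactly. The sketch below is simplified: it ignores timers, full versus incremental dumps, and update damping.

```python
def dsdv_update(table, dest, advertised_seq, advertised_hops, via):
    """Accept an advertised route if it carries a fresher destination
    sequence number, or the same sequence number with a shorter path."""
    entry = table.get(dest)
    hops = advertised_hops + 1                # one extra hop through 'via'
    if (entry is None
            or advertised_seq > entry["seq"]
            or (advertised_seq == entry["seq"] and hops < entry["hops"])):
        table[dest] = {"seq": advertised_seq, "hops": hops, "next": via}

table = {}
dsdv_update(table, "D", advertised_seq=100, advertised_hops=2, via="B")
dsdv_update(table, "D", advertised_seq=102, advertised_hops=4, via="A")
print(table["D"])   # routes via A: the fresher sequence number wins
```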
Similar to DSDV, WRP [42] was developed to diminish routing loops and ensure reliable message exchange based on the Bellman-Ford algorithm. WRP preserves an up-to-date view of the network by using a set of tables. Maintaining multiple tables requires a significant amount of memory and greater processing power. As WRP uses hello messages to ensure connectivity with neighbours, in highly dynamic MANET the control overhead involved in updating tables is high, and more bandwidth and energy are consumed. Therefore, WRP is not suitable for large MANET since it suffers from limited scalability.
FSR [38], [39] is a link state-based routing protocol that controls overhead by sending out information only about mobile nodes that are within its range. In FSR, a node maintains the link state for every destination in the network by periodically broadcasting update messages to its neighboring nodes. In addition, route updates related to closer nodes are propagated more frequently. FSR provides good packet delivery when mobility in a MANET is low. However, in highly dynamic MANET where the network topology changes repeatedly, FSR presents inaccurate routing information about destinations, which makes it unsuitable for large MANET.
OLSR [35], [46] is a proactive link-state routing protocol that discovers and propagates information using Topology Control (TC) and hello messages. OLSR is a shortest-path-first-based algorithm. OSPF (Open Shortest Path First) floods the topology data using a reliable algorithm that is not suitable for the nature of MANET. Accordingly, OLSR is considered an unreliable protocol for a highly dynamic MANET. Also, it does not sense the quality of a route; it simply assumes that the route is active if some of the hello packets have been received properly. Furthermore, OLSR uses many network resources, i.e., bandwidth and energy, that are limited in MANET. The same holds for the enhanced version of OLSR, in which a new technique for measuring node mobility was proposed by [36].
The common on-demand routing protocols in practice are Dynamic Source Routing (DSR) [47], [48], Ad-hoc On-Demand Distance Vector (AODV) [49], [50], Dynamic MANET On-demand (DYMO) [51], [52], Location-Aided Routing (LAR) [53], and the Temporally-Ordered Routing Algorithm (TORA) [54]. DSR [47] is an on-demand reactive routing protocol that uses source routing rather than depending on the routing table information at each intermediate node. DSR has two main mechanisms: route discovery and route maintenance. Route discovery is initiated when a node requests a route to a specific destination. Route maintenance is triggered when a link between two nodes that are involved in the active route breaks down. DSR, like other on-demand routing protocols, floods the network with RREQ packets during the route discovery process. In determining the route to a destination, the addresses of the intermediate nodes between the source and destination are accumulated during the route discovery process, and each node caches the route information. The learned route is used to transmit data packets that contain the address of each node along the path to the destination. DSR controls the bandwidth consumed by control packets by eliminating the periodic update messages required in proactive routing. Load balancing is achieved by using multiple routes, which can also increase robustness, and DSR is beacon-less. However, although DSR performs well in networks with low mobility, its performance degrades significantly in highly dynamic networks [55], [56], [57]. Furthermore, its route maintenance strategy does not locally repair a broken route, and if routes in the cache are stale, inconsistencies can arise when the route is reconstructed. Also, the delay in establishing a connection is higher compared with that of table-driven protocols.
AODV [48], [58] is a hop-by-hop reactive routing protocol that broadcasts discovery packets only when needed. AODV applies destination sequence numbers to find the latest route to the destination, which also helps in avoiding the infinite-loop problem. In addition, the connection establishment delay in AODV is lower. Overhead and contention are reduced since AODV maintains only active routes. However, an old source sequence number held at intermediate nodes can lead to unreliable routes. Also, heavy control overhead can be caused by multiple RREP packets sent in response to a single RREQ and by the use of periodic "HELLO" packets for route maintenance. Furthermore, AODV uses periodic beaconing to keep routing tables updated, which results in unnecessary bandwidth consumption. Moreover, AODV shows better performance in terms of throughput and delay in small MANET with no node mobility, and in dense networks with minimal mobility [59]. However, the quality of its performance decreases as node mobility increases. A mobility-aware approach was added to AODV [60] to improve the management of high mobility in MANET by avoiding the frequent link breakages associated with unstable paths that contain highly mobile nodes. This added feature has shown some enhancement compared to AODV [61].
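The sequence-number rule that keeps AODV loop-free can be illustrated with a short sketch. The field and function names below are hypothetical, and real AODV route-table entries carry more state (lifetime, precursor lists) than shown here.

```python
# Sketch of AODV's freshness rule: accept a route only if it carries a
# higher destination sequence number, or the same number with fewer hops.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Route:
    next_hop: str
    hop_count: int
    dest_seq: int  # destination sequence number, AODV's freshness stamp

def should_update(current: Optional[Route], offered: Route) -> bool:
    """Return True if the offered route should replace the current one."""
    if current is None:
        return True
    if offered.dest_seq > current.dest_seq:
        return True  # strictly fresher information about the destination
    # Equal freshness: prefer the shorter path; never accept stale routes.
    return (offered.dest_seq == current.dest_seq
            and offered.hop_count < current.hop_count)

current = Route(next_hop="B", hop_count=4, dest_seq=10)
print(should_update(current, Route("C", 2, 10)))  # True: same seq, fewer hops
print(should_update(current, Route("C", 1, 9)))   # False: stale sequence number
```

Rejecting the second offer, despite its shorter path, is precisely what prevents nodes from adopting outdated information and forming routing loops.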
DYMO (also known as AODVv2) [51] is designed for dynamic environments such as MANET, where the network topology changes frequently. DYMO shares many benefits of the operational structure of DSR and AODV. DYMO outperforms the AODV and DSR protocols because it uses path accumulation, which reduces RREQs noticeably [62]. DYMO was improved by considering the energy and traffic parameters of the network and showed better performance compared to the original DYMO and AODV routing protocols [63].
LAR [53] is an on-demand routing protocol that uses geographical location information to limit the propagation of RREQ packets to a certain set of nodes rather than flooding the network, which in turn reduces the routing overhead considerably. Node locations are obtained using Global Positioning System (GPS) information and used to define an area called the "Request Zone"; only nodes in this zone are required to forward RREQ packets. This can help in avoiding the broadcast storm [20]. However, connection and tracking problems may appear with the use of LAR, because when a source node has to find a route to a destination, it must first obtain the coordinates of the destination from an external location service. LAR has been enhanced further to improve link availability during routing [64] and to control the overhead found in LAR [65].
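The request-zone test can be sketched as a simple geometric check. The rectangle below follows the commonly described LAR "scheme 1" formulation; the coordinates, names, and expected-zone radius are illustrative assumptions.

```python
# Sketch of LAR's request-zone test: a node forwards a RREQ only if it lies
# inside the rectangle spanned by the source and the expected zone (a circle
# of `radius` around the destination's last known position).

def in_request_zone(node, source, dest_last_known, radius):
    """True if `node` falls in the smallest axis-aligned rectangle that
    contains the source and the circle of `radius` around the destination."""
    (nx, ny), (sx, sy), (dx, dy) = node, source, dest_last_known
    x_min, x_max = min(sx, dx - radius), max(sx, dx + radius)
    y_min, y_max = min(sy, dy - radius), max(sy, dy + radius)
    return x_min <= nx <= x_max and y_min <= ny <= y_max

source, dest = (0.0, 0.0), (10.0, 8.0)
print(in_request_zone((5.0, 4.0), source, dest, radius=2.0))   # True: forward
print(in_request_zone((-3.0, 4.0), source, dest, radius=2.0))  # False: drop
```

Nodes outside the rectangle silently drop the RREQ, which is how LAR trades a little GPS-derived state for a large reduction in flooded control traffic.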
TORA [54] is an adaptive routing protocol designed to restrict the propagation of control messages in the highly dynamic context of mobile computing. In TORA, each node has to explicitly start a query when it needs to forward data to a specific destination. TORA tries to build what is known as a Directed Acyclic Graph (DAG) rooted at the destination. Even though TORA performs well in dense networks, it scales poorly. Several evaluation studies showed that DSR and AODV outperform TORA [66], [67]. TORA was enhanced to provide better packet delivery and acceptable routing overhead and packet latency [66].
SLURP [68] uses GPS instead of a cluster head to manage node locations and coordinate the transmission of data packets. It uses the identification (ID) of the node and the zone ID of the destination to perform routing; SLURP therefore shares the advantages mentioned above. Moreover, it limits the need for flooding, as the nodes within a zone maintain location information about each other and thus know how to find an efficient route to destinations when required. SLURP further limits the overhead of maintaining routing information by restricting route discovery to the home region, a specific zone assigned to each node in the network. The home region is determined by a static mapping function that is known to all nodes in the network. The drawback of SLURP is that it depends on a predefined static zone map.
ZRP [40], [69] was developed to speed up data delivery and reduce processing overhead. In ZRP, mobile nodes are clustered into zones, and communications among nodes are performed according to their locations in the zone. ZRP maintains robust network connectivity within the routing zones using the proactive routing technique, while routes to remote nodes are discovered reactively and faster. Nonetheless, ZRP behaves as a proactive protocol if the routing zone is too large; on the other hand, if the routing zone is too small, ZRP performs as a reactive protocol. Thus, it is important to set the value of the zone radius according to the density of nodes in the network.
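The notion of a hop-count-bounded routing zone can be sketched as follows. The graph, names, and breadth-first computation are illustrative, not ZRP's actual intrazone routing protocol.

```python
# Sketch of ZRP zone membership: a node's routing zone is every node within
# `zone_radius` hops, maintained proactively; destinations outside the zone
# are found reactively via bordercast to peripheral nodes.

from collections import deque

def routing_zone(graph: dict, node: str, zone_radius: int) -> set:
    """Return all nodes reachable from `node` within `zone_radius` hops."""
    zone, frontier = {node}, deque([(node, 0)])
    while frontier:
        current, depth = frontier.popleft()
        if depth == zone_radius:
            continue  # peripheral node: stop expanding here
        for neighbor in graph.get(current, []):
            if neighbor not in zone:
                zone.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return zone

topology = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(sorted(routing_zone(topology, "A", zone_radius=2)))
# ['A', 'B', 'C']: D lies outside the zone, so it would be found reactively
```

The trade-off named in the text falls directly out of this picture: a larger radius enlarges the proactively maintained set (more update traffic), while a smaller radius pushes more destinations into the reactive, discovery-on-demand regime.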
In ZHLS [70], the nodes are divided into non-overlapping zones, and each node is associated with two identifiers: a node ID and a zone ID that is calculated using GPS. Traffic bottlenecks are avoided in ZHLS as it does not require a cluster head to coordinate data transmission. Therefore, there is no processing overhead for electing a cluster head or restructuring a zone in case of a single point of failure. Hence, the communication overhead is reduced significantly compared to the flooding method in reactive protocols. Furthermore, ZHLS can adapt to changes in network topology faster, as it only requires the node ID and the zone ID of the destination when routing data packets. However, for a node to function in ZHLS, it must have a static zone map, which is not practicable in networks like MANET where the geographical boundary is dynamic.
VII. DISCUSSION AND STUDIES
This section discusses the advantages and disadvantages of the routing methods mentioned earlier and presents a comparison between different types of MANET routing protocols in terms of the common parametric evaluation metrics.
Proactive routing protocols have been evaluated theoretically and through simulation [23], [36], [44], [72], [73], [74], [75]. The main advantage of proactive routing is that routes are available whenever needed, with minimal data-delivery delay, since every node maintains routing information for every other node in the network. The main disadvantages of this type of routing are the continuous discovery of routes and the broadcasting of routing information, which introduce high overhead and consume considerable energy and bandwidth. Therefore, table-driven routing protocols are not appropriate for large and highly dynamic MANET, since every node is required to maintain routing table entries for all nodes in the network. Given the nature of MANET, a routing protocol designed for such networks should improve scalability and decrease routing overhead by restricting route computations to situations where a route is needed.
The evaluation of reactive routing protocols [17], [66], [76], [77] showed that reactive routing introduces lower overhead and is loop-free, since routes are only constructed when required, which is an advantage of this type of protocol over proactive routing. The disadvantage of reactive routing is that, due to the initial route discovery process, there is a critical delay between the time a source node requests a route for data transmission and the time when the actual transmission takes place: the source node must wait until a route is found before it can start transmitting its data. Rapid changes in a MANET topology due to mobility may break active routes and cause subsequent route discoveries, which can substantially impact the network's performance. Additionally, the flooding technique used throughout the route discovery phase can result in a broadcast storm.
The performance evaluation of hybrid routing protocols [2], [29], [71], [78], [79] showed that, in comparison with reactive protocols, hybrid protocols can reduce the average length of the routes, both in terms of the number of nodes and the physical length of the route. The overhead cost of hybrid routing was found to be tolerable in most of the evaluation scenarios. However, even though hybrid protocols are suitable for large networks and make up-to-date information available, they increase operational complexity.
VIII. CONCLUSION AND FUTURE WORKS
This paper has discussed data packet routing as the main concern in improving the performance of MANET, where nodes move arbitrarily with no central administration; this places a heavy burden on the routing protocol in use. With regard to a functional classification based on how routes are discovered and maintained, the protocols have been classified into table-driven (proactive) routing protocols, on-demand (reactive) routing protocols, and hybrid routing protocols. It was shown that in proactive protocols, nodes must maintain a routing table for forwarding packets to any other node in the network. This enforces a periodic exchange of information between mobile nodes to keep their routing tables up-to-date, and in this type of protocol, scalability and overhead become serious issues. Reactive protocols, in contrast, initiate route discovery only when there is data to send. However, such a process introduces an undesirable delay between the request for data transmission and the actual transmission of data before route establishment, which affects network performance. Moreover, the flooding procedure used in the route discovery process can cause broadcast storms. Hybrid protocols attempt to utilize the beneficial features of both reactive and proactive protocols to tackle these problems. Nonetheless, it was concluded that while hybrid protocols are suitable for large networks and keep up-to-date information, they increase operational complexity.
In this paper, in connection with the routing methods mentioned earlier, the routing protocols are also classified based on the structure of the network into flat, hierarchical, and geographical position routing protocols, along with a discussion of their performance in terms of some common evaluation metrics. It was concluded that MANET routing needs enhancement in order to satisfy the service quality requirements of user applications with desirable performance. The insights gathered from this study will be useful to researchers, network designers, and professionals who work in this area as they design and optimize future MANETs. Future research should include recommendations for selecting the best routing protocol for various scenarios and a comparative analysis of additional routing protocols, including their advantages and disadvantages. Furthermore, future research could investigate how routing protocols in MANETs can be improved using machine learning and artificial intelligence techniques.
Tailored support for preparing employees with cancer to return to work: Recognition and gaining new insights in an open atmosphere
BACKGROUND: A considerable number of cancer survivors face difficulties in returning to work (RTW). More insight is needed on how to support employees shortly after cancer treatment and help them make the transition back to work. OBJECTIVE: To gain an in-depth understanding of how and under what circumstances a Cancer & Work Support (CWS) program, which assists sick-listed employees with cancer in preparing their RTW, works. METHODS: A qualitative design was used, inspired by Grounded Theory and Realist Evaluation components. Semi-structured interviews were conducted with RTW professionals (N = 8) and employees with cancer (N = 14). Interview themes covered experiences with CWS, active elements, and impeding and facilitating factors. Interviews were transcribed and analyzed by multiple researchers for contextual factors, active mechanisms, and the outcomes experienced. RESULTS: Respondents experienced the support as human centered, identifying two characteristics: ‘Involvement’ (‘how’ the support was offered), and ‘Approach’ (‘what’ was offered). Four themes were perceived as important active elements: 1) open connection and communication, 2) recognition and attention, 3) guiding awareness and reflection, and 4) providing strategies for coping with the situation. Variation in the experiences and RTW outcomes, appeared to be related to the personal, medical and environmental context. CONCLUSION: Both professionals and employees really appreciated the CWS because it contributed to RTW after cancer. This research shows that not only ‘what’ RTW professionals do, but also ‘how’ they do it, is important for meaningful RTW support. A good relationship in an open and understanding atmosphere can contribute to the receptiveness (of employees) for cancer support.
Introduction
In Europe, cancer has increased to more than 3.5 million new cases and nearly 2 million deaths each year [1]. Knowing that many of the newly diagnosed cancer patients are of working age, facilitating return-to-work (RTW) after cancer should be encouraged [2,3], but without pressure [4]. The literature shows that employees diagnosed with cancer are eager to return to normality and leave behind the sick role, and this includes going back to work [5]. Returning to work has additional benefits: it can be a distraction from the illness, meet financial needs, improve quality of life and reinstall a survivor's identity. However, resuming work can be challenging because of the physical and cognitive side effects that are experienced [6]. Psycho-educational support is essential to facilitate RTW [7]. In addition, cancer survivors may feel uncertain and vulnerable or lack self-confidence about RTW [8].
It is well known that RTW rates after cancer can vary according to cancer type, treatment and duration of absence. Also, high demands at work and lack of (social) support can diminish the chances of successful RTW. Supportive measures are therefore required. In their review, De Boer et al. [9,10] distinguished several types of supportive interventions: psycho-educational, vocational, physical, medical and multidisciplinary, with different impacts on RTW. They found that multidisciplinary interventions could enhance RTW of patients with cancer, whereas the outcomes of psycho-educational and vocational interventions are as yet unclear [9,10]. However, good practices for supporting workability after cancer are scarcely known [11]. Recently, Stehle and colleagues [12] reported insufficient evidence to recommend occupational therapy interventions. Also, Algeo et al. [13] pointed at the lack of work-focused interventions to support RTW for women suffering from breast cancer.
Qualitative research is needed to better understand how RTW support is experienced in more detail during the different phases of the RTW process. Moreover, clear information is needed on what should be discussed during the phases after treatment, for instance, when to talk about RTW with a cancer patient/survivor, and when to involve the employer. Previously, a qualitative study yielded that employees with cancer perceived their work absence due to cancer treatment in different ways. While absent from work, cancer survivors mentally prepared their RTW, considering how to become a worker again instead of being a patient. Furthermore, they reflected on their capability, based on their medical situation, and on the support to expect from the workplace [10]. Employers seem to play a key role in supporting the return-to-work (RTW) of their employees and in creating a good working and customized environment. Concurrently, they need support regarding information on cancer, communication with the employee, and arranging adaptations at work [14].
Depending on a country's legislation, employers are obliged to collaborate with an occupational physician regarding RTW. With the Dutch legal requirements in mind, and in cooperation with a National Occupational Health and Safety Service, a supportive method called 'Cancer & Work Support' (CWS) was developed and tested to support (preparing) the return to work of sick-listed employees with cancer. RTW professionals (i.e. social workers and reintegration coaches) offered the CWS to employees directly after cancer treatment. The CWS included three potential and theoretically founded modules: 1) Disease coping, 2) Skills/competences and 3) Resource management. More information on the support provided is given in the Methods section.
The CWS method was based on positive experiences of the JOBS program [15], which was applied in several groups experiencing 'transition in life' [16]. The principle of change in this transition (underlying the JOBS program) is creating mastery experiences, thereby enhancing self-efficacy and improving the ability to deal with obstacles and setbacks [17].
The current qualitative study aims to gain an in-depth understanding of the existing method (Cancer & Work Support) to support sick-listed employees with cancer in preparing their RTW. Gathering knowledge on the experiences with the CWS can help professionals to understand the care and support needs of employees with cancer [18,19]. The question to explore is: How do employees with cancer (receivers) and RTW professionals (deliverers) experience the support provided, regarding (preparing) RTW after cancer? In particular: when and how does the support provided work for employees/professionals?
Design
Using a qualitative design, semi-structured interviews with healthcare professionals, i.e. social workers and reintegration professionals (N = 8), and employees with cancer (N = 14) were conducted and thematically analyzed [20]. The design was inspired by Grounded Theory (GT), using the Qualitative Analysis Guide of Leuven (QUAGOL) [21], and Realist Evaluation (RE) components (searching for contexts, mechanisms and outcomes, yet not looking for causal explanations, since our aim was not to evaluate the CWS as an intervention, but to know when and how the CWS worked) [18,19].
Ethical considerations
The medical ethical committee Brabant approved the study (NL63659.028.17/P1756) and, because of the online interviewing, accepted an informed consent by mail, including the name, date of birth and address of the participant. Anonymity of the participants was preserved in the Results section.
Context
In the Netherlands, employers have a contract with an occupational health and safety service. They are obliged to support the return-to-work (RTW) for two years, in collaboration with the occupational physician [22]. Instead of paying social premiums for sickness absence benefits, employers have to provide payment (at least 70% of the income) during these years. Then, the Employee Insurance Agency (EIA) for disability benefits will assess the employee, taking into account the efforts made by both stakeholders regarding reintegration. If both the employee and the employer have done enough to achieve RTW, a disability pension will be paid by the EIA.
Cancer & work support
The supportive method was tried out in several regional Dutch Occupational Health and Safety Services. Process coordinators were involved in the recruitment of participants for the study: i.e. the RTW professionals (social workers and reintegration coaches) and the sick-listed employees who delivered/received the support. Initially, occupational physicians informed their sick-listed employees about the existing method, and employees were free to participate.
As mentioned in the introduction, the CWS included three (potential) modules. The 'Disease coping' module was based on the dual process model of coping [23,24]. The 'Skills' module was based on the social learning theory of Bandura [25] and the inoculation theory of Meichenbaum and Deffenbacher [26]; and the 'Resource management' module on the Self-Determination Theory by Deci & Ryan [27]. A maximum of six sessions for each module was proposed. Within every session, physical exercise was a subject. Conversations with the employer were also included. The activities in the sessions aimed to support workability and reduce fatigue and possible mental problems. The RTW professionals were trained in the disease coping and skills protocols beforehand. See Fig. 1.
Data collection
Process coordinators of about 18 regional National Occupational Health and Safety Services (the Dutch ArboNed) invited, by an informed mail, RTW professionals (social workers and reintegration coaches) who had been carrying out different modules of the support: 'disease coping', 'skills/competences' and/or 'resource management'. Likewise, the supported employees with cancer were invited to participate, as well as those who were still involved in a module. After a few recalls, eight professionals and fourteen employees responded and were included in the study (convenience sample). An additional call was made to inform them again about the interview and to collect personal information (e.g. age, gender, diagnosis, occupation). Then they replied with an informed consent mail. In close consultation with the participants, an appointment for the interviews was made. They determined the time and the form (video call/telephone interview). See Table 1.
Interviews
Due to the Covid-19 virus, we could only conduct online interviews. To protect the participants' privacy, we used MS-Teams/ZOOM H2 Handy Recorder for interviewing/recording and Express Scribe for transcribing the interviews, in consultation with the university's IT service. A topic guide was developed to structure the interviews with both the employees and the professionals (see Appendix). Participants were asked how they looked back at the support and what the support yielded. The topics of the interviews included the frequency and timing of the different modules of the support program, strengths/weaknesses of the support (attuned to the phase you were in) and RTW experiences (employer contact, what has it given you). For the professionals, we added questions on protocols and scope for action. We started with an introductory talk and a few general questions (do you know when you started the Cancer & Work Support and which modules were offered/followed then; what did you appreciate most and why?). Then, we continued to ask questions about what was of particular interest for the person concerned. During the interviews, we asked, in case of doubt, for reflection on what was said, so that we could get as clear a picture of the experiences as possible. Participants could choose whether to receive a voucher or to donate the small sum to the Dutch Cancer Society (KWF). The first author, an experienced qualitative researcher (CT), performed and fully transcribed the interviews. The interviews lasted 45 minutes on average. After the interviews and analysis, the participants received the results of the research/interview; we did not receive any response to the findings.
Analysis
Inspired by Grounded Theory, using the Qualitative Analysis Guide of Leuven (QUAGOL) [21] and Realist Evaluation components [18,19], we tried to understand the support provided while labeling contexts, mechanisms and outcomes, yet not searching for causal connections. We used an open approach and did not use initially drawn-up theories and hypotheses, as we were not aiming to measure the effectiveness of the Cancer and Work Support (CWS). We focused on the (themes in the) mechanisms, as we were most interested in what exactly happened and was experienced during the sessions.
While studying the transcripts (reading with the research question in mind, as many times as necessary) and monitoring data saturation (which might not be reachable considering the various characteristics of the participants), narrative and conceptual reports were made per interview (CT) [21]. At the same time, working mechanisms, contexts and outcomes were highlighted and coded in the transcripts by three authors independently of each other (CT, RB, MJ) [18,19]. For all transcripts, and based on the conceptual reports, core messages and meaningful themes, derived from the contexts (about attitude, medical and work situation, environmental support), mechanisms (about communication, awareness, involvement, approach) and outcomes (return/no return after support), were identified and listed (CT). In cooperation with the research team, these messages and themes were repeatedly and intensively discussed to be able to structure and describe the findings in a useful and logical way. Final decisions were made by consensus and in cooperation with all authors.
Results
Both receivers and providers characterized the Cancer & Work Support (CWS) as human centered. To be able to meet the employees' needs and to adapt to the situation, the RTW professionals tailored the CWS. We distinguished two characteristics of the CWS: Involvement (regarding the form: 'how') and Approach (regarding the content: 'what').
Four themes in total were covered: open connection and communication; recognition and attention; guiding awareness and reflection; providing strategies to deal with the situation. Variation in the experiences seemed to be related to the personal, medical and environmental context. Below, we first outlined how the support was tailored by the professionals. Next, the four themes of both characteristics and the different contexts were described. Finally, we considered the value of the CWS. While describing the findings, the experiences of the employees [E] and the RTW professionals [P] were integrated.
Tailored support
At the start of the support, which was in general the disease coping module, the professionals mentioned that they really wanted to adapt to the needs of the employee. As regards timing, they experienced, however, that the modules did not always harmonize with the employee's phase of recovery. Individuals also seemed to differ regarding disease coping and progress. In close consultation with the employee, workable choices were made.
"What are the care and support needs?That is determined together with that person.This is also much more in line with our method, connecting with the client.After that it was determined: which intervention should be used.[P2]" Throughout the sessions, this could lead to postponed or spread-out consultations (e.g. because of additional medical therapy), to choosing appropriate exercises (e.g.reflection tasks seemed less suitable for commonsensical doers), to advice to stop the coping module, or, to refer to the next module.
"The proposed protocol was not always appropriate. Questions such as 'how are you going to communicate what is going on to your environment and your employer', had often been discussed already. [P4]"
The interviewed professionals themselves (i.e. social workers, reintegration coaches) experienced that the support to be given was a nice and complete method, but with a large number of time-consuming exercises for the employee shortly after cancer treatment and a lot of preparation time for the professionals.
"One minute before you bring someone in, you don't have that program in your mind again. It really requires a lot more preparation ( . . . ). You have to know by heart, the choices that you can present to the person. [P3]"
Professionalism and experience, being able to diverge from the prepared session, was found to be important for the RTW professionals. They frequently had to adapt the tight protocol, let go of the structure and/or improvise, in order to meet the specific needs of their client and to stay in good contact (not to lose him/her).
Open connection and communication
The interviews showed that the employees really appreciated the support, although they did not fully remember the precise content of the sessions and the modules attended.
"I have also received a number of assignments. I think I completed those properly every time, and while talking we also discussed them. But I wouldn't know exactly what it all was . . . [E11]"
The atmosphere during the sessions seemed especially valuable. The participants felt that the professional was on their side, unlike the medical staff. Communication seemed to be more on the same level, and topics could be addressed and worked out together. Almost everyone mentioned that there was a 'click' with the professional in question.
"We just had a very good relationship, a good click, and she understood exactly what my problem was. [E13]"
A great connection was felt gradually. All support was welcome. Even for those who felt they did not need support when they were invited to participate, it proved useful and pleasant to be able to put everything together with an objective and non-judgmental expert. One of the first experiences mentioned was that during the conversations they felt human again, like a searching individual.
"It is nice to be able to tell your story and to get tips. To be heard by people who do not work in a hospitaland someone who is not the occupational physician. At that moment, you do not feel so much like the patient, but an individual who is looking to tie all the strings back together. [E3]"
In particular, the employees remarked that they could freely tell their story to a professional who knew what she was talking about, and who acted as a permanent point of contact. Many said they received energy from the conversations and felt more at ease about their situation. This woman mentioned how she learned about communication and that she was more willing to get in touch with her employer again.
Recognition and attention
Beyond the open atmosphere during the conversations, the attentive way the RTW professional treated the employees was highly appreciated. What stayed with them the most was that there was a 'trusted' someone who understands you, pays attention to you, thinks along with you, provides structure, motivates you and directs you; who confirms and recognizes you in the steps you take; who gives you space to discuss topics that affect you or that bother you. The employees felt able to get to the bottom of what was worrying or frightening them, whether it concerned work-related or private matters. They felt relief at being able to vent, expose their deepest inner self, to cry and laugh.
"She has guided me in dealing with my fears.I am grateful that the occupational health service gave me the opportunity, that I had a social worker who kicked my butt.Where I was allowed to cry, where I could laugh, but who understood me, and also just held my hand for a moment like, 'you are having a hard time'.I felt alone, I felt lonely.She pulled me through all of that.That's great if you have someone who can do that for you.
[E13]"
As the data showed, you were allowed to be yourself and only think of yourself. Feeling that recognition, attention, empathy and concern made the employees feel especially safe. Being guided in this way, the employees could think about their situation and their competences, and then shape new priorities in peace and quiet.
Guiding awareness and reflection
As the interviews revealed, the RTW professionals offered safety and confidence. One of the first things they did was to normalize the employees' intense feelings.
"I think normalization really is a task of the social worker ( . . . ) you have to know the difference.If it leans towards something psychiatric, you have to pay attention to it (...) People are often also afraid of the fear (..
.) Yes, that normalizing part, that can take away your fear. [P7]"
The professionals continued to ask questions about how the employee felt, as a person and as a worker with cancer. The interviewed employees mentioned that the coaches cleared things up, structured the person's stories and gave advice, after having listened carefully. They felt motivated in focusing and reflecting on feelings, decisions and actions.
"That you feel heard with your complaints, that is perhaps the most important thing (...) but we also just give really useful tips.It is the combination of that listening earof someone who is really independent and knowledgeable and who under-
stands you, who knows what it is aboutand the practical tips. [P6]"
Employees called this support strengthening and helpful in regaining self-confidence.
Providing strategies to deal with the situation
From the interviews, we learned that the professionals were aware of the difficulties the employees faced shortly after treatment. They might feel mentally confused, being in a process of surviving. The professionals noticed that they were able to help the employees find a new or more stable way of life. The interviewed employees mentioned that they were frequently made aware of the need to manage their energy. Many examples were given of how to take enough rest and make time for relaxation. Useful tools, various instruments with exercises and concrete tips were given regarding managing the employees' concerns, anxiety and pitfalls.
"You know, there are just really good things in the module. They help to provide insight into who am I, what are my qualities, which obstacles do I encounter, which priorities do I have to set (...) yes, with lots of tips and tools, they could really get started. [P6]"
Many employees said that they learned in this way how to cope with their feelings in different ways in order to accept their situation gradually.
"Especially putting things into perspective. I can handle it in a more relaxed way. I have learned not to keep looking back to the past. [E2]"
With regard to their work, realistic plans to return were built up, taking into account the person's competence, ability and energy.
"She was very clear with me: 'how are we going to pick it up to return?' Because I was really in the dark about that. Do I have to try again, return fulltime, and see what happens? She gave me very good tips there. [E9]"
Enough time was given to map out one's competences and establish new goals. The interviewed employees felt it also helped to explore potential new aspirations (a new study, a new job).
The support described above shows that the open atmosphere and the genuine attention were highly appreciated by the employees. Apparently, this was a good starting point for the professionals to work further with the employee in guiding awareness and providing strategies to deal with the personal and the work situation.
Context
Although both employees and RTW professionals very much appreciated the support received and given, respectively, the experiences of the participants varied. This worker summarized the contextual factors regarding the support as follows: "In my case there were already many advantages, such as that I have good prognoses. Besides, I have such a good relationship with everyone, with the owner of the company and with the manager. How I am as a person. That also plays a role in the reintegration. However, I do think it has helped that she has guided me a bit in listening to my body carefully, listening to my head carefully. Balancing energy. [E4]"
Personal differences (receivers/providers)
Irrespective of the support, attitudes towards the illness and work could differ. Some employees underlined their gloomy state of mind regarding the work situation or their wait-and-see attitude. Others mentioned their motivation or positive state of mind and their eagerness to proceed during the RTW process. A realistic optimist accepted his medical situation from the start and spoke of his humor despite his unfavorable prognosis:
"Well, I'm pretty easy. Look, I'm not the only one who has cancer. Yes, we have to make do with what we have. Humor is the most important thing. Yes, of course, you can sit in the corner and think gosh, I have cancer ( . . . ). Yes, why me? Yes, why not someone else? [E1]"
The professionals told of their professionalism while supporting employees who participated of their own free will. Depending on their experience as a professional, they seemed to rely on their expertise. This might set the scene for the support to be given: "I just handled it differently, treated it as a guideline. And I thought: well, I will see if it is appropriate. But I've been in the business for so many years, I can also vary it a little bit. [P4]"
Medical situation (receivers)
Due to different cancer diagnoses, prognoses and lengths of treatment, the physical and mental condition was something to keep in mind. The stories revealed that the conditional differences experienced could have an impact on the progress to be made. "Exercise does help in physical recovery. It also aids in mental recovery. But it is not a guarantee that you can get back to work. [P1]"
Environmental support (receivers)
The interviewed employees referred to various aspects of support in the private environment and employer support. The majority was grateful for the support received from their family and friends, although they might spare them details out of concern for them.
"Some things you never discuss or say to your friends or family members.Because it is something heavy.This was just a very safe space, where you could just tell your whole story.That was very nice.
[E6]"
The support from the workplace ranged from a little to a lot of understanding and cooperation. The professionals were also aware of the employer's concerns in the event of a cancer diagnosis and tried to advise him or her:
"Often, an employee is at a loss what to do. But the employer is completely at a loss! Because he wants to understand and be empathetic, but he also just has a business problem. That is where we often compromise in between. Like 'yes, you can put the business first. Then you do have an employee who will become ill in a few weeks. And then it costs so much each day'. [P1]"
The meetings, together with the social worker ('three talks'), often proved to be a solution here. It helped address the employer's concerns. It also helped to explain better how cancer recovery is progressing, and how cancer can delay preparation for returning to work.
Return
From the interviewed employees we heard that they felt strengthened by the support, and that it could help them to return to work earlier.
"Yes, I have personally experienced it as a success.The guidance, everything that has been there.I was very happy with that.That I recovered faster and was able to get back to work faster.And that I did not end up in a kind of self-pity.[E13]"
No return
For some employees the future remained uncertain. They felt motivated to return to the workplace, but medical reasons prevented them from doing so.
"Then, when we really started to build up a bit, it came back.So yes, then you start all over again.
Reflection
The employees explained that they had learned to put things in perspective better, which might lead to a more open-minded and positive attitude towards life. Together with the handles they received to cope with obstacles, the employees might look into the future with confidence. Because they felt able to set the boundaries again, some thought about devising new priorities. The employees said that they learned a lot during the sessions anyway and that the CWS in particular created more awareness.
"Well, I thought it was really additional support. It makes you more aware of how you feel. What you could do, what you would or wouldn't like, or what you don't want. [E10]"
The professionals also reflected on the benefits of the CWS: "For myself too, as a professional, I really found it of added value. The kind of questions and assignments, I personally think it is a good offer. I thought it was a very nice training and I have benefited a lot from this guidance. I learned a lot from that myself. I also sometimes use parts for other clients. [P3]"
Discussion
In this study, we aimed to understand how the Cancer & Work Support (CWS) was experienced by employees with cancer and by the RTW professionals who provided the support. In addition, we wanted to gain insight into when and how the support worked for employees and professionals. From the interviews, we identified two characteristics of a human-centered and tailored support. One aspect related to 'Involvement', with regard to the form ('how'); the second to 'Approach', with regard to the content ('what'). Four themes were covered: open connection and communication; recognition and attention ('how'); guiding awareness and reflection; and providing strategies to deal with the situation ('what'). Furthermore, we saw some variation in the experiences, based on personal, medical and environmental differences. The latter corresponds to the general finding that individual characteristics need to be considered when deciding if and when to return to the workplace [28].
Aims of the CWS
At the start, the CWS aimed to prevent the development of depression and anxiety, to enhance the confidence of patients in their return to work and to support recovery-enhancing behavior, including perseverance when returning to work. The findings indeed show that professional help may be useful in reducing symptoms of depression or anxiety, by giving individuals the opportunity to talk freely and safely about their feelings and concerns. The patients were given the time needed for their return to work or to extend working time. Moreover, healthy behavior (e.g. exercising) was a topic at the end of every session. The employees mentioned that they were aware of their reduced energy levels and that they had learned to deal with it. Shaw and colleagues [29] found that physical exercise provided positive effects on wellbeing and was essential for workability. Although we know that twelve of the fourteen employees returned to work, we cannot conclude whether and how the CWS contributed to workability and/or work resumption in a meaningful way for both the employee and the employer. However, we can perhaps agree with the findings of Dorland et al. [30] that reducing symptoms of depression and fatigue and supporting workability can help improve work functioning over time.
Human centered and tailored
The CWS was experienced as human centered. This concept is widely used in business [31] and has some overlap with the CWS, since the method was developed on the basis of understanding people's needs and behavior. After all, the CWS was theoretically founded (e.g. on social learning theories) and based on positive experiences of the JOBS program [16], which has been applied to several different groups experiencing 'transition in life', such as from school to work [32], from work to work [33], from work to pension [34] and from sick leave to return to work [35].
The principle of change in this transition underlying the JOBS program is creating mastery experiences thereby enhancing self-efficacy and the ability to deal with obstacles and setbacks [17] in safe surroundings, i.e. human centered.
The RTW professionals tailored their support to the needs of the client, based on their expertise as professional counselors. The social workers, for instance, are used to providing support in case of social problems. For the reintegration coaches, the skills and resources modules seemed to coincide more with their professional skills. Nevertheless, the professionals were trained beforehand in the disease coping and skills protocols, during two refresher-training days. One pitfall might be that they relied on their experience while providing the CWS, meaning that they had to depart from the tight protocol to tailor the program. Did they work sufficiently according to the new method, or did they provide a form of 'care as usual'?
However, according to the professionals, an important difference with 'care as usual' was that the participating employees of the study did not request assistance but were made aware of the existing new way of supporting employees with cancer by the occupational physician. In this way, the CWS can be regarded as supply-driven assistance rather than demand-driven help. A second difference was that the CWS was a new and full program, including career tools (skills, resources) as well.
Involvement and approach
If we look at the way in which the RTW professionals were involved in the CWS, we think we see a comparison with the concept of 'attentiveness' (in elderly care) from Klaver and Baart [36] and the concept of 'concernful involvement' from Yanchar [37]. 'Attentiveness' can create a space in which good relationships may arise. This concept stems from the Theory of Presence (ToP) [38], which was developed in the Netherlands in 2011. Healthcare professionals, especially in the fields of hospital and elderly care, should have learned since then how to be 'present' and how to connect to the needs of patients. Acknowledgment and being open in a professional caring relationship seem to be needed for 'being there for someone', in order to give people the opportunity to show themselves and let them feel they are seen [39]. 'Concernful involvement' refers to the recognition that both parties (employees and professionals) are involved in making sense of a world "in which people, objects and events matter" [p. 4 in 40]. It is about giving meaning and reflection. Based on our findings, we believe that a good mutual relationship in a trusted, open atmosphere may contribute to a better reception of support. Leslie et al. [41] found that a trusting relationship promotes engagement and better collaboration in healthcare settings.
With regard to the open atmosphere during the CWS, Haugli and colleagues [42] confirmed that being seen, heard and taken seriously by 'work and health' professionals is one of the most valued elements of the RTW process. Moreover, people on long-term sick leave perceive awareness and resources, as well as employer support, to be valuable [42]. We found that the support provided created increased awareness. The employees were given a chance to reflect on their feelings, decisions and actions in an attentive and safe environment. Moreover, they learned how to manage their concerns, which helped them to regain their self-confidence. Together with employer understanding and recognition of their vulnerability, which can be increased or decreased in the workplace [8], this was felt to be an important step forward in preparing their RTW.
The results showed that the providers' professionalism during the CWS program was highly appreciated by the employees, which indicates that satisfactory RTW support after cancer cannot be provided by just anyone. Professional competences are important in developing trust [43].
While mentally preparing for RTW, cancer survivors may feel insecure and vulnerable. Many of their inner thoughts and considerations can only, and should therefore only, be discussed in a safe environment [8]. Similarly, MacLennan et al. [40] pointed out the urgency of receiving support from healthcare professionals. In their study, they found that women with breast cancer are making decisions about workability; they rethink the meaning of work and are in need of professional advice [40]. We do not know whether these findings can be directly generalized to all cancer types, but adequate communication skills and a good relationship seem to be of great importance.
Communication with the workplace
The three-way discussions were held to stay in good contact with the employer and discuss possible RTW options, if desirable. Such discussions during sickness absence have proved to be helpful [44]. In a study among employers, communication with absent employees was found to be crucial. Different communication styles were needed during the consecutive stages of the RTW process: from the moment of disclosure, during sickness absence and RTW planning, until the actual return [14]. Recently, Yagil and Cohen [45] suggested the need for guidelines and training programs to support contact and communication in the workplace during absence from work. The participants in the current study talked about the value of the CWS with regard to communication with the workplace. Although we know that good contact with employers can lead to better RTW experiences [46], the research team did not (have the possibility to) ask the employers directly. However, the findings show that the employers assumed their role in the RTW process and most of them were supportive and understanding. During the CWS, the RTW professionals were able to further inform them regarding their concerns and needs, which was very much appreciated.
Strengths and limitations
Based on the interviews with 22 participants, who were very open during the conversations, we saw that the CWS was highly appreciated by professionals and employees. While focusing on what happened during the sessions, we were able to discover the two characteristics of the CWS. The interviews, which were rich in content, showed us the challenges the participants (employees and professionals) face, each with regard to their own concerns and in their own way. We mainly focused on the employees' concerns and challenges. The experiences of both employees and professionals were brought together in the Results section, to show that both perspectives underline the findings. This way of describing promotes readability and contributes to the trustworthiness and theoretical generalizability of the findings. Together, the eight professionals supported about 40 employees with cancer during the CWS. Thus, the global experiences of more than the 14 interviewed employees were discussed. Our sample of employees included a variety of ages, cancer types and functions. The professionals also varied in age and experience. However, a limitation might be that we could not compare the different experiences of employees of different ages (the majority between 40 and 60 years; only two < 40) or cancer types (50% breast cancer); nor did we examine employees' medical conditions, cancer severity, and type of treatment.
Knowing that the study was based on a convenience sample, after six interviews with professionals we additionally searched for two younger and less experienced social workers. We do not know why professionals and/or employees did not respond to the coordinators' call to participate in the study. We can only assume it might have something to do with workload (professionals) or with a hesitation to talk about cancer again (survivors). Twelve of the 14 employees had returned to work earlier. Perhaps some of the other supported employees preferred to close the uncomfortable cancer episode and just be thankful that they were able to live a 'normal' life again [47].
Furthermore, recall bias may have occurred, as for some participants the CWS support had been provided three years earlier. Concerns about memory are often reported by cancer survivors [48]. The employees did not necessarily follow all three modules, nor did the professionals deliver them all. For that reason, no precise statements can be made about the original aims of the CWS. Nevertheless, we discussed some issues regarding feelings, concerns and work resumption. Two types of professionals delivered the CWS: (occupational) social workers and reintegration coaches from two different providers. This might have led to somewhat different ways of working. The disease coping module seemed more familiar to the social workers, whereas the reintegration coaches were more at home with the skills and resources modules. To reduce the differences regarding the coping and skills modules, two-day training sessions were provided.
What this study adds
In the Netherlands, employees and employers have to collaborate during sickness absence and draw up a reintegration plan in collaboration with the occupational physician. With the CWS, employees with cancer are closely supported after treatment. They are supported in accepting their situation gradually and in shaping their new (working) life little by little. In-depth conversations are possible, about more than just work. Without feeling pushed to RTW, skills and competences are looked at more closely. In the last module, if applicable, resources are mapped. Awareness is thus created.
An important finding is that the way the participants are involved, through the open connection and the attention received, can be seen as a condition for being open to the substantive support to be provided. Contact is maintained with the employer and, if the situation allows, he or she is involved in (preparing) the usually gradual return. What provides peace of mind is that employees are given time to recover and, at the same time, to think about (and prepare) their return at a later stage. Without the CWS, employees are alone with their concerns and might then feel pushed to return to work (e.g. in the case of an employer who is not understanding) and feel more dependent on the employer's concerns and wishes.
Conclusion
We found that both deliverers and receivers highly appreciated the human-centered and tailored CWS with regard to preparing for RTW. In particular, knowledge of the two characteristics of the CWS (involvement and approach) should be taken into account when implementing this method (e.g. in occupational health services) or when developing new supportive measures. A good relationship in an open atmosphere can contribute to a better reception of the support provided. Providing strengthening and problem-solving skills in an atmosphere in which individuals feel safe to talk about themselves can bring about a change in behavior [16]. This research shows that not only 'what' you do, but also 'how' you do it, is important when supporting RTW. In order to experience the benefits of the CWS, it is necessary that experienced professionals deliver the support.
SURVEY ON THE RISK FACTORS FOR CERVICAL CANCER KNOWN BY BIOMEDICINE STUDENTS
The development of cervical cancer is related to human papillomavirus infection, most strongly with subtypes 16 and 18. Considering that the lifestyle of women influences the development of this cancer, this study aimed to perform a survey on the risk factors for cervical cancer known by biomedicine students. A descriptive and exploratory study with a quantitative approach was performed with 101 biomedical undergraduates. Data were collected from February to March 2018 through a questionnaire and analyzed in the Statistical Package for the Social Sciences program. Students aged 18 to 24 years old (89.11%) and single (93.07%) prevailed, with family income between two and three minimum wages (43.56%). The study also showed that participants started their sexual life early (at 16 years old on average), had more than one sexual partner throughout their lives, did not smoke (100%), mostly did not consume alcohol (66%), did not take the Papanicolaou preventive exam (61.39%), did not practice physical activity (55.45%) and had a low frequency of condom use during sexual intercourse (22.08%). It is concluded that there is a need for educational campaigns in Higher Education Institutions that provide more information about the prevention of cervical cancer and of the associated risk factors.
Introduction
Cancer of the uterine cervix, also known as cervical cancer (CC), has become a public health problem throughout the world, with the highest morbidity and mortality rates found mainly in developing countries. In Brazil, this is the fourth most frequent type of cancer among women, with an estimated 16,340 new cases in 2016, despite being a curable disease for which preventive and early detection actions are available through the health system (Ribeiro and Andrade 2016).
According to Bermudez et al. (2015), the development of cervical cancer is closely correlated with persistent infection by oncogenic human papillomavirus (HPV) subtypes, especially HPV-16 and HPV-18. These viruses are transmitted sexually and account for about 70% of cervical cancers worldwide. Although HPV infection is very common, with 80% of sexually active women becoming infected with these viruses, only a small number of infections will progress to cervical cancer, suggesting that HPV infection is a necessary but not sufficient factor for the development of this type of cancer.
In addition to factors related to HPV, such as strain and viral load and single or multiple infection, factors linked to immunity, genetics and sexual behavior appear to be decisive for the regression or persistence of the infection and, subsequently, the onset of cervical cancer. Thus, smoking, early sexual initiation, multiplicity of sexual partners, multiparity, use of oral contraceptives and age are considered risk factors for the development of CC (Rocha et al. 2014).
A study focused on the health care of adolescents and young people of a public university in the city of Belém do Pará, in the northern region of Brazil, included 329 university students and found high rates of HPV infection among the students evaluated, with statistically significant associations with high-grade squamous intraepithelial lesions (HSIL), age at menarche, and parity (Vieira et al. 2017). In that study, the age of menarche with the highest prevalence of infection was among students over 14 years old (p = 0.0328), differing from the literature, which points to higher rates in women with menarche under 12 years old.
Vaccination against HPV is 91.6% effective against incident infection and up to 100% effective against persistent infections. Thus, prophylaxis using this vaccine is a strong candidate for the prevention of CC morbidity and mortality (Santos and Souza 2013). In 2013, 5,430 women died due to complications of cervical cancer (SIM - Sistema de Informação sobre Mortalidade 2013; INCA - Instituto Nacional de Câncer 2016).
Considering that women's lifestyle significantly influences the development of CC, we felt the need to identify possible risk factors for cervical cancer among biomedicine students, in order to promote a reflection on behavioral choices at this phase of women's lives. In this context, this study aimed to perform a survey on the risk factors for cervical cancer known by biomedicine undergraduates.
Material and Methods
A descriptive and exploratory study (Mesquita and Matos 2014) with a quantitative approach was developed in a private institution located in the city of Teresina, State of Piauí, Brazil.
The convenience sample consisted of 123 undergraduate students from the Biomedicine Course who met the inclusion criteria: being regularly enrolled in the Biomedicine Course, being over 18 years old, and being present at the higher education institution (HEI) during the data collection period (February and March 2018). Seventeen students refused to participate in the study and 5 were on medical leave at the time of data collection; therefore, 101 students participated in the study. Students were recruited through daily dissemination in all classes of the course and were invited to participate in the study during the afternoon shift at the HEI, after the end of classes or during breaks. Data collection was performed only after the participants' consent and according to their availability, with an average duration of 20 minutes.
In order to obtain the empirical material, a questionnaire was used to characterize the participants in relation to sociodemographic aspects (gender, age, marital status, work status), aspects related to academic training (period/term of the biomedicine course, time spent to get to the HEI, participation in academic activities such as projects, scholarships, and academic leagues, external work links, satisfaction with the Biomedicine course), and possible risk factors for the development of CC (number of children, smoking, alcohol use, oral contraceptives, Pap smears, sexual life, occurrence and number of abortions, sexually transmitted infections (STIs), family cases of cancer, and physical activity).
For the organization of the collected data, a database was developed in Microsoft Excel with double data entry; the data were later imported into the SPSS ("Statistical Package for the Social Sciences") program (version 22.0 for Windows) and R (version 3.1.2) for processing. The results were presented in tables using descriptive statistics.
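As a hedged illustration of the descriptive statistics described above, the sketch below builds frequency (n) and percentage (%) tables with Python/pandas rather than SPSS; the file name and column names are hypothetical and not taken from the original instrument.

```python
# Minimal sketch of the descriptive statistics workflow (assumed, not the
# authors' actual SPSS code). "questionnaire.xlsx" and the column names
# below are placeholders.
import pandas as pd

df = pd.read_excel("questionnaire.xlsx")  # double-typed Excel database

def frequency_table(series: pd.Series) -> pd.DataFrame:
    """Return frequency (n) and percentage (%) for each category of an item."""
    n = series.value_counts(dropna=False)
    pct = (n / len(series) * 100).round(2)
    return pd.DataFrame({"n": n, "%": pct})

print(frequency_table(df["marital_status"]))  # hypothetical column
print(frequency_table(df["condom_use"]))      # hypothetical column
```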
The inclusion of the participants in this study was carried out obeying the ethical and legal recommendations that govern human research (Brasil 2012).
Results
The distribution of the 101 participants regarding sociodemographic aspects revealed that the majority, 89.11% (n=90), were concentrated in the age range of 18-24 years; among them, 93.07% (n=94) were single and 5.94% (n=6) married, 43.56% (n=44) had a family income of 2 to 3 minimum wages, and 92.08% (n=93) reported not working professionally (Table 1). Regarding the participants' academic formation, 31.68% (n=32) were in the 3rd period of Biomedicine, 45.54% (n=46) took up to 20 minutes to reach the institution, and 81.19% (n=82) reported that they did not participate in research groups, extension projects, or academic leagues.
Regarding the scientific initiation programs, 99.01% (n=100) reported not participating in any program and only 0.99% (n=1) participated in PIBIC. As for the research grants available at the institution, 78.22% (n=79) did not have a scholarship, while 21.78% (n=22) had student aid, mainly through the Educa Mais Brasil Program, 52.52% (n=12), and the University for All Program (PROUNI), 45.45% (n=10).
A total of 84.16% (n=85) said that they did not have another technical course; among those who did, the most frequent were technician in clinical analysis, 12.50% (n=2), nursing, 12.50% (n=2), and human resources, 6.25% (n=1). As for satisfaction with the course, 95.05% (n=96) of the participants reported being satisfied, and the main reason given for dissatisfaction was devaluation of the course, 20% (n=1).
Regarding the risk factors associated with the participants' lifestyles, 95.05% (n=96) said that they did not have children, and 100% (n=101) were not smokers and had never used cigarettes. Amongst the participants, 66% (n=66) did not consume alcoholic beverages; of those who did, 58.82% (n=20) drank beer, 20.59% (n=7) spirits, 8.82% (n=3) wine, and 32.35% (n=11) consumed all of the options mentioned above.
The research showed that 79.21% (n=80) reported not using oral contraceptives. Participants began their sexual life on average at 16 years of age, with the majority reporting on average 1 sexual partner in the last 3 months and an average of 6 sexual partners throughout their lives. Regarding condom use, 61.24% (n=47) reported using condoms in all sexual relations, while 22.08% (n=17) rarely used them. Of the study participants, 97.03% (n=98) were not pregnant at the time of data collection, 0.99% (n=1) was pregnant, and 95.05% (n=96) reported never having undergone an abortion. Data collection revealed that 98.02% (n=99) did not have sexually transmitted infections (STIs) and only 1.98% (n=2) reported having some type of STI; among the infections cited, 60% (n=3) were candidiasis and 40% (n=2) herpes. Regarding physical activity, the study indicated that 55.45% (n=56) of the participants did not perform any kind of physical activity; those who exercised reported walking, 22.77% (n=23), body weight exercises, 15.84% (n=16), and sports, 1.98% (n=2), with an average of 241 minutes per week.
Regarding the risk factors associated with the disease, the Pap smear stood out: 61.39% (n=62) had never taken this test and only 38.61% (n=39) reported having taken it, an average of 4 exams, with the last test carried out on average 14 months earlier.
It was identified that 35.64% (n=36) of the interviewed women did not have cases of cancer in the family, while 34.65% (n=35) reported having or having had some case of cancer in the family, with the affected relatives being grandparents, 47.73% (n=21), uncles/cousins, 45.45% (n=20), fathers, 4.55% (n=2), and siblings, 2.27% (n=1). As to the location of the cancer in these family cases, the study revealed that 19.23% (n=10) were in the stomach, 13.45% (n=7) in the breast, 9.61% (n=5) in the cervix, and 9.61% (n=5) in the liver, with a mean duration of 73 months; these data are shown in Table 2. As to the factors associated with death, the study pointed out that 36.63% (n=37) stated that they had cases of death due to cancer in the family, with the relatives being grandparents, 52.78% (n=19), uncles/cousins, 44.44% (n=16), and fathers, 2.78% (n=1). The cancer sites that led to death were the stomach, 27.50% (n=11), the cervix, 10% (n=4), the breast, 10% (n=4), the liver, 10%, and other sites, 42.50% (n=17) (Table 3 presents the frequency (n), percentage (%), and mean of the risk factors associated with death; n = 101; Teresina, PI, Brazil, 2018).
Discussion
As found in the sociodemographic characterization of the participants, the age group of the students varied from 18 to 24 years. Research conducted by Silva et al. (2016a) showed that HPV infection is more common among sexually active women aged between 18 and 30 years, that women aged 25 to 45 years already had some type of intraepithelial lesion, and that the incidence tends to decrease after 30 years of age, although the risk increases rapidly until reaching its peak, usually in the age group of 45 to 49 years.
In relation to marital status, there was a predominance of single women. Ribeiro et al. (2013) reported that the prevalence of single women is predictable, since this research was conducted in a group of young people, in which academic activities and a greater interest in professional improvement postpone plans for a possible marriage.
In this study, the predominant family income varied from 2 to 3 minimum wages. In another study conducted in Brazil, factors such as low family income, absence of health insurance, and not having consulted a doctor in the last 12 months were associated with the development of cervical cancer, showing that there are possible inequalities in the access to and coverage of CC early detection strategies (Thuler et al. 2014).
Data analysis showed that 92.08% (n=93) of the participants in this study did not work. According to Ribeiro et al. (2015), new studies indicate that women who have some type of work present more adequate attitudes regarding CC prevention, such as undergoing Pap smears more frequently, because they have a certain autonomy in making decisions regarding their own health. Another positive point is that working women have greater access to information in the work environment with other women, which may stimulate preventive health practices.
Regarding condom use, the participants reported using it frequently in sexual relations, yet the study showed that some academics do not use condoms. In a study conducted in the city of Carmo da Mata, MG, in 2015, on the perception of women about the Pap smear, 36 women (38.7%) reported that they never use condoms in sexual intercourse. The low condom use in sexual relations may therefore make women more vulnerable and increase the number of STIs, becoming a risk factor for cancer (Vasconcelos et al. 2017). A study conducted by Dias et al. (2015) on schooling and cervical cancer screening showed that 45.45% (n=20) of the women in a health unit had incomplete primary education and only 9.09% (n=4) had complete higher education. In the present study, only 38.61% (n=39) of the women, all with incomplete higher education, said they had already taken the exam. Furthermore, 31.68% (n=32) of the participants were in the 3rd period of biomedicine, 45.54% (n=46) took up to 20 minutes to reach the institution, 18.81% (n=19) and 0.99% (n=1) participated, respectively, in a research group and a scientific initiation program, and 21.78% had some kind of student scholarship.
Regarding the obstetric characteristics, Melo et al. (2009) observed the number of gestations and abortions, in which 64.6% (n=42) of the women had children and 20.0% (n=13) had had abortions. In this study, 4.95% (n=5) had children, 0.99% (n=1) was pregnant at the time of data collection, and only 4.95% (n=5) reported having suffered at least 1 miscarriage, on average at 7 months; these are lifestyle-related risk factors for the development of CC.
The research on smoking showed that 100% (n=101) of the participants did not smoke and had never used cigarettes. The link between smoking and cervical cancer, however, is confirmed by several studies (Rosa et al. 2009), while Duarte et al. (2011), corroborating this study, reported that none of their participants were smokers.
A study conducted by Silva et al. (2016b) showed that people who were diagnosed with cancer chose healthier lifestyle habits, such as fruit and vegetable consumption and quitting smoking, but did not change exercise habits or obesity, and the constant use of alcoholic beverages almost doubled. In the present study, many participants, 66% (n=66), stated that they did not consume any type of alcoholic beverage.
In this study, 79.21% (n=80) reported that they did not use contraception and 52.48% (n=53) had never used contraception. Another study pointed out that 94.5% (n=34) of the women imprisoned at the Female Penal Institute of the State of Ceará did not use oral contraceptive methods; when questioned about previous use of this method, 70.5% (n=24) responded positively, with an average of 46 months of use. An epidemiological study conducted in several countries identified that women who use oral contraceptives for more than five years can double the risk of developing cervical cancer (Anjos et al. 2013).
Regarding the onset of sexual activity, a mean age of 16 years was obtained, which is considered early. In the study carried out by Botega et al. (2016), the same average age of 16 years was found for the onset of sexual activity, with a minimum age of 10 years and a maximum of 36 years; in relation to the number of sexual partners, an average of 2 partners per woman was obtained, with a minimum of 1 and a maximum of 10 partners.
Studies have corroborated this research by showing that Brazilian women are initiating their sexual life early, and this condition, together with the occurrence of multiple sexual partners throughout life, is considered an important risk factor for the development of cervical cancer (Silva et al. 2016a).
In this study, 98.02% (n=99) did not have sexually transmitted infections (STIs); of the respondents who answered positively, 1.98% (n=2) cited herpes. According to Silva and Monteiro (2016), condom use is the main form of prevention against STIs, especially HPV, which is one of the main causes of the precursor lesions of cervical cancer and is prevalent in young women.
The practice of physical activity has numerous effects on cancer prevention because it considerably increases the number of natural killer (NK) cells, which play a very significant role in innate immunity, in addition to controlling body weight and avoiding a sedentary lifestyle, promoting well-being and health (Munhoz et al. 2016). It is important to note that in this study 55.45% (n=56) of the interviewees did not practice any type of physical activity, which makes this a worrying finding.
CC is deeply related to heredity, especially in the first degree, which significantly increases the chances of developing cancer. Carvalho et al. (2015) reported that 53.69% (n=80) of their participants had cancer or had cancer cases in the family. Similar data were found in this study, showing that 34.65% (n=35) of the students said they had cases of cancer in the family, being 47.73% (n=21) the grandparents and 45.45% (n=20) uncles/cousins; of these cases, 19.23% (n=10) were in the stomach, 13.46% (n=7) in the breast, and 9.61% (n=5) in the cervix. According to the same authors, however, cancer was more frequent in the grandmothers, 41.25% (n=10), followed by uncles, 31.25% (n=25), and the mother, 12.50% (n=10), with 50% (n=5) of the cases located in the breast. Rafael and Moura (2012) showed that 95.37% of the women had already taken the Pap smear and only 33.8% had not taken the exam in the last year; they also observed that the last exam was mostly performed at intervals shorter than 1 year. In disagreement with these data, in the present study the non-performance of the colpocytological examination was evident in 61.39% of the cases, and only 38.61% had already taken it, an average of 4 exams, with the last examination on average 14 months earlier.
Regarding the factors associated with death, the study revealed that 36.63% of the participants had cases of cancer death in the family, predominantly among relatives of more advanced ages, such as grandparents, 52.78% (n=19), followed by uncles/cousins, 44.44% (n=16). According to Machado et al. (2017), deaths are limited to women aged between 20 and 79 years, with morbidity and mortality being predominant in the fourth decade of life.
The data found in this study demonstrate the occurrence of cancer in the stomach, breast, and cervix. Deaths from cancer of the cervix have shown a sharp decline almost everywhere in the last decades. This decrease was found in almost all the capitals; however, in the cities of the interior of the North and Northeast there was an increase in cases of uterine cancer, which is attributed to the unfavorable financial situation of the women of these regions (Madeiro et al. 2016).
The limitations of this study have to do with its design, which does not allow a deeper approach to the participants, mainly to evaluate the Pap smear exams. Another limitation was the refusal of some participants and the restricted access to information, even when the researchers explained the confidentiality of the data. Nevertheless, there were important results, revealing that some risk factors addressed in the literature were also found through this research.
Knowledge of the risk factors for cervical cancer is essential for its prevention, especially in the young, sexually active population. In the university context, according to the results of this study, there is a need to rethink the formation of future health professionals and their important role in articulating self-care attitudes and health promotion. Studies like this one, aimed at future health professionals, can identify and redirect educational campaigns in higher education institutions, provide a reflection on the professionals' own training, and collaborate in the construction of quality education that will impact health care practice. Future works in this perspective are suggested, mainly those that can make a more in-depth analysis of Pap smear testing and trace a follow-up before, during, and after an educational intervention.
Conclusions
It is concluded that the Biomedicine students were in the range of 18 to 24 years old, single, and with a family income of 2 to 3 minimum wages. It was possible to identify the early onset of sexual activity, multiple partners, the low frequency of condom use during sexual intercourse, the failure to take the Papanicolaou preventive exam, and the lack of physical activity, all of which are risk factors for the development of cervical cancer. It was also evidenced that the participants did not smoke and that most did not consume alcoholic beverages or use oral contraceptives, with few reports of pregnancy and abortion.
The Effects of Land Use and Climate Change on the Water Yield of a Watershed in Colombia
Land use and climate are two determinant factors of water yield within a watershed. Understanding the effects of these two variables is key for the decision-making process within watersheds. Hydrologic modeling can be used for this purpose, and the integration of future climate scenarios into calibrated models widens the spectrum of analysis. Such types of studies have been carried out in many areas of the world, including the Amazon Basin of South America. However, there is a lack of understanding of the effect of land use/land cover and climate change on Andean watersheds of this continent. Our study focused on the evaluation of water yield under different land use and climate scenarios using the semi-distributed hydrological model known as the Soil and Water Assessment Tool (SWAT). We worked on the Tona watershed (Colombia, South America), the most important source of water for a metropolitan population. Our results compared water yield estimates for historical conditions (1987–2002) with those of future combined scenarios for land use and climate for the 2006–2050 period. The modeling effort produced global estimates of water yield (average annual values) and, at the subwatershed level, identified strategic areas on which the protection and conservation activities of water managers can be focused.
Introduction
Water yield is defined as the net amount of water flowing past a point on a stream during a given period [1]. It is also understood as the average amount of water produced by the watershed from contributions of surface, lateral, and ground water over a certain time period [2]. From the ecosystem services perspective, water yield represents the potential provision of fresh water for food production, hydropower generation, or drinking water [3]. All of these are key for the sustainment of rural and urban communities. Over the years, many studies have been dedicated to evaluating the driving factors of water yield within a given area. Land use/land cover (LULC) change is a key factor that has been studied from diverse perspectives [4][5][6][7][8][9][10][11]. Reforestation appears to have a decreasing effect on runoff (via an increase in evapotranspiration, ET) at the small catchment scale and an increasing effect on precipitation and water availability at larger scales, since ET returns water to the atmosphere and favors, given adequate conditions, precipitation (P) [7]. The magnitude of water yield changes due to changes in vegetation (relative to the flow under the original vegetation type) appears to be more drastic during low flow seasons, and the time to reach a new equilibrium after the LULC changes depends on the type of change (afforestation, deforestation, or regrowth), taking more than five years in some cases [11].
Climate change and climate variability have a significant effect on river discharge, extreme events, and the availability of water for various human needs [12][13][14][15][16][17][18][19][20]. Worldwide, many studies have evaluated the impact of climate change on specific regions, and review efforts or large-scale studies provide a means to understand the "big picture." For example, in Europe, analysis of observations indicates a general increase in extreme precipitation but no significant trends for extreme streamflow, while modeling efforts based on climate projections confirm the general increase in extreme precipitation and show large impacts (positive and negative changes) on peak flows [19]. Recent findings from Betts et al. [12] show the spatial global trends in extreme precipitation and hydrologic events and point out the importance of limiting global warming because, at 2 °C, some countries could reach unprecedented levels of water scarcity. Food security will face serious challenges because there will be freshwater limitations in important irrigated regions of North America (Western United States) and Asia (China; West, South, and Central Asia), while substantial investment in irrigation infrastructure will be required in areas (Northern/Eastern United States, parts of South America, Europe, and Southeast Asia) that could compensate for the net increase in irrigation required [18]. For all purposes, however, these types of studies have an intrinsic uncertainty. Evaluation of the results for 12 large river basin studies shows that the choice of global climate model (GCM) contains the largest share of uncertainty in the projections (57%), followed by the choice of representative concentration pathway (RCP) (27%) and the choice of hydrological model (16%) [13].
The combined effect of land use and climate change is of special interest from the watershed management perspective. Understanding it continues to be a challenge because water yield is the convolution of the two factors, generating positive, negative, and even neutral feedbacks [21]. The answer to this problem is so far case-specific [22][23][24][25], but some studies have proposed generalizations [21,26]. A search for the available peer-reviewed literature on this topic for Colombian Andean watersheds was unfruitful. However, there are some studies that deal with related issues, such as the role of land use and soils in regulating flow [27], the particular hydrological processes of the humid tropics [28], or the relationship between the perception of water scarcity and observed climate, land use, and demographic changes [29].
For the case of strategic watersheds (urban water supply or agricultural watersheds), regardless of size, it is critical to have not only a calibrated model that simulates, at some level, the dynamics of the hydrological cycle within the watershed, but also to evaluate scenarios that can support the decision-making process. This is especially true when managers need to decide about land purchasing and conservation practices oriented toward the maximization of water yield. The Soil and Water Assessment Tool (SWAT) has become a well-known and globally used model to study hydrologic processes within watersheds and to evaluate their water availability and quality under present and future conditions [30][31][32][33]. Many SWAT studies are dedicated to analyzing the hydrologic response under these changing conditions [4,[34][35][36][37]; however, most of them have been carried out in regions other than tropical South America (Brazil being an exception [38]). In Colombia, there is a knowledge base developed by CIAT (International Center for Tropical Agriculture) that has mostly been distributed through oral presentations and reports [39]. There is also the work developed at the undergraduate and graduate levels that has resulted in academic manuscripts that have not gone through the formal peer-review publication process (see, for example, [40]). Recently, Hoyos et al. [41] worked on the evaluation of the effects of drought length and land cover on streamflow recovery in a Colombian Caribbean watershed. More formal research is needed to be able to derive general conclusions about the potential impact of climate and land use changes in this region of South America.
SWAT has gained worldwide recognition because it can be used to evaluate water and sediment yield, and some water quality parameters, under present conditions, under management scenarios, or under future climate conditions, with spatial and temporal resolutions that depend on data availability and the purposes of the particular studies. The availability of the information required as inputs for the model (i.e., long-term series of climate, updated land use/land cover maps, detailed soils data, sediment and nutrient data, etc.) determines not only the quality of the outputs but also the regions where it has been used the most (in order: North America and Europe, Asia and Africa, Latin America (except Brazil)). Various challenges exist for Colombia (and possibly for many Latin American countries) in this aspect because of the institutional disconnect that makes it difficult to find, access, and use the kind of data necessary to run a typical SWAT model.
We present here the work developed for the Tona watershed, the main source of water for a medium-size metropolitan area in Colombia (South America). Our hypothesis was that not only climate but also land use changes determine water yield within this tropical Andean region. We worked together with local environmental and water management agencies to set up a SWAT model for this watershed. This allowed us to obtain annual and monthly estimates of water yield, at the watershed and subwatershed scale, for six scenarios that incorporated two future climates and three LULC types. Our methodology describes the study site, the detailed process of the model setup (including data preparation and model calibration and validation), and the definition of the future scenarios. Within the results, we present the estimates of water yield for the calibrated model (current LULC and historic climate) and for the six different future scenarios. Our points of discussion focus on (1) the validity and efficiency of our calibrated model, (2) the role of precipitation patterns, topography, and LULC on the spatial distribution of water yield for the calibrated model, and (3) key aspects for watershed management derived from the modeled future scenarios.
Study Site
The Tona watershed is situated on the northeast side of Colombia, with its headwaters located within the western limits of the Berlín-Santurbán páramo ecosystem, reaching elevations of up to 3850 m.a.s.l. Despite its small size (192.5 km²), this watershed plays an important role in the provision of water for the metropolitan area of Bucaramanga (1,142,000 inhabitants), currently contributing approximately 55% of the resource handled by the main water company, the Metropolitan Aqueduct of Bucaramanga, AMB (Acueducto Metropolitano de Bucaramanga), with an expanded service expected by the end of 2019 after the construction of the Bucaramanga Reservoir and associated conveyance infrastructure (regulation of an additional 1200 lps). The watershed has three main drainages (Arnania, Carrizal, and Golondrinas), which form the Tona River in the lower part of the basin (see Figure 1).
The environmental and territorial management plan for the watershed (called from here on "POAT" due to the document's initials in Spanish), formulated in 2012 [42], provides reference information for this study area. The watershed has a mean elevation of 2270 m and a rugged relief with an average slope of 55.7%. The main geological features of this watershed are hills of metamorphic and igneous composition and a system of faults in the north-south direction, located on the western end. The watershed has a bimodal precipitation regime with wet periods occurring in March-May and September-November (average monthly values ranging between 130 and 300 mm). The driest conditions occur during the December-February period (average monthly values ranging between 30 and 100 mm). On an annual basis, the average precipitation for the watershed is 1400 mm, but there is a distinctive spatial pattern: the highest precipitation occurs in the north and south-central areas (1900-2000 mm), the lowest precipitation occurs on the eastern side of the watershed (900 mm), corresponding to the area of highest elevations and the páramo ecosystem, and the middle-range values of precipitation occur on the western side of the watershed (1300 mm) (see Figure S1). Seasonal variations of temperature for this tropical watershed are not as noticeable as the spatial variations due to its topography (elevations range from 800 m.a.s.l. on the western side of the watershed to 3850 m.a.s.l. on the eastern side, see Figure S2). According to the elevations, the spatial variation of average annual temperature ranges from 23 °C on the western side to 8 °C on the eastern side of the watershed. Finally, annual average potential evapotranspiration (PET), calculated through different methods in the POAT, has a spatial distribution that ranges from 1300 mm on the western side of the watershed to 840 mm on the eastern side.
Data for the SWAT Model of the Tona Watershed
The information required to feed and run the SWAT model for the Tona watershed resulted from the integration of different sources. Local stakeholders provided site-specific information such as the POAT and hydroclimatic time series for stations located within the watershed. A national environmental institution provided additional historical climate records for stations located within the area of influence of the Tona watershed. Finally, we used time series of downscaled climate models from NASA to run the future scenarios. The local environmental agency, CDMB (Corporación Autónoma Regional para la Defensa de la Meseta de Bucaramanga), provided electronic files (maps and supporting information) of the POAT study, which includes key spatial data: topography, hydrography, LULC, and soils. The development of these maps followed established methodologies and used primary and secondary information [42]. We produced raster files (digital elevation model, DEM; LULC; soils) from the original maps (scale 1:25000) using ArcGIS tools and following the guidelines of Tobler (1987) [43], which recommend users to "divide the denominator of the map scale by 1000 to get the detectable size in meters. The resolution is one half of this amount." The pixel size of our raster maps is 12.5 m by 12.5 m (see Figures S2-S4).
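As a quick illustration of the Tobler (1987) guideline quoted above, the following sketch computes the detectable size and the raster resolution from the map scale; for the 1:25000 maps used here, it reproduces the 12.5 m pixel size.

```python
def raster_resolution(scale_denominator: float) -> float:
    """Tobler's rule: detectable size = scale denominator / 1000 (meters);
    the raster resolution is one half of that amount."""
    detectable_size_m = scale_denominator / 1000.0
    return detectable_size_m / 2.0

# For the 1:25000 source maps: 25000 / 1000 = 25 m detectable size,
# hence 12.5 m pixels, as used in this study.
assert raster_resolution(25000) == 12.5
```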
The soils raster map for the Tona watershed (and its associated properties) resulted from a combination of the information provided by the POAT with information for the area from the ISRIC-WISE soil database [44] and parameters calculated using the SPAW (Soil-Plant-Atmosphere-Water Field and Pond Hydrology) software [45]. The POAT document provided a soils classification by subwatershed (divisions shown in Figure 1), assigning to each of them 3-4 soil horizons with their corresponding depths (mm), textures, and grain size distributions (%). This information, though useful, was not enough to comply with the SWAT database requirements. We incorporated additional necessary properties based on the ISRIC-WISE soil database (soil unit CO56) and calculated values of moist bulk density (Mg m⁻³ or g cm⁻³), available water capacity of the soil layer (mm H₂O mm soil⁻¹), and saturated hydraulic conductivity (mm h⁻¹) with SPAW. This effort resulted in the addition of 14 new categories to the SWAT soil database, corresponding to each of the 14 subwatersheds within the study site (see Figure S3 and Table S1).
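The sketch below illustrates, under stated assumptions, how one subwatershed soil record could be assembled for the SWAT soil database. The field names follow the common SWAT usersoil naming convention (SOL_Z, SOL_BD, SOL_AWC, SOL_K); the record name and horizon values are placeholders, not the actual Tona data.

```python
# Hypothetical assembly of a SWAT usersoil-style record for one of the 14
# subwatershed soil categories. All numbers are illustrative placeholders.
import pandas as pd

horizons = [  # depth to bottom (mm), bulk density (g/cm3), AWC (mm/mm), Ksat (mm/h)
    {"SOL_Z": 300,  "SOL_BD": 1.25, "SOL_AWC": 0.14, "SOL_K": 18.0},
    {"SOL_Z": 700,  "SOL_BD": 1.35, "SOL_AWC": 0.12, "SOL_K": 9.0},
    {"SOL_Z": 1200, "SOL_BD": 1.45, "SOL_AWC": 0.10, "SOL_K": 4.0},
]

record = {"SNAM": "TONA_SW01", "NLAYERS": len(horizons)}  # hypothetical name
for i, h in enumerate(horizons, start=1):
    for field, value in h.items():
        record[f"{field}{i}"] = value  # e.g., SOL_Z1, SOL_BD1, ...

usersoil = pd.DataFrame([record])
print(usersoil.T)  # one row per database field, ready for inspection
```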
The LULC raster map derived from an adaptation of the land use map from the POAT to the crop types available in the SWAT crop database and from previous studies [46,47]. The Tona watershed has various types of LULC categorized, in general, as follows: agricultural land and grazing land (49.9%), forest (33.4% natural forest and 5.1% protection plantation forest), special forms of vegetation (11.4%), and only 0.2% urban land. The 5.1% protection plantation forest corresponds to land purchased by the water company (AMB) in an effort to conserve and protect the watershed. This value has increased in recent years, and AMB plans to keep purchasing land and/or working with locals to promote conservation practices in the watershed. We added a new category to the SWAT crop database to include the tropical páramo vegetation category ("FESP"), following recommendations received from CIAT personnel [48]. Table 1 and Figure S4 present the LULC types for the Tona watershed with their corresponding SWAT codes.
(Table 1, partially recovered: the "Rocky outcrop" category maps to the SWAT code BARR, Barren land. Footnote 1: based on the predominant crop according to the POAT and previous studies [46,47].)
Hydroclimatic data required for site description, model setup, calibration, and validation originated upon request from three agencies: the Colombian Institute of Hydrology, Meteorology and Environmental Studies (IDEAM) [49], CDMB [50], and AMB [51] (see Figure 2 and Table S2). Analysis of the information and discussion with agency personnel allowed us to identify the time window (years 1987-2002) and the stations for which the information was complete and in the best possible quality conditions. We used maximum and minimum daily temperature data from stations Berlín and UIS; daily precipitation from stations Berlín, UIS, Tona-Pueblo, Galvicia, and Brasil; and mean daily flow from station Puente Tona. We could not use data for the Arnania, Carrizal, and Golondrinas hydrometric stations because only after 2011 did AMB start to use technically adequate monitoring strategies at these upstream stations. To be able to use the Puente Tona station data in the calibration process, we added an amount of 1.26 m³ s⁻¹ to the reported values, which is equivalent to the average of the total water withdrawals for the 2003-2016 period. Table S3 shows details on the agency providing the information, the geographical location of the stations, the type of information, and the period of available data.

To run the future climate scenarios, we used daily inputs of precipitation and temperature (maximum and minimum) obtained from downscaled projections for representative concentration pathways (RCP) 4.5 and 8.5 of the MIROC5 model, which has been previously identified as a valid model for Colombia [52,53]. The work of Gómez and Rodríguez [54] revised the performance for both precipitation and temperature of 36 models within Assessment Reports 4 and 5 (AR4 and AR5) for the country, finding that four models of AR4 (ECHAM5, HADCM3, ECHO-G, CGCM2.3.2) and three models of AR5 (HadGEM2-ES, MIROC5, MPI-ESM-LR) reported satisfactory results for both precipitation and temperature. To be consistent with the temporal resolution of the model inputs of the calibration process and, aiming at using models distributed within the Coupled Model Intercomparison Project Phase 5 (CMIP5), we looked for sources of downscaled climate data for the previously mentioned models and used the NASA Earth Exchange Global Daily Downscaled Projections (NEX-GDDP) [55]. A comparison between the downscaled data and the observed climate data for the period 1987-2002 resulted in MIROC5 having the least statistical error (by means of calculation of bias, concordance index, root mean square error, and Pearson correlation) when using monthly values of the parameters of interest (precipitation, maximum and minimum temperature) for the evaluation [54]. Figure S5 shows the location of the Tona watershed with respect to the center of pixels of the NEX-GDDP project raster.
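As a sketch of the model-selection metrics named above, the function below computes bias, root mean square error, Pearson correlation, and a concordance index from paired monthly series. We assume here that the concordance index is Willmott's index of agreement, a common choice that is not confirmed by the source.

```python
# Hedged sketch: evaluation metrics for comparing downscaled GCM output
# against observed monthly values (precipitation or temperature).
import numpy as np

def evaluation_metrics(obs, sim) -> dict:
    """Bias, RMSE, Pearson r, and Willmott's index of agreement (assumed)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    bias = float(np.mean(sim - obs))
    rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))
    r = float(np.corrcoef(obs, sim)[0, 1])
    # Willmott (1981) index of agreement: 1 indicates perfect agreement.
    d = 1.0 - np.sum((obs - sim) ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2
    )
    return {"bias": bias, "rmse": rmse, "pearson_r": r, "willmott_d": float(d)}
```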
SWAT Model Setup for the Tona Watershed
We used GIS tools to prepare all the spatial information and reconfigured the time series information to obtain SWAT-ready files. It was important to prepare each file in the required format and, for the spatial data, to use the same spatial reference (MAGNA Colombia Bogotá) as well as the same pixel size for the raster files (12.5 × 12.5 m). All the input files are available in the Figshare repository "SWAT Modeling for the Tona Watershed" [56]. Table 2 shows the general description of the different inputs for the model. The file names shown in the table correspond to the files stored in the Figshare repository. We used the ArcSWAT interface [57] for the model setup. Following the recommended procedure, we generated 14 subwatersheds with 183 hydrologic response units (after HRU refinement). We added elevation bands to subwatersheds having elevations greater than 1000 m.a.s.l. We used observed daily precipitation and temperature data and installed a world weather database to help generate time series of relative humidity, solar radiation, and wind speed. We generated all of the model's tables and edited some of them in an attempt to obtain better model inputs before running it for the first time: (1) change of the potential evapotranspiration (PET) method to Hargreaves; this change resulted in a global PET similar to values reported by the Colombian Climatological Atlas [58] (average annual values between 1000 and 1400 mm year⁻¹), direct calculations carried out in the POAT through the use of different methods, and mean annual values reported by MODIS products [59]; (2) modification of the baseflow parameters (ALPHA_BF and ALPHA_BF_D) at the subwatershed level based on the available discharge data and the use of a baseflow filter developed by Arnold et al. [60] and available through SWAT's website; and (3) global change of the TLAPS factor to a value of −6 °C/km, which implies a decrease in temperature with increasing elevation for the subwatersheds with elevation bands.
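A minimal sketch of the lapse-rate adjustment implied by TLAPS = −6 °C/km is shown below; the station and band elevations are illustrative, and the linear form follows the usual elevation-band temperature correction.

```python
# Sketch of the temperature adjustment implied by TLAPS = -6 °C/km:
# the station temperature is shifted by the elevation difference between
# the elevation band and the recording gauge. Elevations are illustrative.
TLAPS = -6.0  # °C per km of elevation gain

def band_temperature(t_station_c: float, band_elev_m: float,
                     station_elev_m: float) -> float:
    return t_station_c + TLAPS * (band_elev_m - station_elev_m) / 1000.0

# A gauge at 1000 m reading 20 °C implies about 8.6 °C at a 2900 m band.
print(band_temperature(20.0, 2900.0, 1000.0))
```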
The first model run required a definition of periods for model calibration and validation. Based on data availability, we defined these periods as shown in Table 3. We decided to obtain the model outputs on a monthly interval on the premise that there may be a large uncertainty in the mean daily discharge data, which was collected only twice a day by trained, local individuals. For any window of time, and at a given spatial scale (HRU, subwatershed, or watershed), SWAT calculates water yield (WYLD), the net amount of water that leaves the spatial unit and contributes to streamflow in a reach, as the difference between the total flow generated and the losses and abstractions that occur:

WYLD = Q_surf + Q_lat + Q_gw − tloss − pond abstractions,

where Q_surf is the amount of surface runoff (mm), Q_lat is the amount of lateral flow (mm), Q_gw is the amount of return flow from groundwater (mm), tloss refers to losses from the channel via transmission to the side and bottom (mm), and pond abstractions refer to any water that is lost to natural or artificial impoundments (mm).
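The balance above translates directly into code; the sketch below reproduces it and, as a check, recovers the watershed-scale annual value reported later in the results from its three reported components.

```python
# Direct translation of the water yield balance; inputs in mm over the
# chosen window of time and spatial unit (HRU, subwatershed, or watershed).
def water_yield(q_surf: float, q_lat: float, q_gw: float,
                tloss: float = 0.0, pond_abstractions: float = 0.0) -> float:
    return q_surf + q_lat + q_gw - tloss - pond_abstractions

# Average annual components reported for the calibrated model:
# 10.9 (surface) + 156.0 (lateral) + 328.6 (groundwater) = 495.5 mm/year.
print(water_yield(10.9, 156.0, 328.6))
```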
Model Calibration and Validation
We carried out a combined strategy for the calibration process. First, we attempted to execute a global automatic calibration using the SWAT-CUP tool [61] and finalized with manual calibration using ArcSWAT's Manual Calibration Helper. For the entire process, we compared the monthly model outputs with the average monthly discharge data at the Puente Tona station, which is located at the closing point of the watershed (see Figure 2). The automatic calibration required the identification of a series of key parameters to calibrate and their suggested ranges of values. Through the literature review process, we identified 23 parameters that could be important for our study site (see Table S3). We ran many iterations (each having 500-1000 simulations) trying to narrow down the sensitive model parameters and adequate ranges of values for each of them. To check on the evolution of the calibration process, we monitored two SWAT-CUP indicators (r-factor and p-factor) and three objective functions (Nash-Sutcliffe efficiency (NSE), PBIAS, and R²).
The r-factor and p-factor indicators are related to the 95% prediction uncertainty (95PPU), a probability band that represents the family of model outputs after running a large number of simulations (500-1000) for a given iteration with SWAT-CUP. Ideally, the p-factor, the percentage of observed discharge data that falls within the 95PPU band, should be close to 1; the r-factor, representing the thickness of the 95PPU band, should be a small number, near or less than 1. Following the recommendations of Moriasi et al. [62], we aimed to obtain an NSE greater than 0.5, a PBIAS within the ±25 range, and an R² tending to 1. After many iterations with still unsatisfactory results for the indicators, we moved to a manual calibration, based on the advances reached with the automatic calibration process. One by one, we changed parameter values and attempted to move from global changes to subwatershed changes. We observed how those changes resulted in improvement in the NSE, PBIAS, and R² statistics. The updated model with these final values was used for the validation process for the period 1998-2002.
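For reference, the three objective functions monitored here can be computed as in the following sketch; these are the standard NSE, PBIAS, and R² formulations, with PBIAS positive when the model underestimates, as in Moriasi et al. [62].

```python
# Standard formulations of the three objective functions monitored during
# calibration, computed between observed and simulated monthly discharge.
import numpy as np

def nse(obs, sim) -> float:
    """Nash-Sutcliffe efficiency; 1 indicates a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim) -> float:
    """Percent bias; positive values indicate model underestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100 * np.sum(obs - sim) / np.sum(obs)

def r_squared(obs, sim) -> float:
    """Coefficient of determination via the Pearson correlation."""
    return float(np.corrcoef(obs, sim)[0, 1] ** 2)
```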
Future Scenarios
We used the calibrated model to obtain estimates of water yield for the historical conditions and a set of scenarios that combined future climate and land use changes. The future climate was simulated by the daily time series of precipitation and temperature (maximum and minimum) of the MIROC5 model for RCP 4.5 and 8.5, for the 2006-2050 period. For land use, we focused on the simulation of scenarios from the water manager perspective, which implies that there is a trend toward conservation in this strategic watershed. We proposed that watershed managers could have three different scenarios. Scenario A: LULC as in present conditions. Scenario B: a less strict path to conservation, with an LULC proposal that balances conservation and production; in this scenario, transitory crops switch to permanent crops (BANA to COFF), natural and cultivated pastures switch to natural forms of vegetation and planted forest (PAST, SPAS to MESQ, PINE), mixed lands switch to permanent crops, silvopastoral, and natural forms of vegetation (RYEG, SWRN to COFF, RNGB, MESQ), and current silvopastoral practices switch to natural forms of vegetation (RNGB to MESQ) (see Columns 1-3 of Table 4). Scenario C: a stricter path to conservation/protection that changes most of the current LULC to forest (BANA, PAST, RNGB, RYEG, SPAS, SWRN to FRST) and allows, in some areas, the transition of mixed crops to silvopastoral practices and natural forms of vegetation (RYEG to RNGB, MESQ) (see Columns 4-6 of Table 4). The "Subwatersheds" columns in Table 4 indicate the areas where changes happened for each of the two scenarios. The "Land Use Update" tool of ArcSWAT allowed us to apply the LULC changes for the B and C scenarios. Tables S4-S6 provide more clarity on the extension of these changes by showing the areas (ha) and corresponding percentages of the total area of the watershed for each of the LULC scenarios. Figure S6 provides a visual comparison of the three LULC scenarios, and Table 1 shows the definitions of the LULC codes for the Tona watershed.
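The Scenario B transitions can be expressed compactly as a code-to-code mapping, as in the hedged sketch below; where the text lists several target covers for one source (with the choice depending on the subwatershed, per Table 4), the mapping picks a single illustrative target.

```python
# Illustrative single-target simplification of the Scenario B transitions;
# in the actual study, multi-target choices vary by subwatershed (Table 4)
# and were applied spatially with ArcSWAT's "Land Use Update" tool.
SCENARIO_B = {
    "BANA": "COFF",  # transitory crops -> permanent crops
    "PAST": "MESQ",  # natural pastures -> natural vegetation
    "SPAS": "PINE",  # cultivated pastures -> planted forest
    "RYEG": "RNGB",  # mixed lands -> silvopastoral (one of several targets)
    "SWRN": "MESQ",  # mixed lands -> natural vegetation (one of several)
    "RNGB": "MESQ",  # silvopastoral -> natural vegetation
}

def reclassify(lulc_code: str) -> str:
    """Return the Scenario B cover for a code; unchanged if not remapped."""
    return SCENARIO_B.get(lulc_code, lulc_code)

print(reclassify("PAST"))  # -> MESQ
print(reclassify("FRST"))  # -> FRST (forest is left as-is)
```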
To test our hypothesis that both climate and land use have an impact on the watershed's water yield, we ran our model under six different future scenarios that combined one climate and one LULC scenario. The effective simulation period for the future scenarios was 2010 to 2050, and the outputs of the model runs had a monthly interval (see Table 5).
Results
The initial model run in ArcSWAT for the historical period (1987-2002) produced very poor statistics (NSE = −1.96, PBIAS = −2.54, and R² = 0.09) when comparing monthly model outputs and mean monthly discharge (m³ s⁻¹) at the Puente Tona station. The automatic calibration, using SWAT-CUP, improved them but not to a satisfactory level (NSE = 0.31, PBIAS = 0.4, and R² = 0.32; p-factor = 0.5 and r-factor = 0.58); Figure S7 shows the results for the last iteration executed. The main advantage of the automatic calibration process was that it simultaneously helped to narrow the windows for all the parameter ranges (see Figure S7c). These new ranges were the starting point when moving to a manual calibration process that improved the statistics to acceptable values (NSE = 0.48, PBIAS = 0.18, and R² = 0.48); Table S7 shows the parameter multipliers used for the manual calibration within ArcSWAT. Figure 3 shows the comparison between observed and modeled discharge at the Puente Tona station. Although the calibrated model can simulate the general trend of the observed flows, there are still important differences between the two lines, especially for the simulation of large flows. This needs to be addressed in future studies. Furthermore, the error metrics for the validation period were not satisfactory (NSE = −1.23, PBIAS = 32.87, and R² = 0.07). We present our reasoning for these poor results within the discussion section.
Water Yield for the Calibrated Model
The calibrated model provides estimates of average annual water yield for the historical period. At the watershed scale, the average annual water yield is 495.5 mm year⁻¹, with contributions of 10.9 mm year⁻¹ from surface runoff, 156.0 mm year⁻¹ from lateral flow, and 328.6 mm year⁻¹ from groundwater (shallow and deep aquifer). No transmission losses or pond abstractions are accounted for in this case. The significant contribution of groundwater is consistent with the baseflow analysis, where contributions from groundwater averaged 69%. At the HRU level (see Figure 4), the spatial distribution of water yield shows that HRUs within subwatersheds 12 and 14 (Golondrinas drainage area) hold the largest average annual values (larger than 66 mm year⁻¹), followed by those located in subwatershed 5 (Arnania drainage area) with values in the 33-66 mm year⁻¹ range.

Table 6 presents the average annual water yield (WYLD) and actual ET estimates at the watershed scale for the future scenarios. Future climate RCP 4.5 generates a range of annual water yield by subwatershed between 170 and 900 mm year⁻¹ (see Figure 5), while the range for future climate RCP 8.5 is between 185 and 935 mm year⁻¹ (see Figure 6). When looking at the effects of LULC change for a given climate with respect to baseline conditions, some subwatersheds show none to very small changes (subwatersheds 3 to 11, 13, and 14). The major changes (more than 10% with respect to baseline conditions) occur in the headwaters of the Carrizal and Golondrinas drainage areas (subwatersheds 1, 2, and 12). The actual values of water yield by subwatershed for each of the future scenarios are given in Table S8.
Discussion
The SWAT model calibration work allowed us to simulate the observed discharge at the closing point of the Tona watershed to an acceptable level. The error metrics for the calibration process were in the "satisfactory" category according to Moriasi et al. (2007) [62], and a T-test between observed and calibrated flows (1990-1997) confirmed no significant difference between the means for these two data sets (t = 0.036, df = 157, P = 0.971). The "unsatisfactory" results of the validation period (1999-2002) were confirmed by the T-test (t = 4.833, df = 89, P = 5.58 × 10⁻⁶) between observed and modeled flows. Because the model in its current version is weak at simulating large flows, we attribute the very poor error metrics of the validation period to the coincidence between a short period for this modeling phase (1999-2002) and the occurrence of a La Niña event that started in the June-August trimester of 1998 and ended in the January-March trimester of 2001 [63]. For the level of performance rating of our model, having a longer period for validation would have averaged out this condition of large flows and potentially resulted in better error metrics for the validation phase. The uncertainty of the discharge estimates in our study relates to the modeling effort of Siqueira et al. (2018) [64], which, for continental South America, using gauging station data and a "regions of parameter sets" calibration strategy, had satisfactory fits (NSE > 0.6) between modeled and observed values for 55% of the gauging stations evaluated. Some of the lower fits were located in regions strongly influenced by orography. For the case of the Colombian Andes, the fits were highly variable, with only 45% of the stations having good performance ratings (NSE > 0.6).
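The T-test comparisons reported above follow the standard two-sample form; a minimal sketch with SciPy is given below, where the input file names are hypothetical and the reported statistics are therefore not reproduced.

```python
# Sketch of the two-sample T-test between observed and modeled monthly
# flows. The file names are placeholders; with real series, a high P value
# (as in t = 0.036, P = 0.971 for 1990-1997) indicates no significant
# difference between the means.
import numpy as np
from scipy import stats

observed = np.loadtxt("observed_monthly_flow.txt")  # hypothetical file
modeled = np.loadtxt("modeled_monthly_flow.txt")    # hypothetical file

t_stat, p_value = stats.ttest_ind(observed, modeled)
print(f"t = {t_stat:.3f}, P = {p_value:.3g}")
```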
Increasing the model efficiency will require further work on many aspects, such as improving the quality of model inputs, obtaining better site parameters to include within the model, and using better observed discharge data and a longer time series for the calibration and validation processes. We anticipate that modeling efforts done in other Central and South American countries could face similar challenges. We see that the lack of integration between agencies and the absence of standardized policies for data collection and processing at the national level have a direct influence on the quality and continuity of key climate and river discharge information. In our case, these issues narrowed down our time window for watershed modeling and reduced the number of stations that we could use (hydrometric and pluviometric). It is also indispensable that local agencies understand the value of the information that they collect, the importance of making it available, and the potential of that information for producing new knowledge and facilitating their decision-making processes.
The spatial distribution of water yield at the Tona watershed for the historical conditions (see Figure 4) follows the average annual precipitation pattern defined by the isohyet lines (see Figure S1). Using the Martin Gil precipitation station would probably have increased the water yield for the HRUs in the Arnania drainage area (precipitation data for this station were unavailable for our study period). Given the precipitation conditions defined by the time series available for the study period, we see that LULC played an important role in water yield. For example, HRUs with natural and cultivated pastures (PAST and SPAS) located at the high-elevation subwatersheds (1, 2, 5, and 14) had low water yield with respect to other uses in these subwatersheds. Wooded pastures (RNGB) had a mixed effect: relatively higher water yield for HRUs within subwatershed 5 and lower for HRUs within subwatersheds 1 and 12. Ranges of water yield for planted forest (PINE) were similar to those of forest (FRST) and brush (MESQ) in the headwaters of the Arnania drainage area (subwatershed 5) and similar to MESQ for subwatersheds 1 and 2; in other areas of the watershed, PINE resulted in relatively less water yield than other LULC within each subwatershed. The role of LULC on water yield was harder to identify at the lower elevation areas of the watershed. It is critical to continue working on the understanding of the dependencies between soils, slope, and LULC to better inform watershed management operations within this watershed.
Our proposal of the future scenarios is directly related to a watershed management application: a water company that needs to work on watershed conservation/protection practices on its supplying watershed and that needs to make informed decisions. We did not consider pessimistic LULC scenarios because that is not a possibility for the agency (practically or legally). LULC Scenario A considered the watershed continuing the same type of current activities, while Scenarios B and C looked for conservation/protection in two different ways. Scenario B aimed for a "natural" transition to recovery of the upper areas of the watershed while still allowing for selected forms of agroforestry land at the higher elevations and selected forms of agricultural land at the lower elevations. Scenario C aimed for the typical concept of reforestation throughout the higher elevations of the watershed and allowed for certain types of agroforestry and agricultural land at the lower elevations. The future climate scenarios covered two realistic possibilities: RCP 4.5 and RCP 8.5. Because there is no evidence that a significant reduction of greenhouse gas emissions will occur any time soon, we decided to dismiss the possibility of using optimistic climate scenarios.
Despite the uncertainties associated with the model calibration and the potential errors associated with the selection of a climate model for the watershed [13], running the combined future scenarios ({1} through {6}) helped to identify key aspects for watershed management. First, more water may be entering the watershed as precipitation. Our model reported an increase in mean annual precipitation of 16.2% for RCP 4.5 and 21.9% for RCP 8.5. This result is consistent with the JULES ecosystem-hydrology model of Betts et al. (2018) [12], which shows a tendency toward increased runoff in our study region. Second, it is vital that watershed managers understand the processes that connect the physical properties of the watershed and the activities that happen in it. Our study focused on LULC as a determinant of water yield. We used Scenarios {1} and {2} (corresponding to LULC Scenario A and future climates for RCP 4.5 and 8.5) as a baseline for Scenarios {3} and {5} and Scenarios {4} and {6}, respectively, to evaluate the difference between two strategies for conservation/protection of the watershed. By doing this, we avoided uncertainties related to climate. Our results (see Table 6) found that both LULC strategies (Scenarios B and C) increased water yield, but Scenario B had a larger percent increase with respect to baseline conditions. When examining the actual ET reported by each of the scenarios, we found that, while for LULC Scenario B there was a similar increase in both water yield and actual ET, this was not the case for LULC Scenario C, where the percent increase in actual ET was larger than the percent increase in water yield. These results are consistent with the postulate that reforestation or afforestation has a decreasing effect on runoff, via increased evapotranspiration, at the small catchment scale [7].
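The water-yield/ET comparison above reduces to a simple percent-change calculation; the sketch below illustrates it with hypothetical values in place of the Table 6 results.

```python
# A small sketch of the scenario comparison described above (numbers are
# hypothetical, not the values of Table 6): percent change in water yield
# (WYLD) and actual evapotranspiration (ET) relative to the Scenario A baseline.
baseline = {"WYLD": 450.0, "ET": 610.0}                    # mm/yr, hypothetical
scenarios = {"B": {"WYLD": 500.0, "ET": 672.0},
             "C": {"WYLD": 470.0, "ET": 700.0}}

def pct_change(new, old):
    return 100.0 * (new - old) / old

for name, vals in scenarios.items():
    print(f"Scenario {name}: "
          f"WYLD {pct_change(vals['WYLD'], baseline['WYLD']):+.1f}%, "
          f"ET {pct_change(vals['ET'], baseline['ET']):+.1f}%")
# When the ET increase outpaces the WYLD increase (as for Scenario C in the
# paper), more of the extra precipitation returns to the atmosphere than to runoff.
```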
The spatial variation of water yield by subwatershed under the different scenarios showed that significant (more than 10%) water yield increases occurred only at particular subwatersheds. Further investigation of the LULC changes at these subwatersheds (1, 2, and 12) revealed that the increases were related to changes from natural and cultivated pastures (PAST and SPAS) to the proposed LULC (brush or forest) at the headwaters of the Carrizal and Golondrinas drainage areas. Similar changes at lower elevation subwatersheds did not produce significant changes in water yield. These results suggest that it is possible for water managers at the Tona watershed (and any other watershed where a similar exercise is carried out) to determine the types of LULC changes and the locations within the study area where specific changes would result in a desired condition. This is consistent with the idea that watershed management practices may be more successful when supported with modeling tools [65,66].
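The screening logic described above amounts to flagging subwatersheds whose scenario yield exceeds the baseline by more than 10%; a minimal sketch with hypothetical values follows.

```python
# Sketch of the screening step described above: flag subwatersheds whose
# scenario water yield exceeds the baseline by more than 10%. All values
# are hypothetical illustrations, not model output.
baseline_wyld = {1: 520.0, 2: 610.0, 5: 700.0, 12: 480.0, 13: 390.0}   # mm/yr
scenario_wyld = {1: 585.0, 2: 690.0, 5: 712.0, 12: 545.0, 13: 393.0}

flagged = {sw: round(100.0 * (scenario_wyld[sw] - baseline_wyld[sw]) / baseline_wyld[sw], 1)
           for sw in baseline_wyld
           if scenario_wyld[sw] / baseline_wyld[sw] > 1.10}
print(flagged)   # e.g. {1: 12.5, 2: 13.1, 12: 13.5} -> candidate areas for LULC change
```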
Conclusions
The results of this paper confirm our initial hypothesis that not only climate but also land use change determines water yield within this Andean watershed. Using a semi-distributed hydrologic model, we were able to identify how these two factors could affect water yield at the watershed and subwatershed scale. Even though climate is not a factor that can be directly controlled by water managers, LULC is. Our results show that different approaches to LULC changes for a given climate generated different estimates of water yield. For both factors, however, in terms of watershed management decisions, it is evident that there needs to be expert input so that the most adequate climate model is used and there is an advanced understanding of the role that different types of LULC play on the dynamics of the hydrologic cycle of the watershed. The use of a physical hydrologic model such as SWAT can help significantly with this task.
Even with the difficulties that we had in the calibration process of the Tona watershed and the uncertainties related to the future climate data, our model identified, at the HRU level, the areas within the watershed with the highest and lowest water yield. For the future combined scenarios, we singled out subwatersheds where land use changes result in actual increases of the local water yield: subwatersheds 1 and 2 for the Carrizal drainage area and subwatershed 12 for the Golondrinas drainage area, all of which are located in the headwaters. These results are important because the large size of these subwatersheds (with respect to the total area of the Tona watershed) implies that they will have an important contribution to the main channel's discharge. Our results also suggest the necessity to carefully evaluate, through modeling, the effect of certain land use changes on water yield. In our case, the future LULC scenarios had a "conservation" objective with two different strategies. In the end, both increased the water yield, but one (Scenario B) was more effective because there were fewer losses through evapotranspiration.
Figure 1. Geographical location of the Tona watershed. Right panels show in red the location of the watershed within the Country and the State. Left area shows the location of the watershed, the Tona river's main tributaries, and the location of the new reservoir, at the lowest elevations of the watershed. Texts indicate the location of two of the cities that benefit from the Tona river's water (Bucaramanga and Floridablanca). The numbers and different colors represent a subwatershed division for the study area. The coordinate system corresponds to the local projection for the site (MAGNA Colombia Bogotá-EPSG: 3116).
Figure 2. Location of hydrometric and climatological stations in the area of influence of the Tona watershed. Black circles represent the location of the climatological stations and blue triangles represent the location of hydrometric stations. Table S3 shows details on the agency providing the information, geographical location of the stations, type of information, and period of available data.
3.2) and three models of AR5 (HadGEM2-ES, MIROC5, MPI-ESM-LR) reported satisfactory results for both precipitation and temperature. To be consistent with the temporal resolution of the model inputs of the calibration process and, aiming at using models distributed within the Coupled Model Intercomparison Project Phase 5 (CMIP5), we looked for sources of downscaled climate data for the previously mentioned models. NASA Earth Exchange Global Daily Downscaled Projections (NEX-GDDP) [55] offers bias-corrected daily scenarios of 21 models for RCP 4.5 and 8.5, with a resolution of 0.25 degrees (25 × 25 km), with a time series of maximum and minimum temperature and precipitation for 1950–2005 for the retrospective run and 2006–2099 for the prospective run. The models available within this platform for our specific needs are MIROC5 and MPI-ESM-LR. The comparison between the time series of the retrospective runs of each of the two climate models (MIROC5 and MPI-ESM-LR)
Figure 3. Observed (blue) vs. modeled (red) monthly discharge data at the Puente Tona station for the calibrated model.
Figure 5. Average annual water yield by subwatershed under future climate RCP 4.5: (a) water yield for the calibrated model, (b) water yield for Scenario {1}, (c) water yield for Scenario {3}, and (d) water yield for Scenario {5}.
Figure 6. Average annual water yield by subwatershed under future climate RCP 8.5: (a) water yield for the calibrated model, (b) water yield for Scenario {2}, (c) water yield for Scenario {4}, and (d) water yield for Scenario {6}.
Supplementary Materials:
The following are available online at http://www.mdpi.com/2073-4441/11/2/285/s1: Figure S1. Isohyet lines of average annual precipitation for the Tona watershed, Figure S2. Digital elevation model of the Tona watershed, Figure S3. Soils map of the Tona watershed, Figure S4. Land use/land cover (LULC) map of the Tona watershed, Figure S5. Location of the Tona watershed with respect to centers of pixels of the NEX-GDDP project, Figure S6. Land use scenarios A (top), B (center), and C (bottom) for the study, Figure S7. Results of the last iteration for the automatic calibration of the Tona watershed (Puente Tona Station) using SWAT-CUP; Table S1. Fourteen new soil categories added to the SWAT soils database, Table S2. Stations with climatic and hydrological data available for the Tona watershed, Table S3. Initial calibration parameters used for SWAT-CUP, Table S4. Areas (ha) by subwatershed for LULC Scenario A, Table S5. Areas (ha) by subwatershed for LULC Scenario A, Table S6. Areas (ha) by subwatershed for LULC Scenario A, Table S7. Results of the last iteration for the automatic calibration of the Tona watershed using SWAT-CUP, Table S8. Water yield (WYLD) by subwatershed for the future scenarios.
Table 1. Land use/land cover (LULC) for the Tona watershed.
Table 2. Inputs for the Soil and Water Assessment Tool (SWAT) model of the Tona watershed.
Table 3. Time windows for model calibration and validation.
Table 4. Land use/land cover (LULC) changes for future scenarios B and C.
Table 5. Description of the future scenarios.
Clinicopathological and prognostic features of Borrmann type IV gastric cancer versus other Borrmann types: A unique role of signet ring cell carcinoma
Background: Evidence specifically comparing the clinicopathology of Borrmann type IV (B-IV) gastric cancer with that of other Borrmann types is insufficient. Methods: A total of 3130 patients with advanced gastric cancer who underwent gastrectomy from January 2001 to September 2017 were enrolled in the analysis. Logistic regression and survival analysis methodology were used to investigate factors associated with peritoneal metastasis and overall survival (OS). Results: Of the total cohort, 264 (8.43%) patients were B-IV type, 1752 (55.97%) were small-size other Borrmann types, and 1114 (35.59%) were large-size other Borrmann types. Signet ring cell carcinoma (SRC) was more common in B-IV types than in other Borrmann types (33.71% vs 11.42% vs 12.66%, P < 0.001). In B-IV gastric cancers, SRC was significantly associated with peritoneal metastasis (HR = 1.898, 95% CI = 1.112 ~ 3.241, P = 0.019) and poorer OS (HR = 1.492, 95% CI = 1.088 ~ 2.045, P = 0.013) in multivariable analysis. Furthermore, stratified analysis revealed that SRC had worse survival than adenocarcinoma in the B-IV subgroups, with locally advanced stages (stages II ~ III) or negative surgical margins (all P < 0.05). In contrast, SRC failed to be significantly associated with peritoneal metastasis and poor OS in other Borrmann types (all P > 0.05). Conclusion: SRC was more common in B-IV gastric cancer than in other Borrmann types. It was significantly associated with peritoneal metastasis and poorer OS in the B-IV type but not in other Borrmann types. As a unique prognostic factor for B-IV gastric cancer, SRC might help evaluate risk stratification and optimize treatment for this entity, especially for patients with locally advanced stages or R0 resection.
42.6% of the global incidence. Even worse, over 80% of gastric cancer patients in China are diagnosed at advanced stages at the first visit. [2][3][4] The Borrmann classification (types I through IV) is a classification accepted worldwide by many surgeons to describe the macromorphology of advanced gastric cancer [pT2~4, N-any, M0~1; advanced gastric cancer (AGC)]. Representing a minor entity, Borrmann type IV (B-IV) gastric cancer is defined as a lesion diffusely infiltrating the gastric wall without ulceration or distinct elevation. It is usually characterized by adverse pathological features such as advanced stages and frequent peritoneal metastasis. Although surgical resection is still the primary method for treating this type of gastric cancer, the optimal therapy remains disputed due to the extremely poor prognosis, with a median survival time of 5~17 months. [5][6][7] Owing to the low incidence of B-IV gastric cancer, only a few studies focusing on this neoplasia have been published so far. Evidence specifically comparing the clinicopathological and prognostic features of B-IV gastric cancer with other Borrmann types is still insufficient and mostly limited to small cohorts. [5,8] It is clinically significant to uncover risk factors predicting adverse clinicopathological features and improve outcomes.
Notably, gastric signet ring cell carcinoma (SRC), a relatively uncommon histologic phenotype, is observed more frequently in B-IV gastric cancer than in other Borrmann types. [9][10][11] Currently, no consensus on the prognostic role of gastric SRC has been reached. In previous studies, [9,[12][13][14] SRC was considered to have adverse biological behaviors and oncologic outcomes compared with gastric adenocarcinoma. However, recent studies have begun to question this idea. [10,15,16] In addition, the prevalence and clinical significance of SRC in B-IV gastric cancer have not been well discussed. It remains unknown whether SRC is independently associated with clinicopathological and prognostic features in B-IV gastric cancer.
Thus, based on a large cohort of AGC patients who underwent gastrectomy, this study aimed to investigate the differences in the clinicopathology and prognostic role of SRC between B-IV gastric cancer and other Borrmann types.
Definition
B-IV gastric cancer was defined macroscopically as partial or complete thickening and rigidity of the gastric wall observed on both preoperative endoscopy and intraoperative exploration that circumferentially involves at least one-third of the stomach and frequently involves the stomach from the fundus to the pylorus. [17,18] Histologically, gastric cancer can be classified as papillary, tubular, mucinous, signet ring cell, and undifferentiated adenocarcinoma, according to the World Health Organization (WHO) classification [19] and Japanese classification systems of gastric cancer. The diagnosis of SRC is based on the presence of isolated carcinoma cells containing a large amount of mucin that pushes the nucleus to the cell periphery. SRC is regarded as any (undifferentiated) gastric adenocarcinoma with a predominant (>50%) SRC component. Adenocarcinoma can be categorized into well-, moderate-, and poorly differentiated adenocarcinoma according to the WHO classification. In our hospital, a three-level system of review of pathologic diagnosis is implemented. Residents are responsible for the initial examination of the slides, the pathologist and deputy pathologist are responsible for the review of the slides and signature of the pathologic diagnosis, and the department director is responsible for the comprehensive pathologic diagnosis and review. Suspected cases are discussed, consulted within the department, issued by the chief pathologist (more than 5 years of experience in pathologic examination and diagnosis), and reviewed by a superior physician. Tumor size and Borrmann classification of AGC are assessed according to the Japanese classification of gastric cancer. The dissected stomach specimen is fixed on a flat board, and the maximum tumor diameter is determined as tumor size.
Patient selection
After institutional review board approval, we retrospectively reviewed the data of 4827 patients with AGC treated between January 2001 and September 2017. After excluding patients who had incomplete medical data (n = 972), palliative treatment (n = 582), or a mixed histology with only a minor (<50%) SRC component or other histologic diagnoses (n = 143), a total of 3130 patients were finally enrolled in this study [Supplementary Figure 1].
Data regarding patient demographics, risk factors, family history of gastric cancer (yes/no in first-degree relatives), clinicopathological characteristics, and multimodality therapies were retrospectively collected. Pathologic classification and tumor, nodes, and metastases (TNM) staging following surgical resection are defined by the American Joint Committee on Cancer (AJCC) 7th edition. [20] The extent of lymphadenectomy was recorded based on the Japanese Gastric Cancer Association classification system. In addition, key data of this study have been uploaded to the Research Data Deposit (RDD).
Treatment
Patients with a lesion located in the proximal or upper middle portion of the stomach underwent total or proximal gastrectomy, whereas those with a gastric carcinoma located in the distal or lower middle portion underwent distal gastrectomy. Essentially, uniform techniques were used for each of the two procedures, and informed consent for surgical treatment was obtained from all patients. Standard radical gastrectomy with D2 lymph node dissection was performed in patients with no distant metastasis or those with metastasis who received complete or partial conversion chemotherapy. Palliative gastrectomy was conducted in selected patients with resectable metastases or severe symptoms such as bleeding, perforation, and obstruction, according to the National Comprehensive Cancer Network (NCCN) guidelines. [21] Patients with resectable metastases include those with solitary liver metastasis, only positive cytology, metastasis of the paraaortic lymph nodes of no. 16a2 and/or 16b1, P1~2 peritoneal seeding, or only Krukenberg tumors. [22] Metastasectomy was determined by the operator according to the performance status of the patient and the feasibility of resection.
Surgery was performed 4 ~ 8 weeks after the end of neoadjuvant chemotherapy (when administered). If the general condition of the patient was suitable for chemotherapy, then adjuvant chemotherapy was started 4 to 6 weeks post-operatively.
Follow-up and statistical methods
All patients included in the study were regularly followed up with a standardized protocol. [23] Comparisons of clinicopathological features between groups were performed using Chi-square tests for categorical variables and Kruskal-Wallis tests for continuous variables. Logistic regression was used to evaluate risk factors for peritoneal metastasis. The cause of death was determined by the treating physicians and corroborated by a chart review, telephone interview, or a death certificate. Using the Kaplan-Meier method, overall survival (OS) was estimated from the time of surgery to the event. An analysis of the difference between the survival rates was performed using the log-rank test. Cox proportional hazard models were used to evaluate risk factors for OS. Only those variables that were univariately significant were entered into the multivariate Cox analysis. All reported P values were two-sided, and statistical significance was set at P < 0.05. Statistical analysis was performed using PASW version 18.0 statistical software (IBM Corp., Somers, NY, USA).
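As an illustration of this workflow, the sketch below reproduces the Kaplan-Meier, log-rank, and Cox steps with the Python `lifelines` package (the authors used PASW/SPSS) on entirely hypothetical records.

```python
# Hypothetical sketch of the survival analyses described above, using the
# `lifelines` package instead of the authors' PASW/SPSS workflow.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months": [13, 24, 7, 36, 18, 5, 30, 11, 9, 27],   # OS from surgery (months)
    "death":  [1, 1, 1, 0, 1, 1, 0, 1, 1, 0],          # 1 = death, 0 = censored
    "src":    [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],          # 1 = signet ring cell carcinoma
})

km = KaplanMeierFitter().fit(df["months"], df["death"])   # Kaplan-Meier OS estimate
print(f"median OS = {km.median_survival_time_} months")

srcs, others = df[df.src == 1], df[df.src == 0]
lr = logrank_test(srcs["months"], others["months"],
                  srcs["death"], others["death"])         # log-rank comparison
print(f"log-rank P = {lr.p_value:.3f}")

cph = CoxPHFitter().fit(df, duration_col="months", event_col="death")
cph.print_summary()                                       # HRs with 95% CIs
```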
Clinicopathological characteristics of B-IV versus other Borrmann types of gastric cancer
In this study, the median and mean tumor size of other Borrmann types of AGC were 5 cm (IQR = 3.50 ~ 6.00) and 5.05 cm (SD = 2.32), respectively. By taking the median tumor size (5 cm) as the standard, our study defined tumors less than 5 cm in size as small-size other Borrmann types and those 5 cm or more in size as large-size other Borrmann types in AGC. Of the total cohort, 264 (8.43%) patients were B-IV, 1752 (55.97%) were small-size (<5 cm) other Borrmann types, and 1114 (35.59%) were large-size (≥5 cm) other Borrmann types. Table 1 summarizes the clinicopathological features of patients with the B-IV type versus other Borrmann types. As shown, patients with B-IV were more frequently female (P < 0.001), younger (P < 0.001), more often had a family history of gastric cancer (P < 0.001), had more aggressive pathological features (pT (P < 0.001), pN (P < 0.001), distant metastasis (P < 0.001), pTNM stage (P < 0.001), peritoneal metastasis (P < 0.001), tumor grade (P < 0.001), and surgical margin (P < 0.001)), more frequently underwent combined resection (P < 0.001), and had a shorter follow-up time (P < 0.001). In addition, we observed that SRC was obviously more common in the B-IV type than in the other Borrmann types (33.71% vs 11.42% vs 12.66%, P < 0.001).
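The grouping rule above is straightforward to express in code; the following sketch (hypothetical records; pandas assumed) applies the 5 cm median split.

```python
# Sketch (hypothetical records) of the grouping rule described above: B-IV
# cases form their own group, while other Borrmann types are split at the
# median tumor size of 5 cm into small-size (<5 cm) and large-size groups.
import pandas as pd

df = pd.DataFrame({
    "borrmann": ["IV", "III", "II", "III", "IV", "I", "II"],
    "size_cm":  [None, 4.0, 6.5, 5.0, None, 3.0, 7.2],   # size not used for B-IV
})

def assign_group(row):
    if row["borrmann"] == "IV":
        return "B-IV"
    return "small-size other" if row["size_cm"] < 5.0 else "large-size other"

df["group"] = df.apply(assign_group, axis=1)
print(df["group"].value_counts())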
Then, we focused on the clinicopathological characteristics of the patients with B-IV gastric cancer [Table 2]. In multivariable analysis, SRC was significantly associated with peritoneal metastasis in the B-IV gastric cancer subgroup. In other Borrmann types of gastric cancer, regardless of small-size or large-size types, pT4, lymph node metastasis, and neoadjuvant chemotherapy were significantly associated with peritoneal metastasis (all P < 0.05).
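A hedged sketch of this logistic regression step (simulated data; the `statsmodels` package assumed rather than the authors' PASW software) shows how effect estimates and 95% CIs of this kind are obtained.

```python
# Hypothetical sketch of a logistic regression for peritoneal metastasis
# with SRC and pT4 as predictors, reporting odds ratios with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({"src": rng.integers(0, 2, n),
                   "pt4": rng.integers(0, 2, n)})
lin = -1.2 + 0.64 * df["src"] + 0.9 * df["pt4"]          # true log-odds (exp(0.64) ~ 1.9)
df["perit_met"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["src", "pt4"]])
fit = sm.Logit(df["perit_met"], X).fit(disp=0)
print(np.exp(fit.params))       # odds ratios
print(np.exp(fit.conf_int()))   # 95% confidence intervals
```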
Prognosis of B-IV gastric cancer
Concerning the prognosis of patients with B-IV gastric cancer, the median follow-up of SRC versus adenocarcinoma was not significantly different (13 vs 14 months, P = 0.529) [Table 2]. Figure 1 displays the OS curves of the B-IV gastric cancer subgroups. As shown, the OS rates at 2 years and 3 years in patients with B-IV gastric cancer were 35.8% and 17.3%, respectively [Figure 1a]. The OS rates at 2 years and 3 years in patients with SRC were significantly lower than those in patients with adenocarcinoma (24.4% and 7.7% vs 42.4% and 24.1%, respectively; P = 0.008) [Figure 1b]. Then, univariate and multivariate Cox regression analyses were conducted to identify the predictors of OS in B-IV gastric cancer. The results showed that patients with SRC had worse survival than those with adenocarcinoma in the B-IV gastric cancer subgroups with stage II~III disease (P = 0.045, Figure 2a), stage III disease (P = 0.043, Figure 2c), negative peritoneal metastasis (P = 0.039, Figure 3a), and negative surgical margins (P = 0.004, Figure 3c). However, the survival difference was not significant among the B-IV subgroups with stage IV disease (P = 0.151, Figure 2b), peritoneal metastasis (P = 0.447, Figure 3b), and positive surgical margins (P = 0.811, Figure 3d). The Kaplan-Meier analysis was not conducted in patients with stage II disease because there were only 20 cases. We also divided all patients into four subgroups based on histologic type and metastasis status. Interestingly, the results demonstrated that there was no survival difference between SRC patients without metastasis (M0 SRC) and adenocarcinoma patients with metastasis (M1 adenocarcinoma) [Figure 2d].
Prognosis of other Borrmann types of gastric cancer
In other Borrmann types, the Kaplan-Meier analysis demonstrated that patients with SRC had worse survival than those with adenocarcinoma among patients with large-size types (P = 0.025) but not among those with small-size types (P = 0.209) [Figure 4]. In the multivariate analysis, however, SRC (HR = 1.256, 95% CI = 0.97 ~ 1.626, P = 0.084) failed to be an independent predictor of adverse OS for large-size other Borrmann types [Supplementary Table 1].
Discussion
Currently, owing to the rarity of cases, there is an insufficient number of published studies exclusively evaluating the clinicopathological and prognostic features of B-IV types versus other Borrmann types in AGC. Even fewer have investigated the prognostic role of SRC in the B-IV gastric cancer subgroup, despite the epidemiologic prevalence of SRC observed in this entity. [9][10][11] Thus, we conducted this study based on a large cohort from a high-volume cancer center in China. Our study confirmed that B-IV gastric cancer was more frequently diagnosed with SRC and behaved more aggressively than other Borrmann types. More importantly, this might be the first study to suggest that SRC is an independent risk factor for peritoneal metastasis and poorer OS in B-IV gastric cancer entities but not in other Borrmann types. The prognosis of SRC-type B-IV gastric cancer remains poor, even in patients with locally advanced stages or R0 resection (R0 resection is defined as gastric resection with a negative surgical margin). Thus, patients with locally advanced SRC-type B-IV gastric cancer cannot simply be regarded as ordinary M0 patients when selecting subsequent treatment. This unique prognostic role of SRC in B-IV gastric cancer could be valuable in evaluating risk stratification and optimizing treatment for this entity, especially for patients in locally advanced stages or those who underwent R0 resection.
According to previous studies, [9][10][11] B-IV gastric cancer is characterized by poorly differentiated tumor cells that diffusely infiltrate the gastric wall. SRC is an especially typical histologic component, observed more frequently in this malignancy than in other Borrmann types. In our cohort, the proportion of SRC in the B-IV type versus that in small-size or large-size other Borrmann types was 33.71% vs 11.42% vs 12.66%, respectively, which clearly supports this phenomenon. In addition, B-IV gastric cancer represents a minor Borrmann type, accounting for approximately 10~20% of all gastric cancers. [5,24] Consistent with this rate, the 531 B-IV gastric cancers in our primary cohort constituted approximately 11% of all gastric cancer patients who visited our institution during the same period.
In terms of clinicopathological features, B-IV gastric cancer is usually detected at advanced stages or even end stages despite substantial advances in the diagnosis of gastric cancer and has unfavorable pathological features, including serosal or even local extension, high tumor grade, and lymph node and peritoneal metastasis. [7,25,26] In addition, patients with B-IV gastric cancer are younger in age and have a higher female/male ratio than those with other Borrmann types. [7,25,26] Our data also reflected this phenomenon (44% vs 30% vs 32%, P < 0.001; Table 1). The potential mechanism leading to this gender heterogeneity remains unclear. We hypothesize that the development of B-IV gastric cancer may be influenced by sex hormones. The estrogen receptor positivity rate was found to be higher in young females and in patients with poorly differentiated gastric cancer. [27,28] However, some series reported that SRC shared a similar tendency in terms of clinicopathological features. According to a large cohort study from Taghavi et al., [16] SRC was more likely to be stage pT3~4 (45.8% vs 33.3%), have lymph node spread (59.7% vs 51.8%), and have distant metastases (40.2% vs 37.6%) (all P < 0.001) than gastric adenocarcinoma. The results from another large cohort study [14] reported similar findings. In the B-IV gastric cancer subgroup, however, our study only observed a distinct presentation difference in tumor grade and peritoneal metastasis between SRC and adenocarcinoma [Table 2]. This clinicopathological variation indicates that the aggressiveness of B-IV types over other Borrmann types may be attributed to different biologic natures.
The clinicopathological differences between SRC and adenocarcinoma may support an emerging concept that SRC carcinoma might actually be a completely distinct phenotype. [29,30] The Cancer Genome Atlas Research Network defined four major genomic gastric cancer subtypes and supported the above concept from the perspective of molecular and genomic analysis. That is, genomically stable tumors, which were enriched for the diffuse histologic variant and mutations of Ras Homolog Family Member A (RHOA) or fusions involving Ras homolog gene family (RHO) family guanosine triphosphatase (GTPase)-activating proteins, shared some characteristics with SRC. [31] However, the underlying mechanism of why SRC is more aggressive than adenocarcinoma is still poorly understood; thus, more investigations on genetic alterations and tumor cell behavior are necessary in the future.
Among the patterns of metastasis, peritoneal seeding is the most common pattern and cause of death in patients with gastric cancer. In this study, 137 (51.89%) patients with B-IV gastric cancer had distant metastasis, including 120 (45.45%) with peritoneal dissemination. Various treatment strategies have been explored to improve the survival of gastric cancer patients with peritoneal metastasis. The Cytoreductive surgery vs cytoreductive surgery and hyperthermic intraperitoneal therapy (CYTO-CHIP) study from Bonnot et al. [32] suggested that hyperthermic intraperitoneal chemotherapy (HIPEC) seemed to be a valuable option for treating resectable SRC patients with peritoneal metastasis. In this setting, in our study, SRC as an independent risk factor for peritoneal metastasis could represent an indicator for the use of HIPEC to improve outcomes of B-IV gastric cancer. We plan to test this speculation in future research and hope to present more unique results following this work.
Regarding the prognosis of B-IV gastric cancer, it has long been thought to have the worst prognosis among all Borrmann types, without regard to the extent or type of resection. [5,6] Although the outcomes in these studies were discouraging, several prognostic factors have been identified to select patients who would benefit from surgical treatment. As the most crucial independent prognostic risk factors for B-IV gastric cancer, TNM stage and histologic type could help select candidates for gastrectomy in this subgroup. Thus, it is noteworthy that, herein, a positive surgical margin was an adverse predictor for OS, albeit not statistically significant in the multivariate analysis (HR = 1.502, P = 0.053, Table 4). This result implied the prognostic benefits of R0 resection in B-IV gastric cancer. [33,34] In addition, evidence suggests that SRC may have inherent chemoresistance [35,36]; thus, specifically designed multimodality treatment protocols should be tested for B-IV gastric SRC.
Unlike for B-IV gastric cancer, clinicians have yet to reach a consensus on the prognosis of SRC versus adenocarcinoma at different stages. Previous studies have reported an interesting prognostic difference between SRCs and non-SRCs in subgroups of early and advanced gastric cancer. Chiu et al. [37] reported that the 5-year survival rates for patients with early SRC were better than those for patients with early non-SRC (96.1% vs 89.6%, P = 0.01). Consequently, less invasive strategies may be acceptable in selected patients with early gastric SRC. Another study by Kao et al. [15] confirmed this conclusion. In contrast, a study from Huh et al. [38] found that a mixed SRC component was an independent risk factor for lymph node metastasis, with a tendency toward lower survival rates in early gastric cancer. For AGC, the prognosis of SRC vs non-SRC is debated and dependent on the stage of cancer. Kao et al. [15] assessed the outcomes of 570 SRC cases in 2152 AGC patients. Their results demonstrated that the 5-year OS rates of patients with SRC and non-SRC were 32.1% and 37.9%, respectively (P = 0.041). SRC was an independent predictor of poorer OS (P = 0.017). Studies [9] based on Asian populations observed similar results. In contrast, the large registry cohort study from Taghavi et al. [16] found no significant median survival difference between SRC and adenocarcinoma (13 vs 14 months; P = 0.073). Multivariable analyses demonstrated that SRC was not associated with mortality. The study by Zhang et al. [10] reported that, when stage-matched, SRC had a survival similar to that of non-SRC patients. Interestingly, our study indicated that SRC could be a unique prognostic factor for B-IV gastric cancer. The survival outcome of M0 SRC was as poor as that of M1 adenocarcinoma in this subgroup [Figure 2d]. Overall, great controversy exists among the present studies, and further validation of the prognostic role of SRC in a larger cohort is warranted.
A strength of our study is its large cohort. Our study is probably one of the largest cohort studies investigating this topic. More importantly, this might be the first study to report the unique prognostic role of SRC in the B-IV gastric cancer subgroup, which we believe could be helpful in the management of this entity. However, several limitations of this study merit discussion. First, our study represents a retrospective analysis of a single institutional cohort, resulting in inherent biases. Second, the assessed proportion of histologic subtypes may differ among pathologists, which could introduce variability. For this reason, we enrolled only those cases that were clearly distinguished as adenocarcinoma or SRC. Third, some risk factors, such as Helicobacter pylori, diet, and Epstein-Barr virus infections, were not obligatory in the medical record, consequently leading to missing data in many patients. Finally, only a minority of our patients with B-IV gastric cancer received neoadjuvant chemotherapy, which might lead to a poorer prognosis for this cancer type. However, the response to chemotherapy in SRC is still under debate; this issue should be a research focus in the future.
In conclusion, SRC was more commonly observed in patients with B-IV gastric cancer than in those with other Borrmann types of gastric cancer. It was an independent risk factor for peritoneal metastasis and poorer OS in B-IV gastric cancer but not in other Borrmann types. The prognosis of SRC-type B-IV gastric cancer remains extremely poor, even for patients with only locally advanced disease and those who have undergone R0 resection. Thus, SRC serves as a unique prognostic factor for B-IV gastric cancer and might help evaluate risk stratification and optimize treatment for this disease entity.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Resolving the interfaith conflict over burial preparation: Who has the right to bury the dead?
The body of the deceased is not an object but still a person. It deserves to be treated respectfully, and often this respect is expressed through religious rites. However, problems arise when the family of the deceased follow different faiths and disagree over the burial rite. Such a scenario is examined in this study, where the immediate family of the deceased professed different faiths and could not agree on the burial rites to be performed. This research is intended to examine the issue of burial rights as a reason for interfaith conflict. Who has the right to prepare the body of the deceased for the burial? Which rites should be followed? Using theological and legal approaches, we found that the conflict was caused by (1) belief in an afterlife and (2) the fact that law, culture and religion give the right to decide the burial or disposal of the body to living parties. The legitimate way to determine how to treat the body of the deceased, and according to which religious rite the burial is to be performed, is by confirming the religious identity of the deceased as stated in the legal document. In other words, the burial rites to be followed by the family of the deceased depend on the proven religious identity of the deceased, whether Christian or Muslim. Contribution: This article attempts to justify the right of the deceased to be buried according to their personal faith, which may not be identical with the faith practised and professed by their family. In cases of religious conversion, both parties claim their right to bury the deceased according to their own religious rites, and it is often difficult for both sides to reach an agreement. Nevertheless, this conflict can be resolved peacefully if the rights of the deceased are respected by the bereaved.
Introduction
Anything that has no soul is called an inanimate object. We have different ways of treating animate and inanimate objects. Human behaviour is characterised by treating others in a respectful and affectionate manner, unlike lifeless objects. This, however, does not apply to a certain kind of object, which is the body of the deceased (Fahlander & Oestigaard 2008; Mathijssen 2021; Schwarz et al. 2021).
Especially for religious people it is important that the body of the deceased is treated well and given due attention and respect by those entrusted with its care. Each religious community has their own burial rites. These ritual practices are, for most religious communities, not merely an expression of their culture but intimately linked to their faith and belief in the afterlife.
However, problems may arise and develop into acute social conflicts when the body of the deceased is not put to rest according to the religious teachings of the bereaved family members. In the last decade, there occurred two cases of interfaith conflict in Indonesia and the United States of America over the issue of claiming the body of the deceased. Both individuals concerned had disputed religious identities where their families consisted of Muslims and Christian Protestants, both groups claiming their right to bury the deceased according to their respective faith.
What made this problematic situation worse is that this disagreement over the burial not only involved the immediate family but soon extended to their religious community, each side defending their right to bury the deceased in question. Reviewing these two cases, the question of 'rightful ownership' arises. Does the deceased have the sole right to decide according to which faith he wants to be buried, depending on his professed religious identity while he was still alive? Or does the family have the right to claim his body and decide on his behalf?
And what if two parties lay a claim on his body, one Muslim and the other Christian? If such interfaith conflicts arise, how are they properly resolved? And what can be done to prevent the occurrence of similar conflicts in the future?
The issue of ownership and burial of the dead body has been studied extensively, from religious, cultural and legal perspectives. Regarding the Islamic perspective, Aramesh (2009) discussed some issues related to body ownership in medicine. Medical training requires the dissection of corpses; however, this is prohibited in Islamic law. Thus, any form of dismemberment of the body is not allowed, even if the family of the deceased have given their consent. In Islam, the body of the departed must be treated with utmost respect, and the dignity of the deceased must remain intact. The only exception applies to organ donation, when the procedure is necessary to save another person's life. Consequently, the ownership of the body lies neither with the deceased nor with his family but with God, and the burial must be in accordance with Islamic law.
Meanwhile, from a cultural perspective, the issue of body ownership and burial rituals is viewed less dogmatically. Every culture has its own rituals surrounding death, ranging from washing and shrouding the body to releasing it, either by burying it in the ground, burning it, or preserving it. Traditionally, the burial ceremony is part of the ethnic and religious culture of the community (Palgi & Abramovitch 1984). In contemporary secular society, on the other hand (e.g. in the United States of America), the religious, medical and commercial aspects are combined into a unique way of perceiving corpses (Emerick 2000). Thus, the fate of the body is decided by the social environment of the deceased (Foltyn 2008). In contrast to the religious and cultural perspectives, the legal debate around the issue of body ownership is based on four principles: impossibility, significance, the time limit and the conflict of interest between the living and the dead. The point of Smolensky's research is that the deceased possesses legal rights, although the issue concerns the inheritance of corpses (Smolensky 2011).
Meanwhile, the research conducted by Woods (2013) discussed the issue of recognising the body ownership rights after death. He argued that such a right must exist in order for the family members to take care of the deceased. Thus, property rights to human corpses were proposed for ensuring proper burial. This conclusion was reached after considering the New Zealand Supreme Court case, Takamore v. Clarke, Gravatt and Toi Moko. The recommendations presented in this study state the superiority of the wishes of the living over those who are dead, and the importance of joint decision-making in matters relating to death and grief (Woods 2013). However, this study did not review the conflicts arising because of different religious beliefs of the heirs and their perception of a decent and proper burial.
In addition, Stepputat (2016) discussed the legal framework of this issue in detail, yet without considering ownership rights from the religious perspective. His study reviewed the law applying to the transfer of bodies from one country to another, bodies of victims of conflict and war and mutilated bodies.
According to Stroud (2018), the differences in opinion depend on our perception of the body of the deceased. Should it be treated as a person or an inanimate object? Treating the human corpse as an object means regarding it as a material thing that needs to be disposed of in some way or another because it is in a state of decay and biological decomposition. However, the body of a dead human being is not the same as the body of a dead animal. The person could be our father, our child or our friend, and one day, even ourselves. Thus, dead bodies are not just objects but former people. Even to the most secular of people, corpses deserve to be treated with dignity and respect. This difference in perception brought the law into three interrelated domains, namely, definition, use and ritual.
Although none of the studies mentioned above discussed the issue of body ownership in the context of interfaith conflict, each of them represents a specific viewpoint and highlights certain aspects surrounding it.
Research methods
The authors collected the data for this qualitative research through in-depth interviews and documentation (Bazeley 2001; Corbin & Strauss 2008). In this case study, the authors conducted interviews with parties who were directly involved in two unrelated incidents of conflict over the body of the deceased, one in Indonesia and one in the United States of America. Both cases provided the study with rich data to examine the issue of body ownership in association with interfaith conflict variables. For privacy reasons, the identity of the involved parties had to remain undisclosed, and both cases had never been covered in the media. What was permitted to be disclosed, such as the time and location, was included in the study to support the accuracy of the data in this case study. Meanwhile, the authors used two approaches to analyse the object of this study. The first approach was theological and involved examining religious burial customs and related concepts, such as belief in salvation and resurrection. The second, legal approach focused on the rights of the deceased and the ownership of the body by consulting relevant legal documents.
Findings
From a cultural perspective, individual identity ends with death, which means that the deceased has neither rights nor obligations. However, the body is not considered a mere object because it housed the spirit of someone who used to be a parent, child, loved one or friend. Thus, many cultures regulate and protect the way in which the body is released because it deserves dignity and respect (Emerick 2000; Foltyn 2008; Sørensen 2009). It follows that the corpse is still considered a person, which is also reflected in the perspectives of international law (ICRC-International Committee Geneve 2005), Islamic law (Jamiu Muhammad & Muhammad 2018; Salisu 2017) and politics, the latter concluding that if corpses were mere objects, there would be no way for them to really matter to the living (Posel & Gupta 2009).
Notwithstanding, the human body is sometimes objectified. Human corpses have been positioned as useful objects for the needs of educators and medical students. Thus, the status of the human corpse is that of a kind of pseudo-property, something that cannot be bought or sold, while some have a stronger claim on it than others. The bodies of the least powerful and significant (the poor, the non-white, the unidentified) are often treated, if not officially as property, then in a way almost indistinguishable from it (Stroud 2018).
In this case study, the ritual of burying the body of the deceased as a form of releasing it back to nature is practised by Muslims and Protestants alike; however, the permissibility of cremation is still contested (Hutchinson & Aragon 2008; Stepputat 2016; Weeks 2010). For Muslims, there is no doubt that the dead have to be buried in the ground, which is the only lawful way to dispose of them in accordance with Islamic teachings. In contrast, cremation is considered contrary to the Shariʿa (female participant 1, wife of A, Muslim, interviewed 22 February 2021; male participant 1, son of A, Muslim, interviewed 22 February 2021; female participant 2, wife of B, Muslim, interviewed 05 August 2021) and viewed as a sign of atheism and lack of humanity (Knight 2018). Meanwhile, the Protestant participants did not reject cremation or think it contradicted the teachings of the Bible, and considered it a more efficient and cost-effective way of disposing of the dead than burial (Beard & Burger 2017), especially in overpopulated cities. They stated that there is no scriptural prohibition of cremation in the New Testament. The Bible neither favours nor forbids the process of cremation. For them, this allows the ashes to be scattered or interred in the ground, a niche wall or a columbarium (male participant 2, son of the late A, Protestant, interviewed 11 January 2021; male participant 3, son of the late B, Protestant, interviewed 07 March 2021).
The conflict over which religious ritual to follow for the burial is very complicated and can lead to intense disputes among families of mixed religion. Each group will insist on practising their own religious tradition to show their respect to the deceased as reflected in their belief system. Thus, the ritual burial is believed to prepare the deceased for the resurrection from the grave (Merricks 2009) and the afterlife (Filippo 2006).
In our interviews with Muslims and Protestants, it transpired that all participants believe in the Day of Resurrection. However, there is a sharp difference between them in the meaning of resurrection. The Protestants believe that it does not matter if the body is cremated because the resurrection is in the form of a spiritual body, as stated in Daniel 12:2-3 (male participant 2, son of A, Protestant, interviewed 11 January 2021; male participant 3, son of B, Protestant, interviewed 07 March 2021). Muslim philosophers discussed the same issue at length and agreed with this logical assumption. It is not actually an Islamic belief that the body must be preserved: all bodies decay in the ground, yet all people who have ever lived will be resurrected. Meanwhile, the Muslims among them believe in the resurrection of the physical body. They believe that their bodies would not be able to be resurrected if they have been cremated. Therefore, cremation is prohibited in Islam. They hold this belief firmly because of the promise of Allah that all those lying in their graves will be resurrected on the Last Day, as they stated by quoting verses of the Qur'an such as Surah al-Hajj: 7. An exception is made for the bodies of victims of fire, bombs, drowning in oceans or rivers and other destructive disasters. Because of these differences in belief, the families of the deceased preferred different ways of laying the deceased to rest. Both the Protestants and Muslims agreed on burial, but some of the Protestants preferred the current trend of cremation. Both parties did not have a conflict over the ownership right of the body but were concerned about the spiritual consequences of an improper release of the body.
Regarding the doctrine of salvation of the soul, various religious rituals are performed to help the dying depart from this world and transit smoothly to the next (Petit et al. 2015). In this case study, the Muslim relatives were more concerned about preserving the body for the afterlife than the Protestant relatives. They believe that it is important to perform the necessary rituals for the deceased to ease his state while waiting for the Day of Judgement and consider the burial as an important part of the final phase of life. It consists of a series of rituals; the body is washed, purified, shrouded, prayed over and buried. In their view, being able to complete all the elements of the ritual is a sign of blessing and hope for what is yet to come (female participant 1, wife of A, Muslim, interviewed 22 February 2021; male participant 1, son of A, Muslim, interviewed 22 February 2021; female participant 2, wife of B, Muslim, interviewed 05 August 2021). Salvation and entry into Paradise is what every Muslim hopes to experience after death (Seise 2021). Meanwhile, the Protestants do not see any specific causal relationship between a person's salvation and the burial (male participant 2, son of A, Protestant, interviewed 11 January 2021; male participant 3, son of B, Protestant, interviewed 07 March 2021). Given these different convictions, the tension between both groups and the tendency to come into conflict over a deceased relative is understandable.
The resolution of the conflict between the Muslim and Protestant relatives in the two cases was pursued through legal mediation. Based on the legal records of the deceased, both parties contested each other's right to ownership of the body so that they could bury or cremate it in their own fashion. In the first case, the deceased was registered as a Muslim in Indonesia; therefore, the ownership rights of the body were given to his Muslim son, who buried his father following the Islamic rituals. Meanwhile, in the second case, the deceased had been an American citizen married to an Indonesian woman, but the legality of their marriage was contested in his home country. Subsequently, the right of ownership of his body was given to his Protestant son in the United States of America, who proceeded to cremate his father.
Two cases of interfaith conflict on body ownership
There are two cases as objects of this study: the first locus is in Indonesia and the second is in the United States of America. The widow further attested that B had studied under many Islamic religious teachers in Indonesia, such as HBP (initials of his full name), a caretaker of a mosque in Bandung, who gave a statement in support; however, this statement also did not have any legal force in the mediation process according to U.S. law. Also, her appeal to empathise with her husband's situation was unsuccessful. She recalled that her husband had been reading the English translation of the Qur'an for the past one-and-a-half years, as evident from a record in her diary: 'Your special present for my birthday March 2016 by finished your recitation of Quran in 2 and half years, is an amazing gift I ever ever had in my life. No one did it before. Event in this world from newbie Muslim who don't care he can't read it from the original language such mostly Muslim done on this earth. But you said you will recite it all, 6666 verses by translation. And you never give up with the obstacle! That is an ultra extra remarkable birthday's present.' (female participant 2, wife of the late B, Muslim, 2016) However, in the end, the mediator's decision was that the rights to B's body were given to the heirs of the first party, B's Protestant children, who proceeded to cremate him. This legal decision was justified by U.S. law, but it was difficult for the second party to accept.
The right of the deceased: Grounded theory of conflict resolution
Various perspectives from religion, culture and law indicate that the body of the deceased is considered a person rather than an object. Only the medical perspective allows the body of a deceased individual to be seen as a specimen to be dissected and studied. In any case, however, the body should be treated respectfully and without violating the dignity of the deceased. The respectful treatment of the human body in death is evident throughout human history and supported by the presence of specific spiritual or religious rituals before, during and after the disposal of the body.
In the two cases discussed above, conflicts over body ownership may arise between the adherents of different religions because the burial ceremony and release of the body belong to the sacred, not the profane. Whether Christian or Muslim, the relatives of the deceased believe that the teachings and rituals of their religion are the most appropriate to honour the body of their parent. This raises the question of who indeed is entitled to determine the fate of the deceased's body. Answering this question would help prevent the occurrence of interfaith conflicts of this kind, which are bound to become more frequent in today's globalised and interconnected world.
Several perspectives have sought an answer to this question, but there is still considerable potential for conflict among the religions. Most previous studies stopped at the issue of body ownership to determine the ritual of disposing of the body, without considering other variables. From the cultural perspective, for example, the treatment of the body depends on the prevailing culture of the location, which usually prevents any conflict. From a religious perspective, by contrast, God alone has the authority to decide how the body of the departed should be treated and prepared for the afterlife. This religious perspective is, however, only relevant in homogeneous communities where all members practise the same faith.
In contrast, the legal perspective holds that the rightful person to determine the fate of the body is the closest next of kin. The rights and obligations are handed over to the state authorities only in cases where the deceased cannot be identified or no known relatives exist. In addition, the law also regulates the obligation to take care of the corpse and release it respectfully. This legal perspective is effective in solving issues such as the use of bodies for medical purposes, bodies of war victims, bodies that are outside their national territory, donation of certain body parts and the like. However, the two cases examined in this study could not be fully resolved from the legal perspective alone.
Several theories address conflict resolution in both the litigation and non-litigation domains (Deutsch 1983; Deutsch, Coleman & Marcus 2006; Groom 1994; Hansen 2008). Mediation is one of the non-litigation conflict resolution strategies (Billikopf-Encina 2002; Jones 2000). In the first case, non-litigation through mediation was the chosen option, while the second case was resolved through non-litigation within a legal setting.
Although the two conflicts followed different settlement paths, they were both resolved. This was not solely the result of a successful legal approach; both parties were also motivated to reach a peaceful solution out of respect for their father. The affected Muslim party gave priority to the need for an immediate disposal of the body rather than prolonging the process and delaying it unnecessarily.
The legal procedure followed in both cases required the submission of legal documents as proof of the religious identity of the deceased, which had implications for the religious character of the disposal ceremony. Even though this legal approach successfully resolved the conflict, it was deemed insufficient to satisfy the defeated parties in the mediation. The documents issued by religious authorities and institutions were not accepted on legal grounds, such as the shahadah certificate issued by the Islamic centre in the United States of America and the baptism certificate by the church in Indonesia. The only documents admitted in the mediation to determine the deceased's religious identity were the legal documents issued by the authorities in the state administration. In the first case, these were the documents registered in the Administration of the Ministry of Home Affairs (ADMINDUK) which records the religion or belief of Indonesian citizens. Considering these facts, there are two further recommendations to ensure the peaceful settlement of similar cases that involve interfaith conflict over body ownership in the future. Firstly, the religious authorities and institutions must confirm the legality of the religious identity of individuals before the law. Secondly, the legal authorities in the state must provide legal recognition of the status of an individual's religious identity issued by the religious authorities and institutions.
However, this is only possible if religious identity is properly regulated by the authorities. In the context of Indonesia which guarantees religious freedom and diversity in its constitution, these regulations are well in place, but the situation is different in a secular country like the United States of America where religion is considered a private and personal matter.
Thus, an important theoretical finding of this study is that the right to receive the body of the deceased and carry out the burial ceremony should not be given to the next of kin but to the deceased. If the wish of the deceased is respected, in accordance with his religious identity established during his lifetime and attested by his community, current and future interfaith conflicts could be easily and swiftly resolved.
Conclusion
The main factor causing interfaith conflicts over body ownership rights is the perception that the body of the deceased is not an object but rather still a person. Furthermore, the issue of proper and rightful burial cannot be separated from the ethical consideration of respecting the body of the deceased and the belief in an afterlife. Religious teachings and practices regarding the proper burial ceremony ought to be respected and treated with the tact they deserve. As discussed earlier, the issue of burial or cremation is highly sensitive and has serious theological implications for the deceased as well as the next of kin.
Islam prohibits the cremation of the body, and this injunction cannot be altered for the sake of efficiency or convenience. It is also a tradition in Indonesia to visit the graves of deceased family members (ziarat kubur). Furthermore, there are numerous signs to be observed before and after the burial ceremony, which indicate the fate of the departed soul in the afterlife; the preparation for the burial is therefore of great significance for Muslims. On the other hand, most modern Protestants accept cremation and have no religious objections to it, as it bears no consequence for the deceased's salvation in the afterlife.
Having examined these two cases, we can conclude that the legal perspective alone is not sufficient to present a solution to such conflicts because of the lack of regulation. Therefore, based on the findings in the case settlement, the relevant authorities must formulate a regulatory framework to prevent interfaith conflict over the right of body ownership and the right of the body to be preserved and released according to the deceased's professed and proven religion or belief. Likewise, the right of body ownership exercised by the next of kin or state authorities must not violate the right of the deceased.
Zuonin B Inhibits Lipopolysaccharide-Induced Inflammation via Downregulation of the ERK1/2 and JNK Pathways in RAW264.7 Macrophages
We investigated whether Zuonin B exerts immunological effects on RAW264.7 cells. Zuonin B, isolated from flower buds of Daphne genkwa, suppressed the levels of nitric oxide and prostaglandin E2, as well as proinflammatory cytokines such as tumor necrosis factor-α and interleukin (IL)-6, in lipopolysaccharide-stimulated macrophages. Moreover, the compound inhibited cyclooxygenase-2 and inducible nitric oxide synthase expression. Zuonin B attenuated NF-kappaB (NF-κB) activation by suppressing proteolysis of inhibitor kappa B-alpha (IκB-α) and p65 nuclear translocation, as well as phosphorylation of extracellular signal-regulated kinase 1/2 and c-Jun N-terminal kinase. Additionally, IL-4 and IL-13 production in ConA-induced splenocytes was inhibited by Zuonin B. In conclusion, the anti-inflammatory effects of Zuonin B are attributable to the suppression of proinflammatory cytokines and mediators via blockage of NF-κB and AP-1 activation. Based on these findings, we propose that Zuonin B is potentially an effective functional chemical candidate for the prevention of inflammatory diseases.
Introduction
Inflammation is a multistep process mediated by activated inflammatory and immune cells, including macrophages and monocytes [1], and comprises a complex series of reactions regulated by a cascade of cytokines, growth factors, nitric oxide (NO), and prostaglandins (PGs) produced by active macrophages. Macrophages are key players in the immune response to foreign invaders and are a major source of mediators such as proinflammatory cytokines [2].
NO, a reactive radical produced from the guanidino nitrogen of L-arginine by NO synthase (NOS), is essential for host innate immune responses to pathogenic bacteria, viruses, fungi, and parasites [3]. However, excessive NO production can result in the development of inflammatory diseases, including rheumatoid arthritis and autoimmune disorders [4]. PGE2 is an inflammatory mediator produced during the conversion of arachidonic acid by cyclooxygenase. In various inflammatory cells, COX-2 is induced by cytokines and other activators, such as LPS, resulting in the release of a large amount of PGE2 at inflammatory sites [5]. Cytokines are produced and secreted by a variety of cell types, including macrophages and monocytes. These proteins play a major role in the induction and regulation of cellular interactions (e.g., inflammation, hematopoiesis, allergy, and immunoreaction) [6].
Nuclear transcription factor kappa-B (NF-κB) regulates various genes involved in immune and acute phase inflammatory responses as well as cell survival [7]. NF-κB activation in response to proinflammatory stimuli involves the rapid phosphorylation of IκBs by the IKK signalosome complex [8]. The resulting free NF-κB translocates to the nucleus, where it binds to NF-κB-binding sites in the promoter regions of target genes and induces the transcription of proinflammatory mediators, such as iNOS and COX-2. In addition to NF-κB, mitogen-activated protein kinases (MAPKs) are implicated in cytokine production in macrophages [9]. Three MAPK families (extracellular signal-regulated kinase (ERK)1/2, p38, and c-Jun N-terminal kinase (JNK)) are signaling molecules that react to extracellular stimuli (mitogens) and regulate immune responses, including proinflammatory cytokine production, mitosis, differentiation, and cell survival/apoptosis [9,10]. A major consequence of MAPK phosphorylation is activation of these transcription factors, which serve as immediate or downstream substrates of the kinases [11].
In a previous study, we isolated nine lignans from the dried flower buds of Machilus thunbergii, specifically, machilin A, licarin B, Zuonin B, macelignan, oleiferin C, meso-dihydroguaiaretic acid, licarin A, machilin F, and nectandrin B [12]. The molecular mechanism and activity of Zuonin B in macrophages remain to be clarified. To establish the mechanisms underlying the anti-inflammatory effects of Zuonin B, in the present study we investigated the expression patterns of inflammatory mediators in LPS-stimulated RAW 264.7 cells. Additionally, we examined the effects of Zuonin B on MAPK and NF-κB activation.
Cell Culture. The RAW 264.7 cell line derived from murine macrophages was obtained from the American Type Culture Collection (ATCC, Rockville, MD, USA). Cells were maintained in Dulbecco's modified Eagle's medium supplemented with glutamine (1 mM), 10% heat-inactivated fetal bovine serum (FBS), penicillin (50 U/mL), and streptomycin (50 μg/mL) at 37 °C in an atmosphere of 5% CO2. Cells that reached a density of 5 × 10^4 cells/mL were activated by incubation in medium containing E. coli LPS (1 μg/mL). LPS was added together with a range of concentrations of test compounds dissolved in DMSO. Cells treated with 0.05% DMSO were used as the vehicle control.
MTT Assay for Cell Viability. Cells were seeded into 96-well plates at a density of 5 × 10^4 cells/well and incubated with serum-free medium in the presence of different concentrations of Zuonin B. After incubation for 24 h, 10 μL of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) (5 mg/mL in saline) was added and incubation continued for another 4 h. Mitochondrial succinate dehydrogenase in live cells converts MTT to visible formazan crystals during incubation. Formazan crystals were solubilized in dimethylsulfoxide, and the absorbance was measured at 540 nm using an enzyme-linked immunosorbent assay (ELISA) microplate reader (Benchmark, Bio-Rad Laboratories, CA, USA). The relative cell viability was calculated by comparison with the absorbance of the untreated control group. All experiments were performed in triplicate.
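The relative-viability readout here is a simple ratio to the untreated control. As a rough illustration of that arithmetic, the following Python sketch uses hypothetical triplicate absorbance values rather than the study's data:

```python
import numpy as np

def relative_viability(a540_treated, a540_control, a540_blank=0.0):
    """Percent viability of treated wells relative to the untreated
    control, based on blank-corrected A540 readings."""
    treated = np.asarray(a540_treated, dtype=float) - a540_blank
    control = np.asarray(a540_control, dtype=float) - a540_blank
    return 100.0 * treated.mean() / control.mean()

# Hypothetical triplicate readings for one Zuonin B concentration
print(relative_viability([0.82, 0.79, 0.84], [0.85, 0.88, 0.86]))  # ~95%
```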
Preparation and Treatment of Splenocyte Suspensions.
Spleens from BALB/c mice were removed aseptically, and a single-cell suspension of splenocytes was obtained by passing the cells through two needles in RPMI 1640 containing 10% fetal bovine serum, 25 mM HEPES, 2 mM glutamine, 100 U/mL penicillin, and 100 mg/mL streptomycin (GibcoBRL, NY, USA). Red blood cells (RBCs) were lysed with lysis buffer (Sigma Chemical, St Louis, MO, USA) at 37 °C for 10 min. After washing with PBS, cells were cultured in 100 mm dishes for 4 h. Splenocytes were plated into 96-well plates at a density of 1 × 10^6 cells/mL and treated with different concentrations of Zuonin B for 1 h, followed by ConA (1 μg/mL) for a further 3 days. The IL-4 and IL-13 levels in culture supernatants were measured using ELISA kits for murine cytokines (R&D Systems, MN, USA), according to the manufacturer's instructions. All experimental procedures were carried out in accordance with the NIH Guidelines for the Care and Use of Laboratory Animals, and animal handling followed the dictates of the National Animal Welfare Law of Korea.
TNF-α and IL-6 Assays. TNF-α and IL-6 production in RAW264.7 cells was assayed using ELISA kits (Assay Designs, USA) following the manufacturer's instructions. Cells (1 × 10^6 cells/well) in 96-well plates were treated with different concentrations of Zuonin B for 1 h; TNF-α and IL-6 production was then stimulated with 1 μg/mL of LPS, and incubation continued for another 24 h. The conditioned medium was used for the subsequent experiment. Specifically, 50 μL of TNF-α standards (prepared for calibration) or a similar volume of Zuonin-B-treated conditioned medium was added to the wells of TNF-α and IL-6 antibody-coated 96-well plates in triplicate. Absorbance was determined at 450 nm using the microplate reader. Specific standard curves were employed to quantify the amounts of TNF-α and IL-6 released by cells.
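The text states only that specific standard curves were used for quantification. ELISA standards are commonly fitted with a four-parameter logistic (4PL) curve, so the Python sketch below assumes a 4PL model and made-up TNF-α standard values; it illustrates the back-calculation step, not the kit's documented procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero dose, d: response at infinite dose,
    # c: inflection point (EC50), b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical TNF-alpha standards (pg/mL) and A450 readings
conc = np.array([15.6, 31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
a450 = np.array([0.08, 0.15, 0.28, 0.52, 0.95, 1.60, 2.30])
params, _ = curve_fit(four_pl, conc, a450, p0=[0.05, 1.0, 300.0, 3.0],
                      maxfev=10000)

def backcalc(y, a, b, c, d):
    """Invert the fitted 4PL curve: absorbance -> concentration."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(backcalc(0.70, *params))  # sample A450 of 0.70 -> pg/mL
```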
Measurement of Nitric Oxide (NO) Production.
The nitrite concentration in culture medium was measured as an indicator of NO production, according to the Griess reaction. RAW264.7 cells (2 × 10^5 cells/well) were cultured in 96-well plates using DMEM without phenol red and pretreated with different concentrations of Zuonin B for 1 h. Cellular NO production was induced by adding 1 μg/mL of LPS and incubating for 24 h. Next, 100 μL of conditioned medium was mixed with an equivalent volume of Griess reagent and incubated for 15 min. The absorbance of the mixture at 540 nm was measured with an ELISA microplate reader (Benchmark, Bio-Rad Laboratories, CA, USA). The values obtained were compared with those of standard concentrations of sodium nitrite dissolved in DMEM, and the concentrations of nitrite in the conditioned media of sample-treated cells were calculated.

Measurement of PGE2 Levels. Production of PGE2, one of the mediators released after activation of COX-2, was used as a marker for COX-2 assessment. RAW264.7 cells (2 × 10^5 cells/well) were cultured in 96-well plates with serum-free medium and pretreated with different concentrations of Zuonin B for 1 h. PGE2 generation (via COX-2 activation) was stimulated by adding 1 μg/mL of LPS and incubating for 24 h. The conditioned medium was used for PGE2 determination with a prostaglandin E2 ELISA assay kit (Cayman Chemical Co., Ann Arbor, MI, USA), according to the manufacturer's instructions. The absorbance was measured at 450 nm using an enzyme-linked immunosorbent assay (ELISA) microplate reader (Benchmark, Bio-Rad Laboratories, CA, USA).
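Quantification against the sodium nitrite standards in the NO measurement above is a linear fit followed by inversion, since the Griess readout is linear in this range. A minimal Python sketch with hypothetical standard values:

```python
import numpy as np

# Hypothetical sodium nitrite standards (uM) and their A540 readings
std_conc = np.array([0.0, 6.25, 12.5, 25.0, 50.0, 100.0])
std_a540 = np.array([0.045, 0.090, 0.140, 0.235, 0.430, 0.810])

# A first-degree polynomial fit recovers the standard curve
slope, intercept = np.polyfit(std_conc, std_a540, 1)

def nitrite_uM(a540_sample):
    """Convert a conditioned-medium absorbance to nitrite (uM)."""
    return (a540_sample - intercept) / slope

print(nitrite_uM(0.32))  # ~36 uM for an A540 of 0.32
```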
Immunofluorescence Analysis. RAW 264.7 cells cultured on Permanox plastic chamber slides were fixed with ethanol for 30 min at 4 °C. After washing with PBS and blocking with 3% bovine serum albumin in PBS for 30 min, samples were incubated overnight at 4 °C with rabbit polyclonal anti-iNOS, anti-COX-2 (1:500 dilution, Santa Cruz Biotechnology, Santa Cruz, CA, USA), and anti-NF-κB p65 subunit (1:500 dilution, Assay Designs) antibodies. Excess primary antibody was removed, slides were washed with PBS, and the samples were incubated with Texas Red-conjugated secondary antibody (Santa Cruz Biotechnology) for 2 h at room temperature.
Statistical Analysis. Data were expressed as means ± standard error of the mean (SEM). Statistical significance was determined using the ANOVA test for independent means. The critical level for significance was set at p<0.05.
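As a sketch of how such group comparisons can be run, the snippet below computes per-group SEM and a one-way ANOVA with SciPy; the nitrite readings are hypothetical, and "one-way" is an assumption, since the text does not specify the ANOVA design:

```python
import numpy as np
from scipy import stats

# Hypothetical nitrite readings (uM) per treatment group, n = 3 each
control = [38.2, 41.0, 39.5]
zuonin_7_5uM = [30.1, 28.7, 31.4]
zuonin_15uM = [22.3, 24.0, 21.8]
zuonin_30uM = [15.6, 17.2, 16.1]

for name, g in [("control", control), ("30 uM", zuonin_30uM)]:
    sem = np.std(g, ddof=1) / np.sqrt(len(g))
    print(f"{name}: mean = {np.mean(g):.1f}, SEM = {sem:.2f}")

f_stat, p_value = stats.f_oneway(control, zuonin_7_5uM, zuonin_15uM,
                                 zuonin_30uM)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")  # significant if p < 0.05
```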
Effects of Zuonin B on Macrophage Toxicity. The MTT cell viability assay was performed using RAW264.7 cells grown in medium to determine the effects of Zuonin B (Figure 1(a)). The cytotoxic effect of Zuonin B in RAW 264.7 cells was examined to establish the appropriate concentration range for analysis of COX-2 and iNOS expression. Neither Zuonin B nor DMSO exerted a significant toxic effect on RAW264.7 cells at the concentrations examined (3.75, 7.5, 15, and 30 μM) after 24 h of treatment (Figure 1(d)). Thus, the nontoxic concentrations of Zuonin B were used for subsequent experiments.

Effects of Zuonin B on NO and PGE2 Production in RAW 264.7 Cells. The effects of Zuonin B on LPS-induced NO production in RAW 264.7 cells were investigated by estimating the amount of nitrite released into the culture medium using the Griess reaction. To ascertain whether Zuonin B inhibits LPS-induced nitrite production and iNOS protein expression, RAW 264.7 cells were pretreated for 1 h with various concentrations of the compound and subsequently treated with 1 μg/mL LPS. No significant differences in NO production were observed in RAW 264.7 cells treated with Zuonin B alone, compared with the negative control (data not shown). As shown in Figure 1(b), Zuonin B suppressed nitrite production in a concentration-dependent manner, with >50% inhibition at a concentration of 30 μM. The COX-2 levels were examined with the PGE2 immunoassay to determine whether Zuonin B inhibition of COX-2 production is related to modulation of PGE2 release. Notably, pretreatment of cells with Zuonin B markedly inhibited the LPS-induced increase in PGE2 production in a concentration-dependent manner (Figure 1(c)).
Effects of Zuonin B on iNOS and COX-2 Protein Expression in RAW 264.7 Cells.
To establish the anti-inflammatory activity of Zuonin B, we tested its effects on LPS-induced iNOS and COX-2 protein upregulation in RAW 264.7 cells via Western blot and immunofluorescence analyses. As shown in Figures 2(a) and 2(c), iNOS protein expression was not detected in unstimulated cells but was markedly increased by 24 h after stimulation with 1 μg/mL LPS. Cells pretreated with Zuonin B displayed concentration-dependent inhibition of iNOS protein expression following LPS stimulation for 24 h. As shown in Figures 2(b) and 2(c), COX-2 protein was detected in untreated cells, and levels increased markedly after LPS treatment.
Effect of Zuonin B on NF-κB and AP-1 Translocation.
We further investigated whether Zuonin B prevented the translocation of the p65 subunit of NF-κB from the cytosol to the nucleus following release from IκB-α, leading to induction of both iNOS and COX-2, with the aid of immunofluorescence staining. Nuclear and cytosolic extracts were subjected to immunoblot analysis. PARP (nuclear protein) and β-actin (cytosolic protein) were employed as controls to confirm the absence of contamination during extraction of each fraction. Our data show that p65 is distributed in the cytoplasmic compartment prior to LPS stimulation but accumulates in the nucleus after LPS treatment. The p65 and AP-1 levels in the nuclear fraction were significantly reduced upon pretreatment with Zuonin B (Figure 3(a)). As shown in Figure 3(a), Zuonin B inhibited degradation of IκB-α as well as the LPS-induced increase in p65 in the nuclear fraction, indicating that the Zuonin-B-mediated suppression of IκB-α degradation prevents NF-κB-regulated expression. Immunofluorescence analyses revealed that in unstimulated cells, NF-κB p65 was mainly present in the cytoplasm. After LPS treatment, the majority of intracellular p65 translocated from the cytoplasm to the nucleus, as evident from the strong nuclear NF-κB p65 staining (Figure 3(b)).
Effects of Zuonin B on ERK1/2 and JNK Activation.
Since the MAPK pathway is important for NF-κB activation, we investigated whether MAPKs and NF-κB are involved in Zuonin B-induced signaling in RAW264.7 cells. MAPK activation requires phosphorylation, detected using anti-phospho-MAPK and anti-MAPK antibodies specific for ERK1/2 and JNK. As shown in Figure 3(c), LPS induced phosphorylation of ERK1/2 and JNK in nontreated cells, whereas pretreatment with Zuonin B suppressed LPS-induced MAPK phosphorylation in a dose-dependent manner. Our results clearly indicate that Zuonin B inhibits LPS-induced NF-κB activation via suppression of MAPK signaling.
Effects of Zuonin B on Th2-Type Cytokines in Splenocytes.
Next, we examined whether Zuonin B affects Th2-type cytokine production in ConA-stimulated splenocytes. Consistent with the ELISA results described above, Zuonin B inhibited IL-4 and IL-13 production in a concentration-dependent manner.
Discussion
Inflammation is a critical factor in tumor progression. In this study, we investigated the effects of Zuonin B, initially isolated from Machilus thunbergii, on LPS-induced iNOS and COX-2 expression and its mode of action in RAW264.7 cells. Recent studies have shown that inflammation is accompanied by upregulation of NO production and the inducible NOS isoform (iNOS) [13]. The iNOS level is significantly correlated with the degree of inflammation [14]. Therefore, inhibitory effects against overproduction of NO and iNOS may provide a measure for assessing the anti-inflammatory effects of drugs. In our experiments, Zuonin B inhibited NO production in a dose-dependent manner via suppression of iNOS protein expression in LPS-stimulated RAW264.7 cells. Based on these results, we suggest that Zuonin B may effectively relieve the inflammatory pathological processes associated with excessive NO production. PGE2 is an inflammatory mediator generated at inflammatory sites by COX-2, also known as prostaglandin endoperoxide synthase, and it triggers the development of several chronic inflammatory diseases, such as cardiovascular disease, cancer, and rheumatoid arthritis [15]. COX-2, an inducible form of cyclooxygenase, serves as an interface between inflammation and cancer. In response to various stimuli, including bacterial LPS, COX-2 is transiently elevated in certain tissues. Abnormally elevated COX-2 promotes cellular proliferation, suppression of apoptosis, enhancement of angiogenesis, and invasiveness, which account for its oncogenic function [16]. Hence, PGE2 and COX-2 are believed to be target enzymes for anti-inflammatory activity. In our study, Zuonin B dose-dependently inhibited PGE2 production via suppressing COX-2 protein expression in LPS-stimulated RAW264.7 cells. These results indicate that Zuonin B is effective in COX-2-related inflammatory responses.
Additionally, recent studies reveal that natural products inhibit LPS-induced iNOS and COX-2 expression as well as TNF-α release in RAW264.7 macrophages by preventing NF-κB and MAPK activation. In our experiments, Zuonin B inhibited LPS-induced TNF-α and IL-6 production. These findings suggest that Zuonin B exerts anti-inflammatory effects by inhibiting the secretion of proinflammatory cytokines. NF-κB is predominantly responsible for this response to proinflammatory stimuli [17]. NF-κB and AP-1 are strong proinflammatory transcription factors, which can regulate a variety of inflammatory genes, including TNF-α [18]. NF-κB is essential for host responses to microbial and viral infections, since the expression levels of several inflammation-related genes are regulated through the NF-κB signaling pathway [19]. Our data indicate that Zuonin B inhibits the nuclear translocation of p65 protein via suppressing IκB-α degradation, providing strong evidence that Zuonin B inhibits NF-κB activation. MAPKs involved in macrophage inflammation play important regulatory roles in cell growth and differentiation and control cellular responses to inflammatory cytokines and stress as well as NF-κB activity [20]. Moreover, MAPKs play a central role in inducing cytokine production and mediating the cellular stress response [21,22]. Several natural products inhibit the expression of these genes by modulating MAPK phosphorylation. In the current study, LPS induced rapid phosphorylation of ERK1/2 and JNK in RAW264.7 cells in the absence of Zuonin B, and pretreatment with Zuonin B suppressed this phosphorylation. However, the precise signaling pathways among the three types of MAPKs remain unclear. Zuonin B also diminished IL-4 and IL-13 production in a concentration-dependent manner in splenocytes. These results suggest that Zuonin B, at least in ConA-stimulated splenocytes, exerts anti-inflammatory effects by suppressing the expression of proinflammatory enzymes as well as the secretion of proinflammatory cytokines.
In conclusion, Zuonin B exerts anti-inflammatory effects by suppressing intracellular NF-κB activation, which leads to downregulation of the expression of inflammation-related proteins. In view of these results, we propose that Zuonin B has potential for expanded use as an anti-inflammatory therapeutic agent.
Identification of human glucocorticoid response markers using integrated multi-omic analysis from a randomized crossover trial
Background: Glucocorticoids are among the most commonly prescribed drugs, but there is no biomarker that can quantify their action. The aim of the study was to identify and validate circulating biomarkers of glucocorticoid action. Methods: In a randomized, crossover, single-blind, discovery study, 10 subjects with primary adrenal insufficiency (and no other endocrinopathies) were admitted to the in-patient clinic and studied during physiological glucocorticoid exposure and withdrawal. A randomization plan drawn up before the first intervention was used. Besides mild physical and/or mental fatigue and salt craving, no serious adverse events were observed. The transcriptome in peripheral blood mononuclear cells and adipose tissue, the plasma miRNAome, and the serum metabolome were compared between the interventions using integrated multi-omic analysis. Results: We identified a transcriptomic profile derived from two tissues and a multi-omic cluster, both predictive of glucocorticoid exposure. A microRNA (miR-122-5p) that was correlated with genes and metabolites regulated by glucocorticoid exposure was identified (p=0.009) and replicated in independent studies with varying glucocorticoid exposure (0.01 ≤ p ≤ 0.05). Conclusions: We have generated results that form the basis for successful discovery of biomarker(s) to measure the effects of glucocorticoids, allowing strategies to individualize and optimize glucocorticoid therapy, and shedding light on disease etiology related to unphysiological glucocorticoid exposure, such as in cardiovascular disease and obesity. Funding: The Swedish Research Council (Grant 2015-02561 and 2019-01112); The Swedish federal government under the LUA/ALF agreement (Grant ALFGBG-719531); The Swedish Endocrinology Association; The Gothenburg Medical Society; Wellcome Trust; The Medical Research Council, UK; The Chief Scientist Office, UK; The Eva Madura's Foundation; The Research Foundation of Copenhagen University Hospital; and The Danish Rheumatism Association. Clinical trial number: NCT02152553.
Introduction
Glucocorticoids (GCs) have a key role in the metabolic, vascular, and immunological response to stress (Cain and Cidlowski, 2017; Oster et al., 2017). GC secretion from the adrenal gland is under tight dynamic control by the hypothalamic-pituitary-adrenal axis and is regulated in a classic circadian pattern (Cain and Cidlowski, 2017; Oster et al., 2017). Most actions of GCs are mediated by the ubiquitously expressed GC receptor (Cain and Cidlowski, 2017; Oster et al., 2017). The tissue-specific effects of GCs are regulated by many local factors, including pre-receptor metabolism of GCs and the interaction of the GC receptor with tissue-specific transcription factors, or through nongenomic mechanisms (Cain and Cidlowski, 2017; Oster et al., 2017). As a result of this complexity, circulating levels of cortisol relate poorly to tissue action of cortisol, and serum cortisol therefore has limited value as a biomarker for GC action (Karssen et al., 2001).
GCs are among the most commonly prescribed drugs, and GC treatment remains a cornerstone in the management of many rheumatic and inflammatory diseases despite the introduction of modern disease-modifying antirheumatic drugs and biological immunomodulatory treatment (Smolen et al., 2017). GC replacement is essential for survival in patients with various forms of adrenal insufficiency (Johannsson et al., 2015). However, metabolic and other side effects of GC treatment or replacement are common (Björnsdottir et al., 2011; Fardet et al., 2012), indicating that current methods to monitor their action and tailor their treatment are inadequate. Unphysiological GC exposure has been implicated in the etiology of several common diseases such as type 2 diabetes mellitus, hypertension, abdominal obesity, and cardiovascular disease (Ragnarsson et al., 2019).
Against this background, it is highly desirable to be able to measure and quantify GC action, as this might be useful to refine current GC therapy. Biomarkers of GC action will also provide potential mechanistic understanding of the role of GCs in the etiology of many common diseases. Previous attempts to identify biomarkers using metabolomics have identified circulating metabolites associated with GC exposure (Alwashih et al., 2017a; Alwashih et al., 2017b). Integrated multi-omic analysis provides increased robustness over analysis of individual 'omic data sets. In particular, the identification of groups within one 'omic 'layer' with shared co-regulation in another 'omic layer implies a functional relationship that can be used both to assess the mechanistic relevance and to support the identification of biomarkers (Karczewski and Snyder, 2018; Misra et al., 2018).
The aim of this exploratory study was to define multi-omic patterns derived from independent tissues related to GC action and to use these patterns to search for clinically applicable circulating biomarkers of GC action. Subjects with primary adrenal insufficiency (Addison's disease) lack GC production from the adrenal cortex and can therefore be considered a human GC 'knock-down' model (Figure 1A). An experimental study design including subjects with Addison's disease, standardizing for diurnal variation and food intake, allowed a within-individual comparison between physiological GC exposure and GC withdrawal (Figure 1B). A multi-omic analysis strategy combining data from gene expression in the circulation (peripheral blood mononuclear cells [PBMCs]) and an important metabolic tissue, adipose tissue, integrated with circulating microRNAs (miRNAs) and metabolites, was used to identify putative biomarkers. The strongest putative biomarkers were then replicated in independent study groups with different GC exposure.
Clinical experimental study

Patient characteristics
Eleven subjects with well-defined Addison's disease and no other endocrinopathies were recruited and included in the study between September 2013 and September 2015. One subject discontinued the study after randomization and before the first intervention because of persistent orthostatic hypotension. Ten subjects (four women, three of them post-menopausal) with a median age of 50 years (range, 25-57) and a median disease duration of 23.5 years (range, 1-33) completed all aspects of the study between May 2014 and October 2015. The median daily replacement dose of hydrocortisone (HC) prior to the study was 30 mg (range, 20-30), and 9 out of 10 subjects had treatment with fludrocortisone (mineralocorticoid) at a median daily dose of 0.1 mg (range, 0.1-0.2).
Clinical and biochemical outcomes
The main time points for sample collection in each intervention were at 9 AM on the first intervention day ('before start') and at 7 AM on the second intervention day ('morning') (Figure 1B). The subjects' last ordinary oral HC dose was administered the day before admission to the study unit.
Infusion of HC mixed with isotonic saline ('GC exposure') had no effect on systolic and diastolic blood pressure, body weight, serum sodium and potassium, or plasma glucose concentrations compared to the same amount of isotonic saline infusion alone ('GC withdrawal') (Table 1). HC and saline infusion achieved the intended differences in GC exposure. Both median morning serum cortisol and cortisone during the HC infusion were within the physiological range (298 and 81.2 nmol/L, respectively) and markedly lower during the saline infusion (44.4 and 42 nmol/L, respectively; both p<0.001) (Figure 2). Serum cortisol and cortisone were detected in all subjects' morning samples during the saline infusion, but both overnight (between 12 AM and 7 AM) urinary cortisol and cortisone excretion were below the limit of detection. Both HC and saline infusions were well tolerated, and no serious adverse events were observed. Three subjects reported mild physical and/or mental fatigue, and one subject reported mild salt craving during the GC withdrawal period.

eLife digest

Several diseases, including asthma, arthritis, some skin conditions, and cancer, are treated with medications called glucocorticoids, which are synthetic versions of human hormones. These drugs are also used to treat people with a condition called adrenal insufficiency, who do not produce enough of an important hormone called cortisol. Use of glucocorticoids is very common: the proportion of people in a given country taking them can range from 0.5% to 21% of the population depending on the duration of the treatment. But, like any medication, glucocorticoids have both benefits and risks: people who take glucocorticoids for a long time have an increased risk of diabetes, obesity, cardiovascular disease, and death.
Because of the risks associated with taking glucocorticoids, it is very important for physicians to tailor the dose to each patient's needs. Doing this can be tricky, because the levels of glucocorticoids in a patient's blood are not a good indicator of the medication's activity in the body. A test that can accurately measure the glucocorticoid activity could help physicians personalize treatment and reduce harmful side effects.
As a first step towards developing such a test, Chantzichristos et al. identified a potential way to measure glucocorticoid activity in patients' blood. In the experiments, blood samples were collected from ten patients with adrenal insufficiency both when they were on no medication and when they were taking a glucocorticoid to replace their missing hormones. Next, the blood samples were analyzed to determine which genes were turned on and off in each patient with and without the medication. They also compared small molecules in the blood called metabolites and tiny pieces of genetic material called microRNAs that turn genes on and off.
The experiments revealed networks of genes, metabolites, and microRNAs that are associated with glucocorticoid activity, and one microRNA called miR-122-5p stood out as a potential way to measure glucocorticoid activity. To verify this microRNA's usefulness, Chantzichristos et al. looked at levels of miR-122-5p in people participating in three other studies and confirmed that it was a good indicator of the glucocorticoid activity.
More research is needed to confirm Chantzichristos et al.'s findings and to develop a test that can be used by physicians to measure glucocorticoid activity. The microRNA identified, miR-122-5p, has been previously linked to diabetes, so studying it further may also help scientists understand how taking glucocorticoids may increase the risk of developing diabetes and related diseases.
Figure 1. Clinical and analytical part of the exploratory study and the replication step. (A) Subjects with Addison's disease (primary adrenal insufficiency, step 1) were studied in a random order during both physiological glucocorticoid (GC) exposure and GC withdrawal (step 2). Transcriptomics (whole-genome expression) in peripheral blood mononuclear cells (PBMCs) and adipose tissue (n = 28,869 genes), plasma miRNAomics (n = 252), and serum metabolomics in morning samples were analyzed (n = 164) (step 3). Integration of the multi-omic data derived a network including gene expression (derived from two independent tissues), microRNAs (miRNAs), and metabolites that were statistically differentiated between the two interventions (step 4). The miRNA findings, because of their centrality in the network, were replicated in subjects with different GC exposures (within the physiological range) from three independent studies (step 5). (B) Subjects with Addison's disease (primary adrenal insufficiency) received, in a random order, intravenous (i.v.) hydrocortisone (HC) infusion mixed in 0.9% saline in a circadian pattern (physiological GC exposure) or the same volume of 0.9% saline alone (GC withdrawal) during 22 hr starting at 9 AM, more than 2 weeks apart. During the GC exposure, HC (Solu-Cortef) was administered at a dose of 0.024 mg/kg/hr between 9 AM and 12 PM (first day), 0.012 mg/kg/hr between 12 PM and 8 PM (first day), 0.008 mg/kg/hr between 8 PM and 12 AM (first day), and 0.030 mg/kg/hr between 12 AM and 7 AM (second day). Samples for the 'omics analyses were collected at 7 AM on day 2 of the intervention (morning samples). p.o.: oral.

Differentially regulated 'omic elements associated with response to GCs

Similarity network fusion (SNF) was used to demonstrate overall similarity between subjects across and between 'omic layers, prior to analysis (Appendix 1 and Appendix 1-figure 1). Differential gene expression was associated with GC response in both PBMC and adipose tissue (Appendix 1). Differential expression of metabolites and miRNA was identified in blood in relation to GC response (Appendix 1). Differentially expressed 'omic elements (DEOEs) are presented in Table 2 and Supplementary file 1a-d. All DEOEs were used for integrated analysis, and false discovery rate (FDR)-corrected DEOEs were used for all other analyses (Table 2). DEOEs from the PBMC and adipose tissue transcriptomes were shown to have limited overlap in response to GC but were enriched for shared pathways, revealing an overlap that indicated shared mechanisms in relation to GC exposure (Appendix 2 and Appendix 2-figure 3). We assessed the impact of differential expression on the entire interactome to aid in the identification of similar GC-related function. Interactome network models were generated using differentially expressed genes (DEGs) from both the PBMC transcriptome and the adipose tissue transcriptome.
These were shown to be consistent with one another (Appendix 2 and Appendix 2-figures 1 and 2) despite the limited overlap of DEGs. GC-responsive genes were shown to have higher connectivity in the human interactome than expected by chance, as demonstrated using 10,000 permutations of this network model (Appendix 2).
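A permutation test of this kind compares the observed number of interactome edges among the GC-responsive genes with the edge counts of equally sized random gene sets. The sketch below is a minimal Python version using networkx on a toy graph; the study's actual interactome and gene lists are not reproduced here, so both are stand-ins:

```python
import random
import networkx as nx

def connectivity_pvalue(interactome, gene_set, n_perm=10_000, seed=1):
    """Empirical p-value that gene_set is more interconnected than
    equally sized random gene sets drawn from the same network."""
    rng = random.Random(seed)
    nodes = list(interactome.nodes)
    observed = interactome.subgraph(gene_set).number_of_edges()
    exceed = sum(
        interactome.subgraph(rng.sample(nodes, len(gene_set)))
        .number_of_edges() >= observed
        for _ in range(n_perm)
    )
    return (exceed + 1) / (n_perm + 1)

# Toy example: a random graph standing in for the human interactome
G = nx.erdos_renyi_graph(200, 0.05, seed=2)
print(connectivity_pvalue(G, list(range(10)), n_perm=1000))
```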
Integration of PBMC and adipose tissue transcriptomes with plasma miRNAomic and serum metabolomic data
Hypernetworks are network structures where edges are not restricted to defining a relationship between two nodes but may be shared between many nodes. As such, these structures can be used to describe complex relationships that link multiple elements. Hypernetworks also allow the same pair of nodes to be connected by multiple edges. This means that relationships between nodes can be ranked by the number of edges shared between them. Hypernetworks allow for the summary of correlation matrices, compressing the high-dimensional relationships between data points (transcripts/miRNA/metabolites) into a single metric of similarity. Hypernetworks facilitate integration of 'omic data and can be used to define strongly associated elements. Elements with large numbers of shared edges are more similar and likely to be of functional relevance; clustering allows refinement of large 'omic data sets to highly associated elements (Figure 3A, B). Hypernetworks are robust to random error and act to filter out false-positive correlations, as these will not have a uniform pattern of correlation across all 'omic elements.
To assess similarity, we defined the correlation coefficient between each pair of differentially expressed 'omic measurements and assessed as 'present' in the network model those correlations with an r-value greater than |1.5| standard deviations (sd). Edges were defined as PBMC transcripts with shared correlations; for example, two PBMC transcripts that are both correlated with the same three metabolites are connected by three edges. We summarized the shared correlations as a measure of similarity between each pair of GC-responsive PBMC transcripts, counting correlations across the other 'omic data sets (Figure 3-figure supplement 1). The greatest number of correlations was shared between the PBMC and adipose tissue transcriptomes (525 genes, Figure 3C), reinforcing the observation that, while the gene-level overlap of differential expression was limited, common pathways are active in both tissues in relation to GC action, involving similar networks of co-expressed genes. The rank order of the number of correlations shared with the GC-responsive PBMC transcriptome was adipose tissue transcriptome > plasma miRNAome > serum metabolome, and this was confirmed both by comparison of the heat maps (Figure 3-figure supplement 1) and by a Venn diagram (Figure 3D). The Venn diagram also reveals a strong correspondence between the serum metabolome and both PBMC and adipose tissue transcriptomes.
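In matrix terms, this construction thresholds the cross-'omic correlation matrix and then counts shared correlations by multiplying the resulting incidence matrix with its transpose. The Python sketch below assumes that the |1.5| sd cut-off is taken over the distribution of r-values, which is one plausible reading of the description:

```python
import numpy as np

def hypernetwork_shared_edges(x, y, sd_threshold=1.5):
    """x: (n_transcripts, n_subjects) PBMC expression; y: (n_features,
    n_subjects) of another 'omic layer. Returns an (n_transcripts,
    n_transcripts) matrix whose (i, j) entry counts the features in y
    correlated with both transcript i and transcript j."""
    n_x = x.shape[0]
    r = np.corrcoef(np.vstack([x, y]))[:n_x, n_x:]  # cross-correlations
    present = (np.abs(r) > sd_threshold * r.std()).astype(int)
    return present @ present.T  # shared-edge counts

# Toy data: 50 transcripts and 30 metabolites over 10 subjects
rng = np.random.default_rng(0)
shared = hypernetwork_shared_edges(rng.normal(size=(50, 10)),
                                   rng.normal(size=(30, 10)))
print(shared.shape)  # (50, 50); diagonal = edges per transcript
```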
Identification and validation of a shared transcriptomic profile in both PBMCs and adipose tissue predicting GC response

Robustness testing was performed in which hypernetworks were generated to model dissimilarity based on the absence of correlations with PBMC transcripts. Any genes that were highlighted by these hypernetworks were removed from the downstream predictive analysis. Using this approach, we defined 271 of 965 PBMC transcripts with maximum predictive potential. This set of genes perfectly classified the HC- and saline-treated groups using partial least squares discriminant analysis (PLS-DA) (Figure 4A). We identified variables of importance using Random Forest and modeled the background experimental noise using permutation analysis (BORUTA) (Figure 4B). This identified a set of 59 genes as variables of importance, with fold changes in the same direction in both transcriptomic data sets, that perfectly classified HC from saline treatment (Supplementary file 1e). Nine of these genes were significantly differentially expressed in both PBMC and adipose tissue transcriptomes (Figure 4C), and, of these nine genes, six were associated with GC response via gene ontology (IL18RAP, JAK2, MTSS1, RIN2, KIF1B, and BCL9L) (Figure 4D). The gene set (n = 59) that we identified, which classified both PBMC and adipose tissue transcriptomes in relation to GC exposure, was validated (area under the curve [AUC] 0.70-0.96) by further testing in five previous studies of GC action by other research groups in cellular models (Table 3).

Figure 3. (A) Hypernetworks differ from traditional networks in that edges can connect more than two nodes. Nodes are represented by black circles, edges by colored lines and surfaces. This demonstration shows how one edge can connect (i) two nodes as a one-dimensional line, (ii) three nodes as a two-dimensional surface, and (iii) four nodes as a three-dimensional structure. Hypernetworks of 'omic data can have edges shared between hundreds of nodes. (B) Hypernetwork diagram illustrating how a pair of nodes (a-h) can be connected by more than one edge. In this example, nodes e and d share two edges, as do b and d. (C) A hypernetwork plotted as a heat map can be used to investigate clustering of blood peripheral blood mononuclear cell (PBMC) transcripts, based on correlation to, for example, the adipose tissue transcriptome. A central cluster, defined using hierarchical clustering, groups PBMC transcripts based on high numbers of shared edges (red square, n = 965). This approach was applied to define groups of PBMC transcripts with similar profiles when correlated against each other 'omic layer. (D) Gene probe level overlaps between PBMC transcriptome clusters identified by hypernetwork shared with the other 'omic data sets. PBMC transcriptomic changes are correlated with changes in the miRNAome, adipose tissue transcriptome, and metabolome (gas chromatography-mass spectrometry and liquid chromatography-mass spectrometry overlaps combined); overlaps are common PBMC transcripts with correlation to the 'omic data sets. Values in brackets represent the size of PBMC transcriptomic clusters drawn from all differentially expressed PBMC transcripts (n = 4426, p<0.05). Data demonstrate a fundamental relationship in glucocorticoid response between PBMC and adipose tissue (965 genes) and reinforce the presence of common pathways in these two independent tissues.
Further robustness of the Random Forest observations was provided by demonstrating that the minimal depth at which the variables of importance became active in prediction was small (Figure 4-figure supplement 1).
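A minimal sketch of the two classification steps, PLS-DA followed by BORUTA feature selection over a Random Forest, on toy data standing in for the 271-transcript matrix. It assumes scikit-learn and the boruta package (BorutaPy), which may differ from the tooling actually used in the study:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy  # pip install Boruta

# Toy stand-in: 20 samples (10 HC, 10 saline) x 271 transcripts
rng = np.random.default_rng(42)
X = rng.normal(size=(20, 271))
y = np.array([1] * 10 + [0] * 10)  # 1 = HC exposure, 0 = saline
X[y == 1, :30] += 1.5              # plant signal in the first 30 genes

# PLS-DA: regress the class label on expression, classify by threshold
plsda = PLSRegression(n_components=2).fit(X, y)
pred = (plsda.predict(X).ravel() > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())

# BORUTA: Random Forest importances tested against 'shadow' features
rf = RandomForestClassifier(n_estimators=500, random_state=42)
selector = BorutaPy(rf, n_estimators="auto", random_state=42).fit(X, y)
print("variables of importance:", int(selector.support_.sum()))
```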
Integration of circulating 'omic data sets leads to miRNA and metabolite markers of GC action
We further examined interactions between the circulating 'omics data associated with GC exposure (Figure 3D). All of the circulating 'omics data were combined to form a correlation matrix, and hierarchical clustering was used to identify 'omic data points with similar correlation (Figure 5-figure supplement 1). Eleven clusters including transcriptomic, miRNAomic, and metabolomic data were identified, and these clusters were shown to have enrichment within the interactome network model (Supplementary file 1f and Appendix 2). We then quantified the number of correlations between all the circulating 'omic data associated with GC exposure (n = 336) using a hypernetwork. This approach was used to define a group of highly connected multi-omic elements with a relationship to GC exposure (Figure 5A).
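A minimal sketch of this clustering step with SciPy, converting the combined correlation matrix into a dissimilarity and cutting the dendrogram into eleven clusters; the average-linkage method and the 1 − |r| distance are assumptions, as the text does not specify them:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy stand-in: 336 differentially expressed circulating 'omic elements
# (transcripts, miRNAs, metabolites) measured over 10 subjects
rng = np.random.default_rng(7)
data = rng.normal(size=(336, 10))

corr = np.corrcoef(data)                     # element-by-element r
dist = 1.0 - np.abs(corr)                    # correlation -> dissimilarity
condensed = dist[np.triu_indices(336, k=1)]  # condensed form for linkage

tree = linkage(condensed, method="average")
clusters = fcluster(tree, t=11, criterion="maxclust")
print(np.bincount(clusters)[1:])             # sizes of the 11 clusters
```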
A hypernetwork model of the core group of 139 highly connected elements was generated (Figure 5B). DCK was the only gene shared with the GC-dependent adipose tissue transcriptome that also had predictive value (highlighted with a red square in Figure 5B). Deletion of the DCK gene region has been shown to be associated with increased sensitivity to GCs (Malani et al., 2017), an observation in alignment with the reduction in expression we found in both PBMC and adipose tissue transcriptomes in association with GC exposure (Figure 4C).
The hypernetwork model (Figure 5B) also highlighted a range of related miRNAs and metabolites. A hierarchical model of modules within the network was assessed using the measure of network centrality (Figure 5C). These modules revealed multi-omic relationships and demonstrated that miR-122-5p was the only miRNA present in higher order modules as measured by network centrality. miR-122-5p was correlated with cortisol exposure and the expression of FKBP5, a regulator of GC sensitivity (cluster 11 in Figure 5-figure supplement 1 and Supplementary file 1f).
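A minimal sketch of ranking network elements by centrality with networkx; eigenvector centrality and the toy graph are assumptions, since the exact centrality measure used for the module hierarchy is not specified here:

```python
import networkx as nx

# Toy graph standing in for the hypernetwork of highly connected elements
G = nx.karate_club_graph()

# Rank nodes by eigenvector centrality; modules can then be ordered by
# the centrality of their most central member (the red-to-green hierarchy)
centrality = nx.eigenvector_centrality(G, max_iter=1000)
ranked = sorted(centrality, key=centrality.get, reverse=True)
print("most central elements:", ranked[:5])
```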
Targeted replication of the plasma miR-122-5p fold change from the experimental study in subjects with Addison's disease, using an independent RNA separation procedure, showed a marked down-regulation of miR-122-5p by increased GC exposure (p=0.009) (Figure 6). Two subjects did not show this miR-122-5p response: one man (disease duration 2 years; body mass index [BMI] 23.8 kg/m^2; hydrocortisone 20 mg daily, fludrocortisone 0.1 mg daily) and one woman (disease duration 23 years; BMI 28.1 kg/m^2; hydrocortisone 30 mg daily, fludrocortisone 0.2 mg daily), both of whom experienced mild mental fatigue during GC withdrawal.
Replication of miRNA findings in independent study groups
Based on (i) the functional association of a circulating miRNA with gene expression and metabolomics and (ii) the correlation between the PBMC transcriptome and plasma miRNAome (Figure 3D), a targeted replication of the plasma miRNA findings was conducted using an independent RNA separation procedure. Twelve miRNAs were re-analyzed in the current study and in three other independent studies including subjects with different GC exposures: (i) 60 subjects with rheumatoid arthritis with and without tertiary adrenal insufficiency after a short-term stop in their GC treatment (low vs. physiological GC exposure, respectively) (Borresen et al., 2017); (ii) 20 subjects with Addison's disease receiving HC replacement therapy and 20 matched healthy control subjects (low vs. physiological GC exposure, respectively) (Bergthorsdottir et al., 2017); and (iii) 20 healthy subjects with acute low, medium, and excessive GC exposure (Stimson et al., 2017).
From this analysis, miR-122-5p was significantly associated with different GC exposure in all studies (Figure 7A-D). The expression of miR-122-5p was higher in subjects with rheumatoid arthritis and reduced GC exposure due to tertiary adrenal insufficiency (Figure 7A), and subjects with Addison's disease had higher expression of miR-122-5p than healthy matched controls (Figure 7B). In the experimental study in healthy subjects, the expression of miR-122-5p was increased both after low and excessively high GC exposure compared to medium GC exposure at both high and low insulin levels (Figure 7C, D, respectively). The other 11 miRNAs (including miR-425-3p) did not show a relationship with GC exposure in the three replication studies.

Figure 4 (caption, partially recovered). (A) PLS-DA classification of GC exposure versus GC withdrawal (orange points), using 271 of 965 PBMC transcripts confirmed as robust in the hypernetwork by analysis of dissimilarity. X-variates 1 and 2: PLS-DA components; expl. var: explained variance. (B) BORUTA feature selection identifies variables (genes) of importance in classification using a Random Forest approach to model experimental background noise (green: confirmed classification; yellow: tentative classification; red: rejected classification; blue: 'shadow' variable modeling experimental noise). Of 271 transcripts initially used, 59 were identified as important (confirmed [green] or tentative [yellow]) in separating GC exposure from GC withdrawal, as well as having the same direction of fold change in both PBMC and adipose tissue transcriptomic data sets. (C) Predictive genes that are significantly differentially expressed between GC exposure and GC withdrawal in both PBMC and adipose tissue transcriptomes and display fold change in the same direction in both tissues (n = 9). (D) Association of predictive genes (six out of nine) with GC response through gene ontology. Data demonstrate the presence of a robust transcriptomic profile predicting GC response in two independent tissues. The gene set that classified both PBMC and adipose tissue transcriptomes in relation to GC exposure with fold change in the same direction (see Figure 4B; 59 genes) was validated by further testing in five other publicly available studies of GC action in cellular systems.

Figure 5 (caption, partially recovered). (B) Hypernetwork model of the core group of 139 highly connected elements, of which 120 map to genes/miRNA/metabolites. Blue circles: genes with differential expression; red circles: differentially expressed metabolites; green: differentially expressed miRNA. The red box highlights DCK, one of the nine genes identified as a classifier of GC response (see Figure 4C). (C) Module decomposition of the hypernetwork. Gene modules (hexagons representing multiple highly connected genes) are named by the most central gene in each module. miR-122-5p is present in the core of two modules (shown); the color of modules represents the centrality hierarchy: red, most central in the network; green, least central in the network.
Discussion
In a clinical experimental study designed to identify biomarkers of GC action, we succeeded in generating two profoundly different states of GC exposure within the physiological range in the same individual. The novelty of this study is the identification of pathways related to GC response and putative biomarkers of GC action in gene expression, the metabolome, and miRNAs derived from integrated multi-omic analysis in two independent tissues. We identified a transcriptomic profile that was under similar GC regulation in both PBMC and adipose tissue transcriptomes, which was then validated by comparison to a range of previously published data by other research groups from cellular assays. We also identified a circulating miRNA, miR-122-5p, which was correlated with the circulating transcriptome and metabolome findings, suggesting for the first time a functional role in GC action. Moreover, the association between the expression of miR-122-5p and GC exposure was replicated in three independent study groups. In order to identify putative biomarkers of GC action in humans, a clinical study was considered to be the most appropriate experimental setting. Addison's disease, or primary adrenal insufficiency, is a rare disorder but a unique clinical model for GC biomarker discovery due to absent or very low endogenous GC production (Gan et al., 2014; Saevik et al., 2020). Subjects with Addison's disease were studied in a random order during physiological GC exposure and GC withdrawal. During GC exposure, HC was delivered in isotonic saline via an infusion pump using a circadian pattern, and saline alone (using the same volume and infusion pattern as during HC infusion) was administered during the GC withdrawal in order to prevent a state of sodium and fluid deficiency. This study design therefore allowed a within-individual comparison accounting for circadian rhythm and food intake. The marked difference in serum and urinary cortisol and cortisone, and the similar serum electrolytes, glucose, body weight, and blood pressure between the two interventions, support the experimental success of the study design and strongly indicate that confounders related to metabolic changes or other secondary events related to the GC exposure or GC withdrawal were not influencing the output of the study.

Figure 5-figure supplement 1 (caption). Heat map with clusters of circulating 'omic data associated with glucocorticoid exposure identified using a correlation matrix.

Figure 6 (caption). Replication of miR-122-5p as a putative biomarker of glucocorticoid (GC) action in the current biomarker discovery study. Targeted replication of the plasma miR-122-5p fold change in the current study population between subjects with Addison's disease during GC exposure and GC withdrawal, conducted using an independent RNA separation procedure in the same samples, showed a significant downregulation of miR-122-5p expression with increased GC exposure (p=0.009).

Figure 7 (caption). Replication of miR-122-5p as a putative biomarker of glucocorticoid (GC) action in independent patient groups with different GC exposure. (A) The expression of miR-122-5p was higher in subjects with rheumatoid arthritis and reduced GC exposure due to tertiary adrenal insufficiency after a short-term stop of the GC treatment (AI) than in those without tertiary adrenal insufficiency (Normal). (B) Subjects with Addison's disease (AD) had higher expression of miR-122-5p than healthy matched controls (Control). In an experimental study in healthy subjects, the expression of miR-122-5p was increased both after low and excessively high GC exposure (LowGC and ExcessGC, respectively) compared to medium GC exposure (MedGC) at both (C) high and (D) low serum insulin levels. Diamond: mean. Box = median ± interquartile range. Whiskers = upper and lower quartiles. The miR-122-5p axis is presented as normalized expression.
The measurable but very low concentrations of serum cortisol and cortisone throughout the GC withdrawal may be explained by residual adrenal steroid secretion in some subjects (Gan et al., 2014; Saevik et al., 2020) and/or by conversion of cortisone to cortisol in the liver and adipose tissue (Stimson et al., 2014).
Network models of 'omic data can be used as a framework to assess the potential utility of biomarkers (Stevens et al., 2014). In this study, we have used a hypernetwork model of GC action based on differential gene expression in PBMCs as a basis to integrate adipose tissue transcriptome, plasma miRNA, and serum metabolomic data. Hypernetwork analysis leverages the power inherent in large data sets to assess interactions between 'omic elements in a manner that is robust to false positives (Battiston et al., 2020). The associated interactome network derived from the PBMC transcriptome was shown to contain a number of genes with previously known GC-dependent binding of NR3C1 (the GC receptor) to regulatory elements, evidence that supports the specificity of the study design (Davis et al., 2018; Casper et al., 2018). Gene ontology analysis of the differential gene expression identified a range of pathways classically associated with GC action, including GC-receptor signaling, immunoregulatory pathways such as those involving NF-κB, metabolic pathways, and cell cycle pathways. The plasma miRNA and serum metabolomic data were shown to map to the interactome network model of GC action, and this was taken as support for these data being putative circulating biomarkers functionally related to GC action. Differential expression induced by GC treatment in both PBMCs and adipose tissue was indirectly associated with similar downstream elements by gene ontology analysis. These genes were not directly implicated in the GC response, so, while the exact mechanisms may be different in each tissue, the effects are coordinated through the same elements. Integration of the multi-omic data including both PBMC and adipose tissue transcriptomes was performed in order to increase the robustness of putative markers that could reflect action in other tissues such as adipose tissue, which is an important target organ for the metabolic actions of GCs. The 59 genes that behaved similarly in PBMC and adipose tissue were then validated in a range of studies examining GC response in different cellular systems. These included primary cell cultures of keratinocytes (Stojadinovic et al., 2007) and lens epithelial cells (Gupta et al., 2005), along with PBMCs (Carlet et al., 2010) and cancer cells [both lymphoblastic leukemia (Carlet et al., 2010) and osteosarcoma (Lu et al., 2007; Jewell et al., 2012)]. The set of nine genes co-regulated in relation to GC exposure and GC withdrawal in both PBMC and adipose tissue transcriptomes can therefore be considered putative markers of GC response. These could be used as a gene set to interrogate GC action in other experimental settings.
All the miRNA findings in this study are novel. While emerging experimental evidence indicates that miRNAs act on the regulation of GC action at several points (Clayton et al., 2018), this is the first time that miRNAs are shown to be globally correlated with GC action in humans. Both the hypernetwork analysis and the interactome network model implied the functional significance of some miRNAs, particularly miR-122-5p. In our hypernetwork model, the expression of miR-122-5p was correlated with clusters of genes that were centrally coordinated by expression of both RNF157 and TBXAS1, the former suggested to be a key regulator of both PI3K and MAPK signaling pathways, commonly perturbed in cancer and metabolic disorders (Dogan et al., 2017). Expression of TBXAS1 is pharmacogenomically linked to inhaled GC exposure in asthma (Dahlin et al., 2020). miR-122 is the precursor transcript of mature miRNAs, including miR-122-5p (Carthew and Sontheimer, 2009; Bartel, 2004). miR-122 is expressed in the liver in humans (Tsai et al., 2009; GTEx Consortium, 2015; GTEx Consortium, 2013) and mice (Tsai et al., 2009). Hepatocyte nuclear factor HNF4A (Li et al., 2011; Xu et al., 2010), along with HNF3A (FOXOA1), HNF3B (FOXOA2), and HNF1A (Xu et al., 2010; Coulouarn et al., 2009), has been shown to be a key regulator of miR-122 expression in human cells. Down-regulation of miR-122 in murine models has been associated with non-alcoholic fatty liver disease (Alisi et al., 2011) and diabetes mellitus (Guay et al., 2011), and in humans, miR-122-5p has also been associated with fatty liver disease (Raitoharju et al., 2016).
miR-122-5p may be a functional link between unphysiological GC exposure and metabolic and cardiovascular disease. Increased exposure to GCs impairs glucose tolerance and may induce type 2 diabetes (Hackett et al., 2014). Indeed, reduced miR-122-5p expression has been seen in animal models of diabetes, and the reduction of this miRNA in response to increased GC exposure may suggest that miR-122-5p is a functional link between GC action and metabolism. In support of these findings are observations showing that miR-122-5p regulates insulin sensitivity in murine hepatic cells by targeting the insulin-like growth factor (IGF) 1 receptor (Dong et al., 2019). Recent human studies have also suggested that miR-122-5p is an indicator of the metabolic syndrome, with reduced expression in response to weight loss in overweight/obese subjects (Hess et al., 2020). miR-122-5p has also been suggested as a biomarker of coronary artery stenosis and plaque instability (Wang et al., 2019; Singh et al., 2020; Ling et al., 2020). As unphysiological GC exposure has been associated with obesity, diabetes, and cardiovascular disease (Walker, 2007), it is possible that miR-122-5p reflects different GC exposure in these disorders. The subjects with Addison's disease in our clinical experimental study had no other comorbidities previously known to be associated with miR-122-5p expression, and therefore the presence of such confounders in our miR-122-5p finding seems unlikely.
Specific miRNAs circulating in a stable, cell-free form in plasma or serum may serve as biomarkers in some diseases (Kroh et al., 2010), and, in our integrated analysis, they seem to be realistic and clinically useful markers of GC action. We therefore focused on the replication of the miRNA findings from the discovery study. For this purpose, we performed a targeted analysis of 12 putative miRNAs and analyzed them in 120 subjects from independent study groups with different GC exposure in terms of dose, duration of exposure, and route of administration. The rationale for selecting these groups was that their GC exposure mostly remained within the normal physiological range. Despite the experimental differences between these studies, and the fact that these studies were not designed to study miRNA biomarkers of GC action, miR-122-5p was down-regulated by increased GC exposure in all of them. One exception was when short-term, excessively high GC exposure was studied in afternoon samples in 20 subjects. There is no clear explanation for this, except the possibility that high non-physiological GC exposure has other secondary effects that may affect the levels of miR-122-5p.
The network analysis also identified putative metabolomic markers of GC action. GCs have a key role in the metabolic regulation of stress by mobilizing energy through glucose, protein, and lipid metabolism. Previous studies have found an association between different GC doses and levels of branched-chain amino acids, fatty acids, some acyl carnitines, and tryptophan and its metabolites (Alwashih et al., 2017a; Sorgdrager et al., 2018). In our study, the amino acid tyrosine and the pyrimidine base uracil had a central position in the hypernetwork, which defined a group of highly connected multi-omic relationships within physiological GC exposure. Some of the other metabolomic data from our study were also in line with previous metabolomic studies in patients with adrenal insufficiency (Alwashih et al., 2017b; Sorgdrager et al., 2018). Excessive exposure to GCs in healthy subjects has, on the other hand, shown a strong, immediate, and long-lasting impact on numerous biological pathways in the metabolome that may be either direct or indirect through the metabolic and cardiovascular action of pharmacological doses of GCs (Bordag et al., 2015).
There are some study limitations that need to be acknowledged. The low number of subjects included in the clinical experimental study could have reduced the power to detect a putative marker in individual 'omic data sets, but this limitation was compensated for by the crossover study design and the integration of multi-omic layers. Another limitation is that we have only studied markers collected in the morning, during the physiological peak of cortisol exposure. However, the strengths of our study are the experimental study design, the consideration of diurnal variation in GC action and the impact of food intake, and the within-individual comparison, which minimizes confounders, as well as the fact that the putative markers that we have replicated are associated with known GC-responsive genes in two different tissues, suggesting their functional importance in GC action. Moreover, the integration of multi-omic layers allows for the reduction of background noise (Huang et al., 2017) and forms the basis for a detailed model of GC action. Hypernetwork summaries of correlation networks are recognized as providing signatures of mechanism (Pearcy et al., 2016; Johnson, 2011; Butte et al., 2000; Oldham et al., 2006) and, as such, are useful to assess both function and define markers of direct action.
In this clinical biomarker discovery study, we identified genes, miRNAs, and metabolites that are differently expressed during GC exposure and GC withdrawal in subjects with Addison's disease. The multi-omic data showed a high degree of coherence, and network analysis identified transcripts and metabolites that were closely correlated. The final outcome of the study is the identification of a miRNA that is regulated by GC exposure and correlated with genes and metabolites that are also regulated by GCs in this study, indicating its functional relevance. The replication of this miRNA in three independent study groups increases the likelihood that the discovered miRNA, miR-122-5p, could become a biomarker of GC action to be used in clinical settings.
Experimental study design

Study design
The study was a prospective, single-center, single-blind, randomized, two-period/crossover clinical trial.
Study subjects
Men and women with Addison's disease for >12 months on stable cortisol replacement (with HC 15-30 mg/day) for ≥3 months followed at the Center for Adrenal Diseases in the Out-patient Clinic at the Department of Endocrinology-Diabetes-Metabolism, Sahlgrenska University Hospital (tertiary referral hospital), Gothenburg, Sweden, were eligible for inclusion. Other inclusion criteria were age 20-60 years, body mass index 20-30 kg/m², and ability to comply with the protocol procedures. Exclusion criteria were GC replacement therapy for an indication other than Addison's disease, any treatment with sex hormones including contraceptive drugs, treatment with levothyroxine, renal or hepatic failure, significant and symptomatic cardiovascular disease, diabetes mellitus, current infectious disease with fever, and pregnancy or breastfeeding. Recruitment was stopped when all eligible subjects had been asked to participate. Power calculation was not performed because of the exploratory nature of the study. Power calculations were also difficult in the context of 'omic analysis as there may be variable effect sizes over different 'omic elements.
The study was approved by the Ethics Review Board of the University of Gothenburg, Sweden (permit no. 374-13, 8 August 2013) and conducted in accordance with the Declaration of Helsinki. Written informed consent was obtained from all subjects before participation. The study was registered at ClinicalTrials.gov with identifier NCT02152553.
Study treatment
HC infusion was prepared by adding 0.4 mL of Solu-Cortef 50 mg/mL to 999.6 mL 0.9% saline, which resulted in 1 mg HC per 50 mL intravenous infusion. HC infusion was adjusted in accordance with previous observations in healthy males (Kerrigan et al., 1993) and interventions in both sexes (Løvås and Husebye, 2007; Figure 1B). The aim was to achieve a near-physiological circadian cortisol curve with early morning rise in serum cortisol that would peak at 7 AM and trough concentrations at midnight. In the GC-withdrawal intervention, 0.9% saline infusion alone was administered using the same volume as during the HC infusion. Thus, a person weighing 75 kg received 2 L of intravenous infusion over 22 hr during each intervention.
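As a quick consistency check on the stated dilution (a worked calculation added here, not part of the original protocol):

0.4\,\mathrm{mL} \times 50\,\mathrm{mg/mL} = 20\,\mathrm{mg\ HC\ in\ } 1000\,\mathrm{mL} \quad\Rightarrow\quad \frac{20\,\mathrm{mg}}{1000\,\mathrm{mL}} = \frac{1\,\mathrm{mg}}{50\,\mathrm{mL}}

That is, each 50 mL of infused volume delivered 1 mg of HC, consistent with the preparation described above.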
Interventions
All subjects were admitted after an overnight fast to the in-patient Endocrinology Department at the Sahlgrenska University Hospital at 8 AM (first intervention day) and were discharged at 12 PM the following day (second day). Subjects were randomized using a free randomization plan (generated at http://www.randomization.com/ on 27 April 2014) before the first intervention to receive either HC infusion or only saline infusion in a single-blind, crossover manner at least 2 weeks apart ( Figure 1B). The researcher responsible for the clinical study generated the randomization plan, enrolled the study subjects, and assigned participants to interventions. Female subjects (when fertile) were studied during the early follicular phase (days 5-10) of their regular cycle under both interventions. Subjects were told not to take their ordinary mineralocorticoid dose on the day before each intervention but to take their ordinary HC dose. Subjects received standard meals at fixed times during both interventions. Their consumption of coffee or tea was recorded during the first intervention in order to consume the same amount and at the same time points during the second intervention.
During each intervention, the subjects' blood pressure, body temperature, and weight were monitored. Because of the study design and the variations in circadian rhythm, blood samples were collected at exactly the same times: before the start of intervention, at midnight (12 AM), and in the morning of the second intervention day (7 AM). Urine was collected between midnight and morning (overnight), and abdominal subcutaneous fat was collected in the morning of the second intervention day immediately after blood and urine sampling. Adipose tissue was collected after local injection with lidocaine under the umbilicus, on the right side of the abdomen during saline infusion and on the left side during HC infusion. The study was unblinded for each study subject after the completion of all aspects of the study (the second intervention).
Replication studies

Baseline samples in subjects treated with prednisolone for rheumatoid arthritis
This was a cross-sectional clinical study of prednisolone-induced adrenal insufficiency undertaken at the Department of Medical Endocrinology and Metabolism, University Hospital Rigshospitalet, Copenhagen, Denmark, between 2012 and 2018 (Borresen et al., 2017). In the current replication analysis, 60 subjects were included. All subjects had rheumatoid arthritis, had received long-term prednisolone treatment (minimum 6 months), and were treated with a current prednisolone dose of 5 mg/day. Of the 60 subjects, 23 had an insufficient response to the Synacthen test (GC-induced adrenal insufficiency, AI group) and 37 had a normal response (normal group). The samples included in the replication analysis were collected in the morning after an approximately 48 hr pause of prednisolone dosing (before the Synacthen test) and after overnight fasting. Plasma miRNA analysis of frozen samples was performed at Exiqon Services, Denmark.
Case-control study in subjects with or without Addison's disease
This was an observational, cross-sectional, single-center, case-control study undertaken in our unit in Gothenburg, Sweden, starting in 2005 (Bergthorsdottir et al., 2017). In the current replication analysis, the subgroup of 20 subjects with Addison's disease under daily replacement therapy with oral HC ≥30 mg (AD group) and their 20 healthy control subjects with no GC therapy matched for age and gender (control group) were included. The samples included in the replication analysis were collected in the morning between 8 AM and 10 AM after an overnight fast and, for the cases, after morning administration of their oral HC, which means a very low cortisol exposure during the night before sample collection. Plasma miRNA analysis of frozen samples was performed at Exiqon Services, Denmark.
Randomized, crossover study in healthy subjects
This was a randomized, double-blind study in 20 lean healthy male volunteers undertaken at the Edinburgh Clinical Research Facility between July 2010 and April 2012. The full protocol has been published previously (Stimson et al., 2017). Volunteers were randomized to receive either a low- or medium-dose insulin infusion (10 subjects in each group) and attended on three occasions after overnight fasting. Subjects received metyrapone (to inhibit adrenal cortisol secretion) with and without HC infusion (over 6.5 hr) in order to produce low, medium, or excessively high GC levels (LowGC, MedGC, or ExcessGC, in the high-insulin and low-insulin cohorts, respectively). The samples included in the replication analysis were collected in the afternoon at the end of each intervention (approximately 6.5 hr after start) on each of the three occasions (low, medium, or excessively high GC levels). Plasma miRNA analysis of frozen samples was performed at Exiqon Services, Denmark.
Generation and preparation of 'omic data
Plasma cortisol and cortisone were analyzed using liquid chromatography-mass spectrometry (LC-MS), and urinary-free cortisol and cortisone were analyzed using gas chromatography-mass spectrometry (GC-MS) at the Mass Spectrometry Core Laboratory, Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK. PBMCs were isolated on-site from whole blood using a gradient-based separation procedure and Ficoll-Paque PREMIUM (GE Healthcare).
A microarray gene expression analysis using Affymetrix Human Gene 2.0 ST arrays in both PBMC and adipose tissue was performed at the Array and Analysis Facility, Science for Life Laboratory at Uppsala Biomedical Center (BMC), Sweden.
Preprocessing of 'omics data sets was carried out in the following ways. PBMC and adipose tissue transcriptomes were normalized using robust multichip average (RMA) via the R package oligo (Carvalho and Irizarry, 2010), which corrects for background variation, quantile normalizes, and summarizes features to gene-probe set level (Figure 3-figure supplement 2). GC-MS and LC-MS metabolomic data sets were analyzed using the R package MetaboanalystR (Chong and Xia, 2018), which filters variables based on ranked interquartile range, normalizes metabolites to sample median, and log transforms the resultant intensities ( Figure 3-figure supplement 3). Qlucore Omics Explorer (version 3.3, Lund, Sweden) was used to scale and mean center miRome data. How all these analyses were performed is described in detail in Appendix 3.
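For orientation, a minimal R sketch of the RMA step with the oligo package follows; the CEL file directory name is a hypothetical placeholder, not from the study:

# Sketch of RMA preprocessing with the oligo package (assumed inputs).
library(oligo)

# Read raw Affymetrix CEL files; "cel_dir" is a hypothetical placeholder path
raw <- read.celfiles(list.celfiles("cel_dir", full.names = TRUE))

# rma(): background correction, quantile normalization, and
# summarization to probe-set level in one call
eset <- rma(raw)

expr <- exprs(eset)  # probe sets x samples matrix of normalized log2 intensities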
Data analysis of differential gene expression
Principal component analysis (PCA) was performed to provide further quality control and define the relationship of variance between samples, allowing structure within the data set to be defined (Qlucore Omics Explorer 3.3). Quality control of transcriptomic data was performed using PCA with cross-validation, and data consistency was confirmed. No outliers were identified. Differential gene expression was determined by a paired t-test comparing the two interventions. Network analysis of DEGs was performed using Advaita Bio's iPathwayGuide (https://www.advaitabio.com/ipathwayguide); the gene ontology analysis performed with this software implements the 'Impact Analysis' approach, which takes into consideration the direction and type of all signals on a pathway, and the position, role, and type of every gene (Ahsan and Drăghici, 2017).
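A minimal sketch of the per-gene paired t-test in R, assuming 'expr' is the normalized probe-set-by-sample matrix and the two column index vectors refer to the same subjects in the same order (names are illustrative, not from the study):

# Paired differential expression between the two interventions (sketch).
paired_de <- function(expr, exposure_cols, withdrawal_cols) {
  p <- apply(expr, 1, function(x)
    t.test(x[exposure_cols], x[withdrawal_cols], paired = TRUE)$p.value)
  log2fc <- rowMeans(expr[, exposure_cols]) - rowMeans(expr[, withdrawal_cols])
  data.frame(log2FC = log2fc, p.value = p, fdr = p.adjust(p, method = "fdr"))
}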
Gene ontology, gene expression regulated by miRNA, and causal network analysis

Gene ontologies were associated with differentially regulated gene lists (Ingenuity Pathway Analysis [IPA], Qiagen, Redwood City, CA). miRNAs were paired with genes that were theoretically regulated by specific miRNAs using IPA. The databases used for this mapping were TarBase (Vlachos et al., 2015), miRecords (Xiao et al., 2009), and peer-reviewed biomedical literature, as well as predicted miR-mRNA interactions from TargetScan (Agarwal et al., 2015).
The Encyclopedia of DNA Elements (ENCODE) data (Rosenbloom et al., 2013) was used to map genes in the interactome network model of GC action that had been previously shown to have dexamethasone dose-dependent DNA binding of NR3C1, the GC receptor gene.
Causal network analysis (CNA) allows the identification and prioritization of regulatory system elements within transcriptomic models. CNA was performed within IPA (Krämer et al., 2014). CNA identifies upstream molecules, up to three steps distant, that potentially control the expression of the genes in the data set (Krämer et al., 2014). A prediction of the activation state for each regulatory factor (master regulator), based on the direction of change, was calculated (Z-score) using the gene expression patterns of the transcription factor and its downstream genes. An absolute Z-score of ≥|1.4| and a corrected p-value<0.05 (Fisher's exact test) were used to compare the regulators identified.
Network model construction and comparison
Lists of DEGs were used to generate network models of protein interactions in Cytoscape 2.8.3 (Smoot et al., 2011) by inference using the BioGRID (3.4.137) database (Chatr-Aryamontri et al., 2015).
The Cytoscape plug-in Moduland (Kovács et al., 2010; Szalay-Beko et al., 2012) was applied to identify overlapping modules, an approach that models complex modular architecture within the human interactome (Chang et al., 2013) by accounting for the non-discrete nature of network modules (Kovács et al., 2010). Modular hierarchy was determined using a centrality score and further assessed using hierarchical network layouts (summarizing the underlying network topology). The central module cores (metanodes of the 10 most central elements) were determined and used as a basis to integrate the miRNA and metabolomic data. Transcriptomic and metabolomic data were combined to form a single network model using the Metscape (Karnovsky et al., 2012) plug-in for Cytoscape. Differential 'omic data was compared and clustered in a correlation matrix using the corrplot plug-in (Murdoch and Chow, 1996) for R (R Development Core Team, 2020).
Similarity network fusion
Subject-level similarity network fusion (SNF) (Wang et al., 2014) was performed on 'omic data as a test for similarity. To perform SNF, the SNFTool R-package was used (Wang et al., 2014). First, Euclidean distances were calculated between gene probe sets, and these were then combined using a nonlinear nearest neighbor method over 20 iterations. The fused data was subjected to spectral clustering and presented as a heat map.
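A minimal sketch of the fusion step with the SNFtool package, following the stated 20 iterations; the toy matrices, the number of neighbors K, and the kernel width alpha are assumptions:

# Similarity network fusion across two 'omic layers (toy data).
library(SNFtool)

set.seed(1)
X1 <- matrix(rnorm(20 * 50), nrow = 20)  # 20 subjects x 50 features (layer 1)
X2 <- matrix(rnorm(20 * 30), nrow = 20)  # 20 subjects x 30 features (layer 2)

K <- 10      # nearest neighbors (assumed)
alpha <- 0.5 # affinity kernel width (assumed)
t <- 20      # fusion iterations, as stated above

W1 <- affinityMatrix(dist2(X1, X1), K, alpha)  # dist2: pairwise (squared) Euclidean distances
W2 <- affinityMatrix(dist2(X2, X2), K, alpha)

W <- SNF(list(W1, W2), K, t)              # fused subject-similarity network
clusters <- spectralClustering(W, 3)      # spectral clustering into, e.g., 3 groups
heatmap(W)                                # present fused network as a heat map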
Hypernetworks
We modeled the dynamics of potentially relevant PBMC and adipose tissue transcripts, miRNAs, and metabolites by assessing their activity as measured by the number of shared correlations against the background of all 'omic elements called present after data processing.
A matrix (m rows and n columns) of correlation distances (r-values) was generated between the significantly differentially expressed multi-omic data (forming the m rows) and all 'omic data called present (forming the n columns). The r-values were normally distributed.
A similarity matrix was defined by dichotomizing the correlation distances at an r-value threshold of ≥|1.5| sd (if |r| ≥ 1.5 sd, then value = 1; if |r| < 1.5 sd, then value = 0); the new matrix was termed M and represents the incidence matrix of the hypernetwork. An element of M, m_ij, where i and j are elements of m and n respectively, is defined as follows:

m_{ij} = \begin{cases} 1, & |r_{ij}| \geq 1.5\,\mathrm{sd} \\ 0, & |r_{ij}| < 1.5\,\mathrm{sd} \end{cases}

To generate the hypernetwork, we multiplied M by the transpose of M, M^T (Johnson, 2011; Ha et al., 2020); the elements of the resulting square matrix (sM, an m × m matrix) are the number of correlations shared by each pair of interacting 'omic elements; this is also the number of edges connecting each pair of nodes. sM was clustered using hierarchical clustering to identify the group of highly connected 'omic elements.
The dichotomization parameters were shown to correspond to the maximum signal window in the data using a chi-squared distance metric (Figure 3-figure supplement 4). The chi-squared distance (X²) was defined as

X^2 = \sum_{i=1}^{N \times N} \frac{(m_i - m_e)^2}{m_e},

where N is the order of the matrix sM, m_i is the i'th element of sM, and m_e is the expected value of an element of sM. The expected value of an element of sM was calculated at any chosen dichotomization threshold by dividing the total number of correlations by the order of the matrix.
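The construction above can be sketched in a few lines of R; the toy dimensions, the use of sd over all r-values as the dichotomization scale, and the literal reading of the expected-value definition are assumptions, not taken from the study code:

# Hypernetwork construction from a correlation matrix (toy data).
set.seed(1)
sig <- matrix(rnorm(20 * 9), nrow = 20)          # m = 20 differential elements, 9 samples
all_omic <- matrix(rnorm(200 * 9), nrow = 200)   # n = 200 elements called present

R <- cor(t(sig), t(all_omic))                    # m x n matrix of r-values

M <- (abs(R) >= 1.5 * sd(R)) * 1                 # dichotomized incidence matrix

sM <- M %*% t(M)            # shared-correlation (edge) counts between element pairs
hc <- hclust(dist(sM))      # hierarchical clustering of the hypernetwork

# Chi-squared distance at this threshold; m_e taken literally from the
# definition above (total correlations divided by the order of sM)
m_e <- sum(sM) / nrow(sM)
chi_sq <- sum((sM - m_e)^2 / m_e)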
Differential expression analysis was performed to refine genes for hypernetwork analysis. This approach serves to identify potentially relevant 'omic elements. FDR-corrected p-values for all elements selected for hypernetwork integration are presented in Supplementary file 1g. We identified 4426 DEGs in PBMCs, 3520 adipose tissue DEGs, 38 metabolites (17 LC-MS, 21 GC-MS), and 12 miRNAs below an uncorrected p-value of 0.05. Data was analyzed across nine matching samples (normalized log2 score was inverted between GC exposure and GC withdrawal, i.e., +1 and -1, respectively).
A hypernetwork is inherently robust as individual correlations are not considered significant; rather hypernetworks model higher order interactions between nodes ('omic elements) based on large numbers of shared edges (correlations). This approach only highlights 'omic elements that are supported by the majority of the data and, as such, is robust to a wide range of r-value thresholds as well as small sample sizes.
Further, robustness of the hypernetwork observations was determined using a dissimilarity matrix derived from the original similarity matrix (i.e., the complement of the similarity matrix). The elements assessed as dissimilar were subtracted from those defined as similar. Elements within the M × M^T output of the dissimilarity analysis that were also similar were eliminated from further predictive analysis.
The BORUTA R package (Kursa, 2014;Kursa and Rudnicki, 2010) was used for feature selection of transcriptomic data with predictive value. Random Forest (Breiman, 2001) was implemented in R using 5000 trees to determine the predictive value expressed as the area under the curve of the receiver operating characteristic.
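A minimal sketch of this step in R; the toy data and class labels are illustrative, not from the study:

# Feature selection with Boruta and predictive assessment with Random Forest.
library(Boruta)
library(randomForest)
library(pROC)

set.seed(1)
x <- data.frame(matrix(rnorm(40 * 25), nrow = 40))      # 40 samples x 25 features
y <- factor(rep(c("exposure", "withdrawal"), each = 20))

sel <- Boruta(x, y)                      # flags features as confirmed/rejected
keep <- getSelectedAttributes(sel)       # names of confirmed features

rf <- randomForest(x, y, ntree = 5000)   # 5000 trees, as stated above
auc(roc(y, rf$votes[, "exposure"]))      # AUC of the ROC from out-of-bag votes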
Statistical analyses
Unsupervised analysis of metabolomic and transcriptomic data to assess how GC exposure grouped the study subjects was performed using Orthogonal Projections to Latent Structures Discriminant Analysis in SIMCA 13.0 (Sartorius) or PLS-DA in the mixOmics package (Rohart et al., 2017) for R.
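As an illustration of the mixOmics route, a PLS-DA sketch on toy data (dimensions and group labels are assumptions):

# Supervised projection of 'omic data on GC exposure status (sketch).
library(mixOmics)

set.seed(1)
X <- matrix(rnorm(18 * 100), nrow = 18)            # 18 samples x 100 features
Y <- factor(rep(c("GC exposure", "GC withdrawal"), each = 9))

fit <- plsda(X, Y, ncomp = 2)                      # two latent components
plotIndiv(fit, group = Y, ellipse = TRUE)          # sample map colored by group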
For quantitative variables with normal distribution, we performed independent samples t-test. Mann-Whitney U-test was performed for non-normally distributed variables. Chi-squared test or Fisher's exact test, as appropriate, was used for categorical variables. Wilcoxon rank test was used for detecting differences between the two interventions in quantitative non-normally distributed variables. All statistical tests were two-sided, and p<0.05 was considered to be statistically significant. Further robustness for 'omic data analysis was provided by considering the findings as clusters of coexpressed findings (Cleary et al., 2017). Statistical analyses were performed using SPSS (Statistical Package for Social Science) program, version 24 software for Mac.
Acknowledgements

The authors thank the Array and Analysis Facility, Science for Life Laboratory at Uppsala Biomedical Center (BMC), Uppsala, Sweden; the Swedish Metabolomics Center in Umeå, Umeå, Sweden; Ruth Andrew and Natalie Homer at the Mass Spectrometry Core Laboratory, Clinical Research Facility, University of Edinburgh, Edinburgh, UK; and Peter Todd (Tajut Ltd., Kaiapoi, New Zealand) for third-party writing assistance in drafting this manuscript, for which he received financial compensation from ALF funding. The study was registered at ClinicalTrials.gov with identifier NCT02152553. The exploratory study and the analyses were supported by The Swedish Research Council (Projects 2015-02561 and 2019-01112) and The Swedish federal government under the LUA/ALF agreement (Project ALFGBG-719531). DC was supported by The Swedish Endocrinology Association and The Gothenburg Medical Society. BW was supported by the Wellcome Trust through an Investigator Award. RHS was supported by grants from The Medical Research Council (MR/K010271/1) and The Chief Scientist Office (SCAF/17/02). The rheumatoid arthritis study (replication study) was supported by The Eva Madura's Foundation, The Research Foundation of Copenhagen University Hospital, Rigshospitalet, and The Danish Rheumatism Association. The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Supplementary file 1j. Causal network analysis of the differential gene expression in PBMCs associated with glucocorticoid action.
For the miRNA analysis in plasma, the amplification efficiency was calculated using algorithms similar to the LinReg software. All assays were inspected for distinct melting curves, and the melting temperature (Tm) was checked to be within known specifications for the assay. Furthermore, assays had to be detected at least 5 Cq below the negative control and with Cq < 37 to be included in the data analysis. Data that did not pass these criteria was omitted from any further analysis. Cq was calculated as the second derivative. Using NormFinder, the best normalizer was found to be the average of assays detected in all samples; all data was therefore normalized to the average of assays detected in all samples (average-assay Cq).

Stable isotope internal standards: LC-MS internal standards 13C9-phenylalanine, 13C3-caffeine, D4-cholic acid, D8-arachidonic acid, and 13C9-caffeic acid were obtained from Sigma (St. Louis, MO). GC-MS internal standards L-proline-13C5, alpha-ketoglutarate-13C4, myristic acid-13C3, and cholesterol-D7 were obtained from Cil (Andover, MA); and succinic acid-D4, salicylic acid-D6, L-glutamic acid-13C5,15N, putrescine-D4, hexadecanoic acid-13C4, D-glucose-13C6, and D-sucrose-13C12 were obtained from Sigma.
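Returning to the average-assay Cq normalization described above, a minimal R sketch (toy Cq matrix; assays not detected in every sample are assumed to have been excluded beforehand):

# Normalize each assay's Cq to the per-sample mean of all detected assays.
set.seed(1)
cq <- matrix(rnorm(12 * 8, mean = 28, sd = 2), nrow = 12)  # 12 assays x 8 samples

avg_assay_cq <- colMeans(cq)               # average-assay Cq per sample
dCq <- sweep(cq, 2, avg_assay_cq, "-")     # delta-Cq relative to the normalizer

rel_expr <- 2^(-dCq)                       # relative expression on a linear scale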
Metabolic profiling of serum by GC-MS and LC-MS
Sample preparation was performed as previously described (Alwashih et al., 2017a). A randomized run order was designed in order to minimize systematic variations within individuals and between time points and treatments. The samples were analyzed according to the designed run order on both GC-MS and LC-MS; derivatization and GC-MS analysis were performed as described previously.
For the GC-MS data in serum, all non-processed MS files from the metabolic analysis were exported from the ChromaTOF software in NetCDF format to MATLAB R2016a (Mathworks, Natick, MA), where all data pre-treatment procedures, such as baseline correction, chromatogram alignment, data compression, and Multivariate Curve Resolution, were performed. The extracted mass spectra were identified by comparison of their retention indices and mass spectra with libraries of retention time indices and mass spectra (Schauer et al., 2005). Mass spectra and retention index comparison was performed using NIST MS 2.0 software. Annotation of mass spectra was based on reverse and forward searches in the library. Masses and ratios between masses indicative of a derivatized metabolite were specifically noted. If, according to the SMC's experience, the mass spectrum was with highest probability indicative of a metabolite, and the retention index difference between the sample and the library entry for the suggested metabolite was within ±5 (usually <3), the deconvoluted 'peak' was annotated as an identification of a metabolite.
For the metabolic profiling of serum by LC-MS, the sample was resuspended in 10 + 10 µL methanol and water. The set of samples was first analyzed in positive mode. After all samples had been analyzed, the instrument was switched to negative mode and a second injection of each sample was performed.
The chromatographic separation was performed on an Agilent 1290 Infinity UHPLC system (Agilent Technologies, Waldbronn, Germany). A sample (2 µL) was injected onto an Acquity UPLC HSS T3 C18 column (2.1 × 50 mm, 1.8 µm) in combination with a VanGuard precolumn (2.1 × 5 mm, 1.8 µm; Waters Corporation, Milford, MA) held at 40 °C. The gradient elution buffers were (A) H2O with 0.1% formic acid and (B) 75/25 acetonitrile:2-propanol with 0.1% formic acid, with the flow rate set at 0.5 mL/min. The compounds were eluted with a linear gradient of 0.1-10% B over 2 min; B was then increased to 99% over 5 min and held at 99% for 2 min; B was decreased to 0.1% over 0.3 min while the flow rate was increased to 0.8 mL/min for 0.5 min; these conditions were held for 0.9 min, after which the flow rate was reduced to 0.5 mL/min for 0.1 min before the next injection.
Potential association of vacuum cleaning frequency with an altered gut microbiota in pregnant women and their 2-year-old children
Westernized lifestyle and hygienic behavior have contributed to dramatic changes in the human-associated microbiota. This particularly relates to indoor activities such as house cleaning. We therefore investigated the associations between washing and vacuum cleaning frequency and the gut microbiota composition in a large longitudinal cohort of mothers and their children. The gut microbiota composition was determined using 16S ribosomal RNA (rRNA) gene Illumina deep sequencing. We found that a high vacuum cleaning frequency (about twice a week or more) was associated with an altered gut microbiota composition both during pregnancy and for 2-year-old children, while there were no associations with house washing frequency. In total, six Operational Taxonomic Units (OTUs) showed significant False Discovery Rate (FDR)-corrected associations with vacuum cleaning frequency for mothers (two positive and four negative) and five for 2-year-old children (four positive and one negative). For mothers and the 2-year-old children, OTUs among the dominant microbiota (average >5 %) showed correlations with vacuum cleaning frequency, with an increase in Faecalibacterium prausnitzii for mothers (p = 0.013, FDR corrected) and Blautia sp. for 2-year-old children (p = 0.012, FDR corrected). The bacteria showing significant associations are among the dominant gut microbiota, which may indicate indirect immunomodulation of the gut microbiota, possibly through increased allergen (dust mite) exposure as a potential mechanism. However, further exploration is needed to unveil mechanistic details.
Background
Hygienic behavior and a westernized lifestyle dramatically change the way we are exposed to bacteria from the environment [1]. One of several factors related to the change to a westernized lifestyle is increased indoor occupancy, with house washing and vacuum cleaning being the main hygienic activities. Cleaning activities may not only influence which bacteria we are exposed to but also how this exposure affects us. Vacuum cleaning has attracted particular attention with respect to increased exposure to allergens such as dust mites [2], and it has been shown that dust mite exposure has a potential impact on the immunological status of exposed subjects [3].
It has recently been established that the indoor environment microbiota is heavily associated with the families living there [4]. However, to our knowledge, no studies have yet addressed the association between indoor hygienic activities and the gut microbiota. This relation is important with respect to understanding the impact of the surrounding allergens on the gut microbiota [5].
The aim of our work was therefore to investigate the association between washing and vacuum cleaning and the gut microbiota for a large unselected cohort of mothers and their children. To study this, we reanalyzed a previously published longitudinal 16S ribosomal RNA (rRNA) gene mother-child gut microbiota dataset, in which we have shown major age-related changes in the gut microbiota composition [6]. In the current study, we included the additional metadata about house washing and vacuum cleaning. We have also included information about potential dietary confounding factors. We analyzed the associations with both alpha- and beta-diversity, in addition to using ANOVA-simultaneous component analysis (ASCA) [7] and Random Forest [8] to uncover potential complex metadata correlations in the longitudinal dataset.
We present results showing an association between vacuum cleaning and the gut microbiota both during pregnancy and in 2-year-old children.
Cohort description
The IMPACT (Immunology and Microbiology in Prevention of Allergy among Children in Trondheim) study is a controlled, non-randomized longitudinal study involving 720 groups of pregnant women and their children (up to 2 years of age). The majority of the children were vaginally delivered and at term (>90 %), with 97 % being breast-fed exclusively for the first 6 weeks of life [9]. Stool samples were collected during pregnancy and at 10 days, 120 days, 1 year, and 2 years, and stored in Cary-Blair transport medium at −80 °C.
In the current study, samples from a subgroup of mother-child pairs (n = 356) with information about house washing and vacuum cleaning were included (Additional file 1: Table S1). Overlapping microbiota data were available for pregnant women (n = 82) and for 10-day-old (n = 63), 2-month-old (n = 85), 1-year-old (n = 75), and 2-year-old (n = 68) children. We also included information about confounding dietary factors, namely the month when rice, corn, wheat, bread, cooked vegetables, raw vegetables, fruits, commercial pre-made dinner, homemade dinner, fish, milk, or eggs were introduced for the first time (Additional file 1: Table S1). The information was obtained through questionnaires, as previously described [9].
16S rRNA gene dataset
We reanalyzed previously generated 16S rRNA gene data [6]. These data were generated by PCR amplification using primers targeting universally conserved regions of the 16S rRNA gene flanking the variable regions V3 and V4 [10], with DNA isolated from mechanically lysed cells as template. Sequencing was done using the Illumina MiSeq platform with the V3 chemistry. The resulting data were analyzed using QIIME [11]. Sequences were quality filtered (split_libraries.py; sequence length between 200 bp and 1000 bp; minimum average quality score 25; not more than 6 ambiguous bases; no primer mismatch allowed) and then clustered at the 99 % homology level using closed-reference uclust search against the Greengenes database [12] (pick_closed_reference_otus.py). Finally, 4000 sequences per sample were randomly picked from the full dataset to unify the amount of information for each sample. The resulting Operational Taxonomic Unit (OTU) table contained 6920 OTUs for a total of 373 samples.
Data analyses
We used Simpson's index to investigate alpha-diversity and Variance Weighted Distance Between Cluster Centers (Ward's) based on a Euclidean distance matrix to determine beta-diversity. To uncover potential complex associations between metadata and the gut microbiota, ASCA [7] and Random Forest [8] analyses were used, with the OTU table as response and cleaning frequency (washing and vacuum cleaning frequency binarized with respect to the median) as predictor. The rationale for median binarization of the data is to increase the power of the analyses and to determine whether the overall associations of the microbiota with the predictor variables are statistically significant. To investigate the direct OTU associations with the predictor variables, we used Kruskal-Wallis non-parametric one-way analysis of variance, in addition to Spearman non-parametric correlations for dose-response analyses. We used the False Discovery Rate (FDR) to correct for multiple testing [13].
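A condensed R sketch of these analyses on a toy OTU table; the sample sizes, the frequency vector, and the use of ward.D2 as the Ward implementation are assumptions:

# Diversity and association analyses for the OTU table (toy data).
library(vegan)

set.seed(1)
otu <- matrix(rpois(30 * 50, lambda = 5), nrow = 30)     # 30 samples x 50 OTUs
vacuum_high <- factor(rep(c("low", "high"), each = 15))  # median-binarized predictor
freq <- sample(1:12, 30, replace = TRUE)                 # monthly frequency (toy)

alpha <- diversity(otu, index = "simpson")               # Simpson's index

# Ward clustering on Euclidean distances (beta-diversity clusters)
clusters <- cutree(hclust(dist(otu), method = "ward.D2"), k = 3)

# Per-OTU Kruskal-Wallis tests against binarized cleaning frequency, FDR-corrected
p_kw <- apply(otu, 2, function(x) kruskal.test(x ~ vacuum_high)$p.value)
fdr <- p.adjust(p_kw, method = "fdr")

# Dose-response: Spearman correlation with the raw monthly frequency
rho <- apply(otu, 2, function(x) cor(x, freq, method = "spearman"))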
Basic statistical analyses were done using Minitab 16 (Minitab Inc, USA), while multivariate analyses were done using the PLS Toolbox (Eigenvector Inc, USA) plugin in the MATLAB® R2014a (Mathworks, USA) environment. For phylogenetic visualization, we used Itol (itol.embl.de).
Cleaning
In the IMPACT study, there were 358 mother-child pairs with complete information about monthly cleaning and vacuum cleaning frequency. The cleaning frequencies were generally stable throughout the period investigated, from late pregnancy until the child was 2 years old (Additional file 1: Table S1), with an average frequency of 2.9 washings and 6.6 vacuum cleanings per month. We found a slight positive correlation between washing and vacuum cleaning frequencies (R² = 0.13, p < 0.001, Pearson), while there were very minor or no correlations between cleaning and the introduction of rice, corn, wheat, bread, cooked vegetables, raw vegetables, fruits, commercial pre-made dinner, homemade dinner, fish, milk, or eggs into the infants' diet (R² < 0.01, p > 0.05, Pearson).
Association between cleaning frequency and the gut microbiota
We found no significant associations for alpha-diversity for any of the age categories, but for beta-diversity, we found a significant association. Based on the microbiota composition, the mothers clustered into three distinct clusters (Clusters 1 to 3; Fig. 1), where Cluster 1 showed an association with high vacuum cleaning frequency and Cluster 3 with low frequency (p < 0.0005, likelihood ratio chi-square test).
For the compositional association between the microbiota and cleaning frequencies by ASCA, we found no significant associations for washing, while we detected significant associations for vacuum cleaning at pregnancy and for 2-year-old children (Figs. 2a and 3a, respectively). However, only a few OTUs were important for these associations (Figs. 2b and 3b, respectively). Random Forest revealed a significant discrimination between high and low vacuum cleaning frequency only for mothers (p = 0.007, Kruskal-Wallis test), while for the 2-year-olds this discrimination was at the border of significance (p = 0.058, Kruskal-Wallis test). As with ASCA, only a few OTUs were influential in the models (Additional file 2: Figure S1).
At pregnancy, OTU851141, related to Faecalibacterium prausnitzii, showed the strongest positive influence on the ASCA model for vacuum cleaning. This OTU also showed a significant direct association with vacuuming (median 6.3 % (high) vs 1.8 % (low), p = 0.006, Kruskal-Wallis test). OTU567381, related to Roseburia faecis, was identified as the most influential for Random Forest. This OTU also showed a significant direct association with vacuum cleaning frequency (median 1.2 % (high) vs 0.33 % (low), p = 0.003, Kruskal-Wallis test). As for the negative associations during pregnancy, only OTU584375, related to Bifidobacterium adolescentis, was detected as influential by ASCA, with a median of 2.7 % for low vacuum cleaning and 1.3 % for high vacuum cleaning frequency (p = 0.034, Kruskal-Wallis test). For the 2-year-olds, OTU1104433, classified as Blautia sp., was the most influential as determined by ASCA (median 4.9 % vs 2.8 % for high and low vacuum cleaning frequency, respectively; p = 0.015, Kruskal-Wallis test), while OTU844941, related to Oscillospira sp., was identified by Random Forest (median 0.23 % vs 0.05 % for high and low vacuum cleaning frequency, respectively).

To determine potential quantitative associations, we investigated the OTUs directly correlated with vacuum cleaning after FDR correction for mothers and 2-year-old children using Spearman non-parametric correlation. We identified the same positively associated OTUs as identified by ASCA and Random Forest, while the negatively associated OTU identified by ASCA did not show significance. In addition to the ASCA- and Random Forest-identified OTUs, a set of low-abundance OTUs was also identified in the Spearman correlations (Table 1).

Fig. 1 Association between beta-diversity and vacuum cleaning for pregnant mothers. The color code represents the three main clusters detected by Ward's analyses based on Euclidean distances for the OTU abundance data.
Discussion
Mothers clustered into three distinct clusters with respect to beta-diversity, with one of these clusters associated with high vacuum cleaning frequency, while another was associated with low frequency. It could therefore be that household environment and hygienic behavior are potential contributing factors to the overall clustering pattern observed for the adult gut microbiota [14].
Vacuum cleaning can lead to increased allergen exposure through dust mites [2], and it has recently been shown that dust mite-associated Toll-like receptor 4 (TLR4) signaling has an important role in inflammation in the airways [3]. Therefore, it is plausible that immunological signaling in the airway mucosa could affect the gut mucosa through a common signaling system [15] and consequently the associated gut microbiota. In concurrence, we identified correlations for vacuum cleaning and not for house washing, which supports a potential importance of airway exposure for gut microbiota modulation.
For mothers, we found the largest increase of F. prausnitzii at high vacuum cleaning frequency. This bacterium is anti-inflammatory [16], harvesting energy through extracellular electron transport [17]. Therefore, inflammation-induced reactive oxygen could be an energy source for F. prausnitzii, leading to an expansion of the population, while its anti-inflammatory properties could potentially counteract an inflammatory response. We also identified a relatively large decrease in B. adolescentis, which is in line with its previously observed negative association with TLR4 induction [18].
There was no overlap in the OTUs associated with vacuum cleaning for mothers and the 2-year-old children. A potential explanation could be that the associations for the 2-year-old children are due to vacuum cleaning-associated immunological/microbiota differences in mothers during pregnancy [19]. Blautia sp. showed the most pronounced positive association with vacuum cleaning for the 2-year-old children, while a negative association was detected for the mothers. This bacterial group shows a high degree of host specificity [20,21] and is one of the main acetogens in the gut, harvesting energy by assimilation of carbon dioxide and hydrogen [22]. Still, however, our knowledge is too limited to mechanistically link Blautia to vacuum cleaning.
The association between microbiota and vacuum cleaning could also be confounded by unknown factors. A factor not included here is the use of High-Efficiency Particulate Arrestance (HEPA) filters in vacuum cleaners and differences between vacuum cleaners with respect to particle release. Whether or not the level of particle release is confounded with vacuum cleaning frequency remains unknown. For the dietary factors measured, there were no or only minor associations with vacuum cleaning. Therefore, these are unlikely confounders with respect to the observed association between the gut microbiota and vacuum cleaning. However, there may of course be other confounding factors not covered here.
ASCA and Random Forest identified different parts of the microbiota, with ASCA identifying the most dominant and Random Forest the most discriminative. Since ASCA is a generalization of ANOVA from univariate to multivariate data [7], it is expected that this approach will identify dominant OTUs. Random Forest, on the other hand, is a machine-learning approach aimed at identifying any associations between predictor and response variables [23]. This is probably the reason why Random Forest was more sensitive to low-abundance OTUs. Therefore, ASCA and Random Forest seem complementary in relating the microbiota to environmental factors through both the dominant and non-dominant parts of the microbiota.
Conclusions
In conclusion, our data point toward the possibility of gut microbiota modulation through airway allergen exposure. This could add another facet to the complexity of human gut microbiota interactions with the host.
Additional files
Additional file 1: Table S1. Metadata for cleaning and diet. The table shows detailed metadata for cleaning and diet. The data are self-reported.
Ultrastructure of the Adhesion of Bacteria to the Epithelial Cell Membrane of Three-Day Postnatal Rat Tongue Mucosa : A Transmission and High-Resolution Scanning Electron Microscopic Study
Tongue mucosa surfaces of 3-day postnatal rats were examined under transmission electron microscopy (TEM) and high-resolution scanning electron microscopy (HRSEM). For HRSEM analysis, the specimens were fixed in the same solution for 24 h, postfixed in 2% osmium tetroxide, critical-point dried, and coated with platinum-palladium. For TEM analysis, the specimens were fixed using modified Karnovsky solution and embedded in Spurr resin. The results revealed the presence of numerous microplicae on the membrane surface of keratinized epithelial cells, to which groups of bacteria were attached. These bacteria were staphylococci and cocci organized either in rows or at random, which were visualized in three-dimensional HRSEM images. At high magnification, the TEM images revealed the adhesion of bacteria to the cell membrane through numerous filamentous structures comprising the glycocalyx. The fine fibrillar structures rising from each bacterium and from the cell membrane were clearly seen. These characteristics of bacterial structure may be useful for the future control or prevention of bacterial diseases and for the installation of the oral native flora.
INTRODUCTION
The oral cavity is one of the regular sites for bacterial adhesion, even before tooth eruption when the number of sites for colonization increases significantly.
In rats, incisors erupt on the 6th or 7th postnatal day and, after that, the teeth and periodontal sulci are the preferred sites for bacterial colonization causing periodontal diseases and caries in several species.
Before tooth eruption, the mucosal surfaces are the only available areas for colonization, and the epithelial cells of the oral mucosa have been described by several authors (1-4) using scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Bacterial adhesion to the cell membrane has been extensively reported (5-11). After adhesion, oral epithelial cells may play an important role in the host immune response towards infection because the epithelial cell membrane invaginates to engulf the bacteria, forming a phagocytic cup and internalizing the bacteria (12).
This study demonstrates the presence of groups of bacteria adhered to epithelial cell membranes of the tongue mucosa of young rats using high-resolution SEM (HRSEM) and TEM. This may play a key role in native flora formation and future bacterial disease prevention.
MATERIAL AND METHODS
Ten 3-day postnatal Wistar rats were fixed by immersion in modified Karnovsky fixative solution containing 2.5% glutaraldehyde and 2% paraformaldehyde in 0.1 M (pH 7.3) sodium cacodylate buffer. For HRSEM analysis, the specimens were fixed in the same fixative solution for 12 h at 4 °C, postfixed in 2% buffered osmium tetroxide solution, rinsed in distilled water for at least 3 h, and immersed in 2% tannic acid aqueous solution for 1 h at room temperature. The samples were rinsed in distilled water for 5 h and postfixed with 2% osmium tetroxide solution. They were dehydrated in series of ethanol and tert-butyl alcohol, freeze-dried in an Eiko ID-2 apparatus, mounted on a metal lamina, and coated with platinum-palladium in a BIO-RAD SEM coating system (Microscience Division, Tokyo, Japan). The samples were examined in a high-resolution scanning electron microscope (Hitachi S-900; Hitachi, Tokyo, Japan).
For TEM analysis, the specimens were fixed in the same solution, postfixed in 1% osmium tetroxide solution for 12 h at 4 °C, dehydrated in an increasing series of ethanol and propylene oxide, and embedded in Spurr resin, according to Watanabe et al. (13). Thick sections were obtained using glass knives, stained with toluidine blue solution, and examined under light microscopy. Ultrathin sections were obtained in an ultramicrotome with a diamond knife (Ultracut; Reichert, Vienna, Austria) and collected on 200-mesh grids with Formvar film. Grids were counterstained with uranyl acetate and lead citrate and examined with a transmission electron microscope at 80 kV (JEOL 1010; JEOL, Tokyo, Japan).
RESULTS
The samples of 3-day postnatal rat tongue mucosa examined under HRSEM showed the presence of numerous microplicae on the epithelial cell membrane surface (Fig. 1A). The shape of the spaces between plicae varied, being elongated or circular.
Several groups of bacteria attached to the surface of the keratinized epithelium were detected in three-dimensional images. Groups of cocci and staphylococci were located in several regions of the filiform and fungiform papillae, with the cocci distributed at random (Fig. 1B) and the staphylococci in rows (Fig. 1C). The staphylococci are attached to the epithelial cell membrane, showing several rows and small particles in the three-dimensional HRSEM images (Fig. 1C).
TEM images of the rat tongue epithelial cell surface revealed numerous bacteria attached to the cell membrane (Figs. 1D and 1E). Fine filamentous structures containing glycocalyx mediated the adhesion between the bacterial surfaces and the surface of the epithelial cells. At high magnification, TEM images revealed a meshwork of fine fibrillar material around the surface of the bacteria, as noted in Figure 1E.
DISCUSSION
The results revealed that the filiform and fungiform papillae of 3-day postnatal rat tongue mucosa presented numerous microplicae of different forms in three-dimensional HRSEM images. The epithelial cell membranes showed microplicae similar to those demonstrated in previous studies (1,4). In addition, the findings of the present study confirm that the rat tongue mucosa epithelial cell surface presents bacterial groups attached to cell membranes. These characteristics were also reported by Brady et al. (6), who revealed bacteria in the filiform papillae of adult rat tongue mucosa.
In 3-day postnatal rats, attachment of microorganisms was observed only on the outer surface of keratinized epithelial cells. However, Brady et al. (6) emphasized that in adult rats, bacteria penetrate to the 3rd or 4th epithelial cell layer. Penetration of microorganisms depends on the location and epithelial cell features. Location is important because there are several mechanical conditions that may detach the bacteria, e.g., mastication, salivary flow, and breathing. Epithelial cell features are important because they express receptors that are adhesion sites for specific bacterial adhesins. Moreover, the epithelial cell membrane invaginates to engulf the bacteria, forming a phagocytic cup and internalizing them (12). It is inferred that more time is required for bacteria to reach deeper layers in the epithelium.
In this study, the rat tongue mucosa presented microorganisms attached to the epithelial cell surface, usually in coccal form, as reported elsewhere (7), or as staphylococci. Bacterial characteristics, e.g., being gram-positive or gram-negative, could not be determined in the SEM study. The attachment of microorganisms to the epithelial cell surface occurred through an interaction between fibrillar substance and epithelial cell membranes, as demonstrated by the TEM images and by Vitkov (14). However, streptococcal adhesion to cells may occur through a complex structure formed by proteins (5). The results of the present study showed that microorganisms are attached to the epithelial cell membranes of the filiform and fungiform papillae through numerous glycocalyx-like fibrillar structures. Similar findings in rat tongue mucosa have been reported.
Tokunaga et al. (11) and Howlett and Squier (15) have presented ultrastructural findings on Candida albicans adhesion, reporting the mechanisms of interaction between this yeast and the epithelial cell surface. However, there is evidence that the cell wall is the most important site for Candida adhesion (10,16). Salivary flow rate and salivary neutrophil function have also been described as factors that increase the incidence of Candida in aged people (17).
The outcomes of this study confirmed the existence of a complex network of filamentous materials between bacterial surfaces and epithelial cell membranes, clearly noted by TEM and in three-dimensional HRSEM images. These characteristics are important for the examination of the molecular structure of adhesins and may be used for the future control or prevention of bacterial diseases and for the establishment of the native oral flora.
Figure 1. Panel of high resolution scanning electron microscopy (HRSEM) and transmission electron microscopy (TEM) images. A = HRSEM image of rat filiform papilla surface; the epithelial cell membrane shows numerous microplicae of different forms (×37,000). B = HRSEM image of filiform papilla showing numerous bacteria located at random (×60,000). C = HRSEM image of staphylococci arranged in several rows in three-dimensional HRSEM images (×17,000). D = TEM image of keratinized epithelial cell membranes revealing the adherence of bacteria (×35,000). E = TEM image of the epithelial cell surface showing round bacteria attached by filamentous material (arrows), glycocalyx (×56,000).
Theoretical limitations to the determination of bandwidth and electron mass renormalization: the case of ferromagnetic iron
Recent experimental advances have allowed electronic band structures to be investigated by angle-resolved photoemission in considerably more detail. A recent study of ferromagnetic iron finds the occupied bandwidth, for the two shallow bands observed, reduced by ∼30% as compared to the calculated ground state, rendering bcc iron comparable with the strongly correlated transition metal Ni. Fermi velocities were reported to deviate from the ground state and these deviations have been assigned entirely to electron correlation. We show that spin–orbit splitting, final-state transitions and final-state broadening significantly change the band dispersion as measured by a modern energy analyzer, and a simple model that accounts for their effects is introduced. Applying our model, we find for the occupied bandwidth a narrowing of the order of only 10% in agreement with the literature. Substantial renormalization of the Fermi velocities is confirmed but a significantly smaller fraction of it is attributed to correlation effects, namely many-electron interactions.
Introduction
Angle-resolved photoemission spectroscopy (ARPES) is the major experimental method to investigate the electronic structure of solids (Hüfner 1995, Kevan 1992). It directly probes the electron-wave-vector dependence of occupied electronic states. The photoemission spectrum is modeled by a spectral function A(k, E) with electron wave vector k and binding energy E which includes electron-electron interactions by a complex self-energy (Braun 1996, Kevan 1992, Pendry 1976). Band structures of metals are often successfully calculated by density-functional theory (DFT) without self-energy corrections. For certain solids such as the noble metal Cu, the one-electron band-structure picture of DFT has been found to agree very well with the E(k) dispersion obtained by ARPES (Thiry 1981, Thiry et al 1979). However, for its neighbor in the periodic system, the 3d transition metal Ni, the strong effects of electron correlation on core-level (Hüfner and Wertheim 1975) and valence-band spectra (Guillot et al 1977) have been established. The occupied 3d bandwidth is reduced by ∼30% with respect to density-functional calculations, and the ferromagnetic exchange splitting by ∼50% (Plummer 1980, Himpsel et al 1979). It is believed that the localized character of the 3d orbital leads to enhanced electron correlation and that this causes the deviations from the band-structure picture. The deviation between DFT in the local-density approximation (LDA) and ARPES can therefore be considered a measure of electron correlation. The enhanced electron correlation in the 3d orbital is also the cause of ferromagnetic and anti-ferromagnetic interactions in transition metals. For these reasons, the question posed for the other magnetic 3d transition metals is: how well does the band dispersion calculated for the ground state by the LDA agree with the one obtained by ARPES? For Cr, Fe, Co and Ni, detailed band dispersions have been measured and, with the exception of Ni, good agreement in binding energies within 10% has been found (Rader and Gudat 1999). For Fe, in particular, an evaluation of the energy range from 5 eV below the Fermi energy (E_F) to 2 eV above E_F gave a compression of the bands by 10% (Santoni and Himpsel 1991). Thus, solely based on bandwidth arguments, Ni is strongly correlated, whereas Fe is not. On the other hand, the photoemission peaks show a large linear broadening (∼60% of the binding energy) in Fe (Santoni and Himpsel 1991). The strong broadening was predicted to be accompanied by a strong loss in spectral weight (Katsnelson and Lichtenstein 1999) and, for Co, this effect was confirmed and studied in detail. The majority-spin bands lose intensity for binding energies larger than 2 eV (Monastra et al 2002) and are difficult to detect even in spin-resolved spectroscopy (Alkemper et al 1994). Many-body calculations identified this as a correlation effect with increased Coulomb correlation energy, U, leading to a stronger damping (Monastra et al 2002). An important sign of strong electron correlation in Ni is the appearance of photoemission satellites in the core-level (Hüfner and Wertheim 1975) and valence-band spectra (Guillot et al 1977, Höchst et al 1977). For Fe, a satellite at 7 eV was predicted by a many-body calculation for a relatively high value of U of 2.3 eV but is not observed in experiment (Grechnev et al 2007).
In resonant photoemission at the Fe L-edge, a satellite was observed at a binding energy of ∼3 eV (Hüfner et al 2000) but it has not yet been established why, unlike in Ni, it is not observed at low photon energies at or below the M-edge.
Recent experimental advances in ARPES have allowed energy resolutions of a few meV and angle resolutions around 0.1° to be obtained. This has led to renewed interest in the question of electron correlation effects on the E(k) band dispersion of the magnetic transition metals. The high angle and thus k_∥ resolution (where k_∥ denotes the projection of k onto the surface plane of the sample) and especially the simultaneous detection of a range of k_∥ values allow for precise measurement of the electron binding energy, velocity and effective mass. In a study of surface states of Fe(110), a considerable renormalization of the electron velocity, i.e. a deviation from the LDA value, with a kink on the energy scale of spin-wave excitations (∼125 meV), has been observed (Schäfer et al 2004). The above seminal work on the near-surface electronic structure was recently followed by studies on bulk states of Fe, and indications of a strong electron correlation were observed there as well (Schäfer et al 2005, Cui et al 2006). Bandwidth reductions of 27% for the minority-spin states of Fe, and of 33% for majority spin, were obtained (Schäfer et al 2005). Because of the simultaneous measurement of a k_∥ range, modern energy analyzers are particularly suited for measuring the Fermi velocity v_F. In the recent experiment, large mass renormalizations of v_F up to a factor of 3.5 or more were obtained and assigned to strong electron correlation (Schäfer et al 2005). We will show that other factors contribute to the observed deviations, and that inclusion of various aspects of the photoemission process and consideration of correlation effects, via the imaginary component of the self-energy, in the theoretical description lead to an improved modeling of the experimentally observed spectra and to lower bandwidth and mass renormalization values.
The present paper reviews the determination of bandwidth and mass renormalization including all relevant effects in the LDA calculation and the photoemission process. We show how neglect of each of these effects results in a different and hence model-dependent renormalization value. One-step photoemission calculations of Fe (Redinger et al 1988) could also be used to address these questions but, to our knowledge, this has not been reported as yet. The advantage of our treatment is that of conceptual and computational simplicity. We examine the limitations of the recently published analysis (Schäfer et al 2005) and demonstrate that the use of a more complete model, including among others final-state photoemission effects, leads to a drastic decrease in the bandwidth renormalization and a significant decrease in the mass renormalization observed. All published values are re-examined and the new mass-renormalization values show no relation between Fermi-sheet size and high electron correlation, or any relation to the proximity of smaller sheets of opposite spin, as speculated previously (Schäfer et al 2005).
Previous bandwidth and mass renormalization studies
A study of Fe was undertaken (Turner et al 1984) in which ARPES measurements were compared with three-step photoemission calculations. The theoretical model for the initial states was the DFT in the local spin density approximation (LSDA) published for Fe previously (Callaway and Wang 1977). A free electron final state (FEFS) approximation was used to determine the expected transitions and a correlation potential was used in the calculations to obtain agreement between the theoretical and the experimental band dispersion. Only a small correlation potential, as compared with that from Ni investigations, was required to correctly describe the bands to within experimental error.
ARPES data on Fe with much higher energy and angle resolution than those measured 25 years ago (Turner et al 1984) have been presented in a recent publication (Schäfer et al 2005). Comparison of the higher resolution data, however, is made with the calculated initial state only. Calculation of the initial states was performed using the augmented-plane-wave-plus-local-orbitals (APW + LO) potentials, and scalar relativistic effects were included but spin-orbit coupling effects were neglected. Fermi vectors and Fermi velocities are derived from the initial states and compared with experimental results, with the Fermi velocity ratio (theoretical velocity over experimental velocity) used to determine the mass renormalization. A second aspect of comparison between theory and experiment employed is the bandwidth renormalization, calculated as the ratio of theoretical over experimental band extrema at critical points. The mass renormalization values obtained (Schäfer et al 2005) lie in the range of 1.5-3.6, with bandwidth renormalization values of 1.4 and 1.5 at the Γ and P points, respectively. From the obtained renormalization values, a non-uniform scattering mechanism is speculated for the electron correlation, with an increase in the scattering caused by proximity to a smaller Fermi-surface sheet of opposite spin. Moreover, larger Fermi-surface sheets were found to lead to higher mass renormalization (Schäfer et al 2005).
Recently, a theoretical investigation of the correlation effects in Fe, Ni and Co was performed (Grechnev et al 2007). In this study, LDA calculations were used in conjunction with the dynamical mean field theory (DMFT) formalism, and the resulting electron self-energy, density of states and spectral density were discussed. The correlation parameters, U and J, were found to be substantial in Fe and Co (2.3 and 0.9 eV) but smaller than those in Ni (3 and 1 eV). In particular, U is so large that a correlation satellite at ∼7 eV was predicted, which is, however, not observed in experiment. For binding energies in the range 0-1 eV, consideration of figures 3, 17 and 18 (Grechnev et al 2007) appears to indicate a real component of the self-energy varying from 0 to 1 eV in Fe. In particular, at the expected critical point energies at Γ (minority) and P (majority), this corresponds to a bandwidth renormalization of about 50%. This is higher than that observed from comparison between the experiment and initial states alone (Schäfer et al 2005) and is even higher than that previously established for Ni (∼30%).
Spin-orbit splitting of initial states and inclusion of transitions to the final state
The present modeling employs the same APW + LO initial state calculations, including scalar relativistic effects, as used before (Schäfer et al 2005). These calculations were performed using the WIEN2k computer code (Blaha et al 2001). It is expected that spin-orbit coupling affects the valence bands of Fe, and a spin-orbit splitting of 110 meV has been observed in ARPES (Sakisaka et al 1990). The present calculations use the second variational method to approximate spin-orbit coupling. This accounts for spin-orbit splitting and changes of the band topology, since bands of like symmetry do not cross. As these two calculations only consider the initial states, they will be labeled the initial state model (ISM) and the initial state model plus spin-orbit splitting (ISM + SO), respectively. The ISM calculations of Schäfer et al (2005) and the ISM + SO are shown in figures 1(a) and (b), respectively. Differences are observed in bands II, III and IV, where the spin-orbit interactions cause extra hybridization with subtle changes of shape and a separation of bands III and IV; a comparison of the critical point energies indicates significant differences. Fermi vectors, Fermi velocities and mass renormalization values for the two models are included in table 3 and will be discussed in section 5.
Photoemission experiments probe the transitions from the initial states to a final state above the Fermi energy. Our next step in developing an appropriate model is to include these transitions in the calculation. We will use the FEFS approximation, as was done by Turner et al (1984). The FEFS approximation is widely used in photoemission experiments and assumes the final state to be a free-electron parabola of the form E_f(k) = ℏ²k²/(2m) − V_0 (equation (1)). The inner potential, V_0, was determined from experimental ARPES as 14.8 eV (Schäfer et al 2005). Transitions from the ISM + SO initial states to the FEFS via the photon energy, ℏω, will be considered in the direct transition model (DTM). According to equation (1), the deviation of the probed k-point from H is 0.4 Å⁻¹ or 18% of the distance from Γ to H. This causes significant energy shifts. For example, close to the Fermi energy a significant change in bands II, III and IV occurs at H, with similar changes for several other bands at higher binding energies. At Γ there is no appreciable change, indicating that the FEFS at the chosen photon energy of 139 eV intersects the Γ point as indicated previously (Schäfer et al 2005). Determination of the allowed transitions between the initial and final states does not exhaust the theoretical process of ARPES; in addition, broadening of these transitions occurs. A procedure to include this broadening in the current calculations is presented in section 4.
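To make the FEFS bookkeeping above concrete, the following minimal sketch (our own illustration, not code from the analysis itself) evaluates the free-electron final-state wave vector for a given photon energy and the inner potential V_0 = 14.8 eV quoted above; the work function value and all function names are assumptions introduced here.

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # J per eV

def fefs_k(hnu_eV, binding_eV, V0_eV=14.8, work_fn_eV=4.5):
    """Free-electron final-state wave vector in 1/Angstrom (normal emission).

    Kinetic energy in vacuum is E_kin = hnu - phi - E_B; adding the inner
    potential V0 references the parabola E_f = hbar^2 k^2 / 2m - V0 to the
    crystal. The work function phi = 4.5 eV is an assumed generic value.
    """
    e_kin = hnu_eV - work_fn_eV - binding_eV
    k = np.sqrt(2.0 * M_E * (e_kin + V0_eV) * EV) / HBAR  # in 1/m
    return k * 1e-10                                      # to 1/Angstrom

# Photon energy used in the experiment discussed above:
print(f"k at 139 eV, E_B = 0: {fefs_k(139.0, 0.0):.2f} 1/Angstrom")
```

Scanning the photon energy in such a sketch moves the probed k-point along the surface normal, which is the origin of the deviation from H discussed above.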
Broadening of the electron transitions
In previous analyses of ARPES data of Fe (Fedorov et al 2002, Schäfer et al 2004, Turner et al 1984, Yamasaki and Fujiwara 2003), the effect of broadening has, to our knowledge, not been considered. The broadening has been investigated via a one-step model calculation (Redinger et al 1988) but was limited to low photon energies and normal emission; in contrast, the current calculations were performed for high photon energies (139 eV) and for a range of emission angles. Note that the effect of broadening processes on the Fermi maps and band maps is not just to smear out the observed bands but also to allow the bands close to the intersection with the FEFS to be observed. Two distinct broadening processes have been described previously (Strocov 2003): broadening of the initial state caused by the finite hole lifetime, τ_h, and broadening of the final state wavefunction due to inelastic absorption and elastic reflection from the crystal potential. The final state spectral function, A_final(k_⊥), is characterized (Strocov 2003) by a Lorentzian distribution in k_⊥ centered on the real component of k_⊥ (k⁰_⊥) and with a width, δk_⊥, related to the inelastic electron mean free path, λ, via the relation δk_⊥ = 2 Im(k_⊥) = 1/λ. The finite-hole-lifetime initial state spectral function, A_initial(k_⊥), is characterized by a Lorentzian distribution in energy centered on the band energy and with a full width, δE, given by δE = ℏ/τ_h. This broadening is a result of correlation and is generally attributed to the imaginary component of the self-energy (Grechnev et al 2007, Monastra et al 2002) and, in conjunction with the real component of the self-energy, has in particular been shown to increase the agreement between experiment and theory in Co (Grechnev et al 2007, Monastra et al 2002). The photoemission current, I(E_final, E_initial), is then proportional to the overlap of the two broadened spectral functions (Strocov 2003), weighted by |T_final|, the final state surface transmission factor, and |M_fi(k_⊥)|, the photoexcitation matrix element. In the calculation of the spectral function below, these two terms are set to unity. The values for δk_⊥ and δE have been determined by comparison with experiment. Figure 2 shows an example calculation that compares allowed transitions, with and without broadening, and the initial state calculations along the [1-11] direction of (110) BCC Fe (figure 10; Schäfer et al 2005). The diffuse intensity (yellow) in figure 2 is a calculation according to our broadened transition model (BTM). It is in excellent agreement with the experimental data. From such comparisons, δk_⊥ was found to be 0.3 Å⁻¹. This result is consistent with the electron inelastic mean free path λ (Tanuma et al 1991, Werner et al 2000). From a linear interpolation between the values for 85 and 150 eV (Tanuma et al 1991), we obtain λ ≈ 5 Å for Fe, or δk_⊥ ≈ 0.2 Å⁻¹. Momentum broadening in the band-structure regime is frequently represented as a fraction of the Brillouin zone width k^BZ_⊥, with values lying in the range of 10% (Paggel et al 2000) to 20%. For (110) BCC Fe with k^BZ_⊥ = 1.56 Å⁻¹, we obtain δk_⊥ ≈ 0.2|k^BZ_⊥| = 0.31 Å⁻¹ (20%). We find that the δk_⊥ broadening dominates over the effect of δE. Fermi-liquid theory predicts a quadratic dependence of δE on the binding energy E_B, but only a linear broadening with δE ≈ 0.6|E_B| was extracted from direct and inverse photoemission data of Fe along the surface normal (Santoni and Himpsel 1991).
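As a rough illustration of how the two Lorentzian broadenings enter the BTM, the sketch below convolves a toy one-dimensional band with a final-state Lorentzian in k_⊥ (width δk_⊥ = 0.3 Å⁻¹, as determined above) and an initial-state Lorentzian in energy. The cosine dispersion, grids and default energy width are our own assumptions, and the matrix element and transmission factor are set to unity, as in the text.

```python
import numpy as np

def lorentzian(x, x0, fwhm):
    """Area-normalized Lorentzian of full width at half maximum fwhm."""
    g = fwhm / 2.0
    return g / np.pi / ((x - x0) ** 2 + g ** 2)

def broadened_intensity(e_grid, k_perp_grid, band, k0_perp,
                        dk_perp=0.3, de_of_e=lambda e: 0.1):
    """Photocurrent I(E) at fixed k_parallel, in the spirit of the BTM.

    Integrates A_final(k_perp), a Lorentzian of width dk_perp centred on
    the free-electron value k0_perp, against A_initial(E), a Lorentzian in
    energy of width de_of_e(E_band) centred on the band energy band(k_perp).
    `band` is a toy dispersion supplied by the caller, not the Fe bands.
    """
    a_final = lorentzian(k_perp_grid, k0_perp, dk_perp)
    dk = k_perp_grid[1] - k_perp_grid[0]
    intensity = np.zeros_like(e_grid)
    for kp, w in zip(k_perp_grid, a_final):
        e_b = band(kp)
        intensity += w * lorentzian(e_grid, e_b, de_of_e(e_b))
    return intensity * dk   # arbitrary units

# Toy cosine band along k_perp (illustrative only):
band = lambda kp: -1.0 + 0.8 * np.cos(2.0 * kp)
e = np.linspace(-2.5, 0.5, 600)          # binding energy grid (eV)
kp = np.linspace(0.0, 3.0, 400)          # k_perp grid (1/Angstrom)
i_e = broadened_intensity(e, kp, band, k0_perp=1.5)
print(f"peak of broadened EDC at E = {e[np.argmax(i_e)]:.3f} eV")
```

The point of the exercise is that the k_⊥ integration lets states away from the nominal FEFS intersection contribute, shifting apparent band positions exactly as described above.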
Most recently, high-resolution low-temperature (10 K) data on bulk states of Fe(110) allow the extraction of a quadratic dependence in the vicinity of E_F (Cui et al 2006, 2007). The current model employs a quadratic dependence of δE on E_B close to the Fermi energy with a smooth transition to a constant δE at higher binding energy. We use the fit applied to the two majority- and minority-spin bulk bands (Cui et al 2006, 2007), which is described by δE = 3.47 E_B² (eV)⁻¹. In order to stay within the known broadening (Santoni and Himpsel 1991), we use the quadratic behavior down to E_B = 0.34 eV and a constant value (0.4 eV) in the energy range 0.34 eV < E_B < 1 eV. This behavior is similar to the imaginary component of the self-energy in Fe calculated using DMFT (Grechnev et al 2007). Table 1 presents the calculated BTM Fermi vectors and Fermi velocities obtained using this parameterization for δk_⊥ and δE ('BTM var. E broad' in table 1). Our results do not depend greatly on the choice of δE = δE(E_B). In fact, even the choice of a constant δE = 0.4 eV up to E_F does not change the results for the mass renormalization appreciably ('BTM const. E broad' in table 1). Overlaid on figure 2 are the DTM calculated bands for majority spin (red solid lines) and minority spin (blue dashed lines). A comparison of the DTM and the BTM shows how strongly the broadening affects the band dispersion. For the two bands that cross the Fermi surface (bands I and VI) there are significant differences in the Fermi wave vectors and velocities, which affect the bandwidth and mass renormalization values.
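The piecewise δE(E_B) parameterization just described is simple enough to state as a few lines of code. The sketch below is ours; the behaviour above E_B = 1 eV, which the text does not specify, is simply left at the constant value.

```python
def delta_e(e_b):
    """Hole-lifetime broadening (eV) vs binding energy, as parameterized above.

    Quadratic, delta_E = 3.47 * E_B**2 (eV^-1), below E_B = 0.34 eV, then held
    constant at 0.4 eV for larger binding energies; note 3.47 * 0.34**2 is
    approximately 0.40 eV, so the two pieces join smoothly.
    """
    e_b = abs(e_b)
    return 3.47 * e_b ** 2 if e_b <= 0.34 else 0.4

for eb in (0.05, 0.20, 0.34, 0.70):
    print(f"E_B = {eb:4.2f} eV  ->  delta_E = {delta_e(eb):.3f} eV")
```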
Experimental comparison: bandwidth and mass renormalization
Bandwidth and mass renormalization values for all the transition models discussed above will be compared to show the effect of the choice of model on the data analysis. Only two occupied bandwidths are observed in the experimental data: the minority-spin band VI at Γ and the majority-spin bands III-II at P. The values of the critical point energy, bandwidth renormalization (ratio of the theoretical to the experimental critical point energy) and bandwidth reduction (percentage difference in critical energy) are shown in table 2. Figure 3 demonstrates that the bandwidth renormalization and bandwidth reduction are significantly reduced as the complexity of the model is increased. The BTM gives bandwidth renormalization (bandwidth reduction) values of 1.01 (1%) at Γ and 1.14 (12%) at P, significantly less than in the previous ISM analysis (Schäfer et al 2005). This is in contradiction to the significantly higher values (∼50%) obtained via LDA + DMFT calculations (Grechnev et al 2007), which are also significantly larger than those determined by comparison of experimental data with the initial states alone (Schäfer et al 2005). The small range of binding energy and a lack of critical energies observed in the experimental data make a detailed discussion of this discrepancy impossible. Table 3 lists the Fermi wave vectors and Fermi velocities from experiment (Schäfer et al 2005) for each of the theoretical models considered, together with the obtained mass renormalization values. Comparison of the Fermi velocities is done via the mass renormalization, which is calculated as the ratio of the theoretical to the experimental Fermi velocity. Comparison of the Fermi vectors indicates that the average difference between experiment and theory decreases the more exhaustive the theoretical model becomes. Although the effects of spin-orbit coupling, transitions into final states and broadening on the Fermi vectors are rather small, these effects are more pronounced when considering the Fermi velocities, i.e. the mass renormalization values. Figure 4 indicates that the mass renormalization decreases substantially when increasing the integrity of the theoretical treatment (moving from left to right in table 3 and figure 4) in 80% of the Fermi level crossings considered. When assessing the whole set of sampled k-points, it is seen that the mass-renormalization values are overestimated in Schäfer et al (2005) by 60% on average. Instead of an average mass renormalization of 2.3, we obtain a value of 1.4. Thus, inclusion of spin-orbit coupling, initial to final state transitions and broadening significantly increases the accuracy of the theoretical description of the experimental data.
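Since both renormalization measures used above are plain ratios, a short helper (our own sketch, reusing only the two BTM ratios quoted in the text) makes the relation between renormalization and percentage bandwidth reduction explicit.

```python
def renormalization(theory, experiment):
    """Ratio of a theoretical to an experimental quantity, as used in the
    text both for critical-point energies (bandwidth renormalization) and
    for Fermi velocities (mass renormalization)."""
    return theory / experiment

def bandwidth_reduction(r):
    """Percentage reduction of the experimental width relative to theory
    implied by a bandwidth renormalization r = E_theory / E_exp."""
    return 100.0 * (1.0 - 1.0 / r)

# Reproduce the BTM numbers quoted above (1.01 at Gamma, 1.14 at P):
for point, r in (("Gamma", 1.01), ("P", 1.14)):
    print(f"{point}: renormalization {r:.2f} -> "
          f"reduction {bandwidth_reduction(r):.0f}%")
```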
Discussion
We have seen that the recently reported large bandwidth renormalizations are mostly an effect of the photoemission measurement and of the choice of a semi-relativistic initial state. This is in agreement with literature data, which lead to a bandwidth reduction of only ∼10% without a conceivable trend towards enhancement for shallow bands (Rader and Gudat 1999).
It is, however, problematic for the analysis that the scatter between different LDA calculations increases near E_F (Rader and Gudat 1999). Concerning the comparison of the Fermi velocity, which is also based on a larger number of experimental data points than the bandwidth (Schäfer et al 2005), we find, with 1.4 on average, a considerably smaller mass renormalization than Schäfer et al, but within the limits of previous de Haas-van Alphen results (Lonzarich 1984).
There are not many studies on the Fermi velocity of metals by photoemission. For Ni, a large mass renormalization of v_F of 1.9-2.8 has recently been reported (Higashiguchi et al 2005). It was found to be in agreement with both de Haas-van Alphen data and the established bandwidth reduction in Ni of ∼30%. The mass renormalizations that were previously (Schäfer et al 2005) in part found in the upper range of the values inferred from de Haas-van Alphen data (Lonzarich 1984) are now well inside the rather broad range of de Haas-van Alphen values of 1.5-3 (Lonzarich 1984). The mass renormalization of v_F in Ni is somewhat smaller for enhanced sp-character in de Haas-van Alphen (Zornberg 1970) and photoemission data (Higashiguchi et al 2005). In fact, the mass renormalization of the sp-states in Ag is small, as derived from de Haas-van Alphen data (1.05) and photoemission from quantum-well states (0.96) (Hong et al 2002). Spin-dependent scattering has been suggested (Fedorov et al 2002, Hong and Mills 1999, Hong et al 2002) to lead to increased mass renormalization in either majority- or minority-spin bands. This is not observed in the current analysis, with the two highest values, 2.24 and 1.51, found on bands of opposite spin. A possible mechanism relating the proximity of opposite-spin sheets of different sizes to high mass renormalization (Schäfer et al 2005) is not supported by the current analysis, since our highest mass renormalization value, 2.24, remains on sheet I along Γ-P with no sheet of opposite spin nearby. Spin-wave renormalization (electron-magnon interactions) and electron-phonon interactions of bulk (Cui et al 2006, 2007) and surface (Schäfer et al 2004) states in ferromagnetic iron have been observed and may be responsible for the additional renormalization; comparison of sheet I (k_F = 0.95 Å⁻¹) in figure 2 and the corresponding experimental band (see Schäfer et al (2005), figure 12) indicates that the deviation between experiment and theory continues down to at least 0.5 eV binding energy. This is significantly larger than expected from magnon (surface, 0.125-0.16 eV (Schäfer et al 2004), and bulk, 0.270 eV (Cui et al 2006, 2007)) and phonon (bulk, 0.04 eV (Cui et al 2006, 2007)) renormalization. Such interactions are observed as kinks or deviations from the predicted band structure. In the vicinity of the kink, these deviations amount to <0.02 Å⁻¹ in the case of coupling to magnons (Cui et al 2006, Schäfer et al 2004) and <0.004 Å⁻¹ for coupling to phonons (Cui et al 2006, 2007), while the current variation is an order of magnitude larger (∼0.15 Å⁻¹); a similar deviation is observed for the other bands where mass renormalization is observed. It is therefore believed that the mass renormalization observed in the current data is the result of many-electron interactions, as the deviation between experiment and theory is significantly larger than expected for electron-magnon and electron-phonon interactions.
Conclusion
Recently, large correlation-induced renormalizations of the occupied bandwidth and the Fermi velocity of ferromagnetic iron based on DFT of initial states were reported. We have applied a similar DFT for initial states and three models of angle-resolved photoemission. The most comprehensive model is a broadened transition model that incorporates the calculation of spin-orbit coupling, initial to final state transitions and broadening effects into the DFT of initial states in the three-step model of photoemission. The Fermi vector, mass renormalization and bandwidth renormalization values obtained from this model give greater agreement with experimental data than density-functional calculations (Schäfer et al 2005) of the initial states alone. The bandwidth renormalization within 1 eV of the Fermi energy is not found to be ∼30% as in Ni but of the order of 10%. This is in contradiction to recent LDA + DMFT calculations (Grechnev et al 2007), which indicated that the real component of the self-energy would provide a bandwidth renormalization of ∼50%; a detailed investigation of this discrepancy is left to future work including a larger number of critical point energies over a wider binding energy range. Neither a preferred scattering for minority spin as found for Gd (Fedorov et al 2002) nor for majority spin as predicted by spin-flip scattering under the emission of a spin wave (Hong and Mills 1999) is found here, in agreement with the previous analysis (Schäfer et al 2005). No obvious spin dependence of the mass renormalization is found here, and the vicinity of Fermi surface sheets of opposite spin and much smaller occupied fraction does not enhance the mass renormalization. The increased mass renormalization is attributed to many-electron interactions due to the large energy (0.5 eV) and momentum (∼0.15 Å⁻¹) deviations between experiment and theory. It is to the merit of Schäfer et al that they have pointed out the persistence of a substantial mass renormalization in ferromagnetic transition metals, an effect that has largely been ignored since the de Haas-van Alphen results (Lonzarich 1984). It is shown in this paper that while photoemission enables detailed k-dependent investigations, the size of the renormalization is reduced by a better description of the photoemission process. A description with improved accuracy is achieved within the three-step model of photoemission without significant computational complexity. In this context, it would be desirable to verify the present results on final-state effects on the Fermi velocity also within a one-step model (Braun 1996, Redinger et al 1988) and using advanced theoretical methods treating electron correlation such as dynamical mean field theory (Lichtenstein et al 2001).
"Smart City" Management: Environmental and Ecosystem Sustainability
This paper aims to analyse the potential of "smart city" management facilitated through the integration of economic-legal and technological tools. Based on an expert survey, measures are identified to integrate economic-legal and technological tools in the concept of "smart city" management, and a model of "smart city" management is developed. Foreign practices of "smart city" management are reviewed in the context of raising environmental and ecosystem sustainability. It is shown that "smart city" management is contingent upon the implementation of specific measures to integrate economic-legal and technological tools, as well as on the development of a model of "smart city" management and adoption of existing foreign practices in raising environmental and ecosystem sustainability.
Introduction
Despite the spread of suburbanisation in developed countries, globally, the trends are still toward the growth of cities and, accordingly, growing urban populations (Goryainova: 2020; Deeva et al.: 2020). The WHO estimates that by the 2030s, 60% of the world's population will reside in urban areas. Such rapid growth of urbanisation is likely to cause excessive pressure on urban infrastructure and result in environmental problems (Halpern et al.: 2013).
To mitigate the potential risks that will inevitably occur in overpopulated cities, a solution might be to implement the "smart city" concept (Fiofanova: 2020; Kirillova et al.: 2019; Kozlov et al.: 2020). It is also likely that "smart cities" can already significantly improve living standards in major cities and agglomerations with populations in the millions (Gogiberidze et al.: 2020; Bancerova: 2020). This approach would enable prompt response in complex situations and smooth coordination.
It would also contribute to better quality and resource efficiency of services, in particular with regard to spending. All city utilities would operate in a single information environment, resulting in improved service standards and better environmental and ecosystem sustainability and enabling real-time public control of their performance (Deakin, Al Waer: 2011; Glebova et al.: 2019). The primary lines of modernisation and improvement have been energy-efficient housing and transportation, prudent management of energy, deployment of information and communication systems across the utility sectors and so on (Savina et al.: 2020; Nurutdinova, Bekisheva: 2020).
Examples of investment programmes in "smart cities" in the EU are COOPERATE (Control and Optimization for Energy Positive Neighbourhoods), BESOS (Building Energy decision Support systems fOr Smart cities), DAREED (innovative business models and energy efficiency), EPIC-HUB (Energy-Hub approach) and others (Sujata et al.: 2016). One typology distinguishes cities considered from employment and livability perspectives; strategy cities implementing high-technology projects and innovation concepts targeting improvements in the long-term comfort of living for residents; and innovative cities which survived a crisis caused by the demise of traditional economic industries and were able to employ an innovative approach to set new growth spots and attract intellectual resources.
Under this concept, "smart cities" are understood not as objects of public governance but as objects of spontaneous economic development, which is not always driven along a set strategic course.
Another approach is adopted by the developers of the concept of European "smart cities" from the Vienna University of Technology. In their view, a "smart city" is a management category, a city that makes optimal use of all the interconnected information available today to better understand and control its operations and optimize the use of limited resources, including those of its residents (Neirotti et al.: 2014).
Consider what comes into the notion of a "smart city" through the lens of different approaches to its interpretation. Table 1 lays out definitions of the notion of "smart city" through the lens of institutional, social, economic and technological approaches and the integrated management approach based on them.

Table 1. Definitions of the notion of "smart city"

Institutional approach:
• A city connecting the physical infrastructure, the IT infrastructure, the social infrastructure and the business infrastructure to leverage the collective intelligence of the city (Al-Hader, Rodzi: 2009).
• A city that makes a conscious effort to innovatively employ information and communication technologies (ICT) to support a more inclusive, diverse and sustainable urban environment (Albino et al.: 2015).
• A "smart community" shaped and aligned around a system of specialised institutions integrated into the urban environment (Hollands: 2008).

Social approach:
• A city well performing in a forward-looking way in six smart characteristics (factors): smart economy, smart mobility, smart environment, smart people, smart living and smart governance, built on the smart combination of endowments and activities of self-decisive, independent and aware citizens (Shapiro: 2006).
• A comfortable, livable environment created, in particular, through the efficient use of the human factor and intellectual capital to support progressive institutional and economic transformations in the city (Komninos: 2011).

Economic approach:
• A city well performing in a forward-looking way in economy, people, governance, mobility, environment and living, built on the smart combination of endowments and activities of citizens and fueling sustainable economic growth and high quality of life, with a wise management of natural resources, through participatory governance (Bronstein: 2009).

Technological approach:
• A city combining ICT and Web 2.0 technology with other organizational, design and planning efforts to dematerialize and speed up bureaucratic processes and help to identify new, innovative solutions to city management complexity to improve sustainability and liveability (Maeda: 2012).
• A city where computing technologies are used to make the critical infrastructure components and services of a city (which include city administration, education, healthcare, public safety, real estate, transportation and utilities) more intelligent, interconnected and efficient (Kourtit, Nijkamp: 2012).

Integrated management approach:
• A city that monitors and integrates conditions of all of its critical infrastructures, can better optimize its resources, plan its preventive maintenance activities and monitor security aspects while maximizing services to its citizens (Paskaleva: 2009).
• The term refers to the relation between the city government administration and its citizens; good governance or smart governance is often referred to as the use of new channels of communication for the citizens, e.g., e-governance or e-democracy (Deakin et al.).

The above classification of definitions according to the four proposed approaches helped us to aggregate the findings and arrive at our own definition of a "smart city" following the integrated management approach, referring to a complex and multi-factor municipal system combining institutional, social, economic, environmental and technological constituents ensuring, through their efficient, well-aligned and integrated operation, sustainable urban development and better usability of improved services.
This paper aims to analyse the potential of "smart city" management facilitated through the integration of economic-legal and technological tools. Objectives:
• to define measures to combine economic-legal and technological tools;
• to propose a model of "smart city" management;
• to analyse foreign practices of "smart city" management in the context of raising environmental and ecosystem sustainability.
Research hypothesis: "Smart city" management is contingent upon the implementation of specific measures to integrate economic-legal and technological tools, development of a model of "smart city" management and adoption of existing foreign practices in raising environmental and ecosystem sustainability.
The findings suggest that the research objective is met.
Methods
The research was conducted from 05.08.2020 to 25.09.2020.
The following general scientific methods were used to address the objectives: a) theoretical methods: analysis of peer-reviewed scholarly sources on the subject to identify aspects of the management approach to a "smart city" and a review of foreign practices of "smart city" management; b) empirical methods: an expert survey. The experts were asked to address the following objectives: to determine measures to integrate economic-legal and technological tools, to substantiate technical instruments of a "smart city" management model, and to refer to foreign practices to showcase the main requirements of "smart city" management. The first stage of research involved the analysis of available data and scholarly papers on the subject.
The second stage involved the development of a questionnaire as a semi-formalised list of questions concerning the problems of "smart city" management.
The third stage involved conversations with the experts (online) in accordance with the prepared questionnaires. The survey was conducted in Russian on 16.09.2020 among the experts (30 individuals), specifically municipal officials with more than five years of professional experience in digital adoption in urban management. All participants were notified regarding the purpose of the survey and the intent of the organisers to publish the findings in aggregate.
Results
According to the experts, efficient "smart city" management requires the implementation of measures to integrate economic-legal and technological tools (Table 2). Based on the expert survey, a model of "smart city" management was developed with a focus on the quality of functioning and organisation of the objects of an urban environment using modern technology to accommodate the needs of people and improve environmental and ecosystem sustainability (Table 3).

Table 3. Technological instruments of the "smart city" management model

1. Internet of Things (IoT). A concept of a communication network of physical or virtual objects ("things") powered by technologies enabling their interaction with each other and with surrounding environments, as well as perception, communication and generation of new information without human engagement.

2. Smart sensors and monitoring devices. The use of measuring devices, sensors, video cameras, GPS devices, thermostats, weather stations and drones to monitor traffic, weather and CO2 levels, track the routes of public transport, and control emergencies in utilities, the safety of buildings, etc. The concept of "smart city" means these devices would be interrelated, interoperable and linked to a single city operating platform.

3. Smart personal devices. Providing residents and visitors with the opportunity to use personal devices (smartphones, tablets) for their own needs by developing useful applications for prompt and qualitative use of e-services, tourist routes and information, the opportunity to track public transport services and congestion, locate free parking spaces, take part in voting, submit petitions, attract consultations, etc.

4. Cloud computing. Sets of technologies of data storage and/or processing offered as services to the customer by the provider using hardware and software available on the Internet with the traditional client-server architecture.

5. Big data analysis. Big volume, high speed and wide range of information assets requiring new forms of operation, including more wide-ranging decision-making and process optimisation. Big data for "smart cities" provide an opportunity for transition to a higher quality of management and improved response to the numerous needs of residents and of the city as an ecosystem. Big data is a new technology requiring considerable investment and powerful IT infrastructures. Technological standards contribute to quality decision-making related to technology.

Note: based on the expert survey.
Discussion
As can be seen from foreign practices, "smart cities" are largely powered by flexible telecommunications architecture, open platforms and continuous monitoring. Urban "smart infrastructures" are developed on modular (cassette) technologies.
"Smart city" systems crucially integrate sensors, meters (wired or wireless) and various peripheral devices transmitting on-line data to the processing centre on a continuous or regular basis. E. g., the information and communications networks in Santander, Cantabria, in northern Spain integrate more than 20,000 sensors (to register pollution levels, noise, traffic, parking, etc.), buildings, utility grids (water supply, gas, electricity, lighting), transport links, utility and support services. Such "smart infrastructure" facilitates efficient urban operation, coordination of services and institutions and helps to ensure proper security (Aletà et al.: 2017).
Over the past decades, "smart cities" have operated broadband telecommunication networks to provide e-services and monitor the ecosystem in a cluster, community or region. Berlin has been advancing intellectual networks powered by Big Data and analytical processing of data from global monitoring of the vital structures of the megapolis (Harris: 2014).
Across the world, green "smart cities" have been on the rise recently as an innovative model based on the digitalisation of municipal development and facilitating not only cardinal improvements in prosperity and living standards but also environmental security and energy-saving. A major proponent of "smart city" transformations has been the EU.
A green IT strategy has been developed and implemented in Stockholm as part of the "smart city" strategy. The Green IT strategy aims to reduce the environmental footprint by using such IT functions as energy-efficient buildings, monitoring traffic and development of e-services (Bibri, Krogstie: 2020b).
London, which is one of the leaders of digital adoption in municipal governance among European "smart cities", experimented with setting up a green area, i.e., an area of restricted access in the city centre, which helped to considerably decrease harmful emissions in the air (Bibri, Krogstie: 2020a).
In June 2014, the Smart City Wien Framework Strategy was adopted, which envisaged the introduction of advanced solutions powered by ICT. The strategy will be implemented until 2050 to facilitate gradual and continuous modernisation of the city, which is meant to produce: cuts in energy consumption; reductions in greenhouse gas emissions without abandoning the technology responsible for their accumulation; mobility driven by broadband, intellectual ICT and innovative solutions; responsible and efficient use of resources; efficient methods of organisation of urban transport networks; protective management of water resources, waste, heating systems and lighting in buildings, streets and advertising or information billboards; interactive methods of work in urban administration; and improved security in public spaces (Kylili, Fokaides: 2015).
According to the experts, a functional, optimal and reasonable system of managing a "smart city" is one generating data, integrated down to the detail level, which may be analysed and which allows optimisation of such principal resources and urban functions as transport, infrastructure, energy, public health and security. Without such data-gathering tools, cities may be unaware of the directions and methods for improving the functioning of the urban environment. All such data is practically invisible and cannot be seen physically, but if a prudent approach is adopted to gathering city data to inform decision-making, local living standards would improve significantly. It is also crucial that such data should be integrated at the lowest granular level and systematised into a single picture.
It is specifically data integration that is, according to the experts, a city's path to becoming a "smart city." To enable data integration and processing, city administrations should prepare in advance. One of the first steps toward becoming a "smart city" is the analysis of the already available "smart city" systems and the respective data. The focus should also be placed on the existing models of data gathering and processing. Based on the above, city authorities have the opportunity to determine optimal programmes and architectures for storing, merging and using data according to the functional purpose, and to take strategic management decisions based on the expected outcomes. A hedged sketch of such integration is given below.
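As an illustration of the granular data integration the experts describe, the sketch below merges two hypothetical sensor feeds on a shared timestamp-district key with pandas; every table, column name and value is an assumption invented for this example, not part of any real city platform.

```python
import pandas as pd

# Hypothetical feeds; all schemas and numbers are illustrative assumptions.
traffic = pd.DataFrame({
    "timestamp": pd.to_datetime(["2020-09-16 08:00", "2020-09-16 09:00"]),
    "district": ["centre", "centre"],
    "vehicles_per_h": [1840, 2210],
})
air = pd.DataFrame({
    "timestamp": pd.to_datetime(["2020-09-16 08:00", "2020-09-16 09:00"]),
    "district": ["centre", "centre"],
    "co2_ppm": [460, 495],
})

# Integrate at the lowest granular level (timestamp + district), so that
# management decisions can draw on one systematised picture.
city = traffic.merge(air, on=["timestamp", "district"])
print(city)
```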
Conclusion
Following a concept analysis of the notion of "smart city" through the lens of institutional, social, economic, technological and integrated management approaches, as proposed in previous research, we refined the category framework of the subject area of this research.
In a general sense, a "smart city" is a system facilitating the most efficient use of the available resources of city services and ensuring maximum security of city life. Such a city continuously raises the number and quality of services for people, while ensuring the sustainability of the environment to support comfort and quality of life through improved environmental and ecosystem sustainability.
From the standpoint of public governance, a "smart city" is a managed, multi-factor complex system integrating the above components and aligning them with the context of sustainable development. The purpose of development in such a system is to ensure improved comfort of urban life and environmental security, a key requirement underlying the idea of what is reasonable in city management.
As can be seen from the findings and review of foreign practices, the research hypothesis holds that "smart city" management is contingent upon the implementation of specific measures to integrate economic-legal and technological tools, development of a model of "smart city" management and adoption of existing international practices in raising environmental and ecosystem sustainability.
Further substantiation is needed to derive a complex of scholarly propositions building the principal methodological basis for the implementation and continuous improvement of the "smart city" system in big Russian cities, which should be facilitated by the adoption of corresponding methodological and technological support for these processes.
Intracellular Trafficking of Cationic Carbon Dots in Cancer Cell Lines MCF-7 and HeLa—Time Lapse Microscopy, Concentration-Dependent Uptake, Viability, DNA Damage, and Cell Cycle Profile
Fluorescent carbon dots (CDs) are potential tools for the labeling of cells, with many advantages such as photostability, multicolor emission, small size, rapid uptake, biocompatibility, and easy preparation. Affinity towards organelles can be influenced by the surface properties of CDs, which affect the interaction with the cell and the cytoplasmic distribution. Organelle targeting by carbon dots is promising for anticancer treatment; thus, the intracellular trafficking and cytotoxicity of cationic CDs were investigated. Based on our previous study, we used quaternized carbon dots (QCDs) for treatment and monitoring of the behavior of two human cancer cell lines, MCF-7 and HeLa. We found similarities between human cancer cells and mouse fibroblasts in the case of QCD uptake. Time lapse microscopy of QCDs-labeled MCF-7 cells showed that cells die during the first two hours, faster at lower doses than at higher ones. QCDs at a concentration of 100 µg/mL entered the nucleus before cellular death; however, at a dose of 200 µg/mL, blebbing of the cellular membrane occurred, with a subsequent penetration of QCDs into the nuclear area. In the case of HeLa cells, the dose-dependent effect did not occur; however, the labeled cells also died in mitosis, and genotoxicity occurred at nearly all doses. Moreover, contrasted intracellular compartments, probably mitochondria, were obvious after 24 h incubation with 100 µg/mL of QCDs. The levels of reactive oxygen species (ROS) slightly increased after 24 h, depending on the concentration; thus, the genotoxicity was likely evoked by the nanomaterial itself. The decrease in viability did not reach IC50, as the DNA damage was probably partly repaired in the prolonged G0/G1 phase of the cell cycle. Thus, defects in the G2/M phase may have allowed damaged cells to enter mitosis and undergo apoptosis. The anticancer effect in both cell lines was manifested mainly through genotoxicity.
Introduction
Over recent decades, carbon dots (CDs) have been very popular as promising fluorescent probes due to their low photobleaching, lack of optical blinking, tunable photoluminescence, versatile surfaces, and excellent biocompatibility [1,2]. These excellent properties have made CDs successful in applications of bioimaging, drug delivery, biochemical detection, and sensors [3][4][5][6][7]. Passivation of nanoparticles, including CDs, is a very commonly used procedure which equips the nanomaterials with different properties such as charge [8,9], fluorescence, surface specificity, and affinity to cellular structures (cellular membrane, organelles, proteins, genes) [9], and influences cellular uptake [8,10], drug delivery [11][12][13], gene transfection efficacy [14][15][16][17], and general biological effects [18]. Synthesis of cationic carbon dots can be achieved by a facile ultrasonic method [19], a green melting method [20], a hydrothermal method [21,22], or thermal oxidation [23][24][25]. Positively charged carbon dots are promising for in vitro applications (genetic engineering [26], biosensors [27], diagnosis [28], antibacterial agents [25]). They can show different behaviors in various cell lines, as we have already studied with regard to healthy animal cell lines [8,29,30]. A variety of CDs have been applied for labeling different subcellular structures [31]; however, most CDs mainly accumulate in the cytoplasm, especially in endo/lysosomes, mitochondria, the Golgi apparatus, and the endoplasmic reticulum [32]. CDs have also been studied for nucleus labeling, photodynamic therapy, and optical monitoring of anticancer drugs [33][34][35][36]. DNA damage can be caused by the small size of nanoparticles together with their surface charge and surface functional groups, which alter the interaction of CDs with cells [37]. Since many anticancer drugs are required to enter the cell nucleus, where the drugs damage the genes to stop the proliferation of the cancer cell [28,38,39], nucleus targeting, optical monitoring, genotoxic effect, inhibition of proliferation, and targeting of mitochondria are all regarded as crucial properties of nanomaterials for anticancer treatment. Thus, the present work is focused on QCD trafficking and cytotoxicity assays to provide a comprehensive study for future anticancer drug delivery complexes.
Intracellular Observation of QCDs-Labeled MCF-7 Cells
The cancer MCF-7 cell line was incubated with various doses of QCDs, i.e., 50, 100, 200, 400 µg/mL, for 24 h and imaged (Figure 1). MCF-7 cells treated with 50 µg/mL of QCDs had full endo/lysosomes in the perinuclear area (Figure 1a). QCD concentrations from 100 µg/mL upward changed the cellular morphology to a ring shape (Figure 1b-d); thus, it was not possible to recognize whether the QCDs were present in the nucleus or not. For better information on the interaction of cells with the nanomaterial, live monitoring was recorded immediately after the addition of QCDs. It was obvious that QCDs entered the nuclei after 45 min of incubation with cells (see Figure 2), and after 24 h, the cells revealed weak adherence and were dying, as in the case of L929 cells [29]. Weak adherence of labeled MCF-7 cells reduced the number of cells available for imaging at higher doses. At a concentration of 200 µg/mL, we observed that the penetration into the nuclei occurred surprisingly later (between 65-70 min, see Video S1) than at a concentration of 100 µg/mL; however, the blebbing happened at the same time as the signal enriched the nuclei (Figure 3). Thus, according to the microscopy evaluation, the morphology of MCF-7 cells was significantly deformed at a dose of 100 µg/mL, and the presence of QCDs in the nuclei and nucleoli preceded cellular death, as blebbing of the cellular membrane occurred after the QCDs contrasted the nuclear area of MCF-7 cells. Although cells had weak adherence and viability was not strongly reduced, as witnessed by the cytotoxicity measurements (Figure 4a), we considered this dose critical based on the changes in morphology. The comet assay demonstrated damage to DNA. A concentration of 50 µg/mL evoked a tail value of less than 10% (please note: the "head" is intact DNA in the nucleus, the "tail" is damaged DNA migrated away from the nucleus; when the tail value is less than 10%, the dose is non-genotoxic). At a concentration of 100 µg/mL, the value of the tail was 14.11% (Figure 4b). Subsequently, genotoxicity grew with increasing concentrations of QCDs and reached 50% at 400 µg/mL (see Figure 4b). In summary, very sensitive doses of QCDs for MCF-7 cells are 100 and 200 µg/mL. However, the faster killing, monitored during the first two hours immediately after addition of the sample, occurred at a concentration of 100 µg/mL. QCDs at higher concentrations covered the cells and restricted movement and proliferation. This is also viewed as a beneficial effect in antitumor treatment (Figure 5).
Intracellular Observation of QCDs-Labeled HeLa Cells
The same concentrations, conditions, and techniques were also used in the case of HeLa cells. At a concentration of 50 µg/mL, QCDs were located in the cytosol and around the nuclei in endo/lysosomes (Figure 6a); however, 100 µg/mL of QCDs filled the whole cytosol and probably interacted with mitochondria (see Figure 6b). Mitochondria in human cancer cells are closely related to cancer cell proliferation, invasion, metastasis, and drug-resistance mechanisms, making them a promising target organelle for anticancer treatment [40]. Carbon dots, from a concentration of 200 µg/mL, entered the nuclei (confirmed by live monitoring: QCDs entered the nucleus after 75 min, see Video S2) and after 24 h, they caused cellular death, especially in mitotic cells (Figures 6c and 7). Those cells which survived the highest dose of QCDs, i.e., 400 µg/mL, were massively deformed and detached, since after washing, the resulting number of cells was very low. Genotoxicity in the HeLa cells occurred at nearly all doses (12.13% at 50 µg/mL, 8.54% at 100 µg/mL, 13.42% at 200 µg/mL, and 29.56% at 400 µg/mL) (Figure 8b). Despite the fact that QCDs enter the nuclei and nucleoli, as also seen in our previously tested healthy cell lines NIH/3T3 and L929 [29], we can assume that they caused genotoxicity only in the cancer MCF-7 and HeLa cells. Until now, only a few studies have focused on genotoxicity caused by carbon dots themselves [30,41]; CDs are usually examined as biosensors for DNA detection [42][43][44][45].
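The 10% tail threshold used throughout the comet-assay evaluation reduces to a one-line rule. The sketch below is our own helper, reusing only the HeLa tail percentages reported above, and shows how each dose would be classified.

```python
def is_genotoxic(tail_percent, threshold=10.0):
    """Comet-assay call: a dose is considered genotoxic when the percentage
    of DNA in the tail exceeds the threshold (10% in the text above)."""
    return tail_percent > threshold

# Tail values reported above for HeLa cells (dose in ug/mL -> % tail DNA):
hela_tails = {50: 12.13, 100: 8.54, 200: 13.42, 400: 29.56}
for dose, tail in hela_tails.items():
    flag = "genotoxic" if is_genotoxic(tail) else "non-genotoxic"
    print(f"{dose:>3} ug/mL: tail {tail:5.2f}% -> {flag}")
```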
Concentration-Dependent Uptake and Oxidative Stress in Both Cell Lines
Intracellular trafficking and cytotoxicity depend also on the surface properties of the sample and on the amount of nanomaterial incorporated into the cells, as this can disrupt cellular homeostasis [46]. From our microscopic results, especially on the MCF-7 cell line, it was obvious that QCD concentrations in the interval of 100-200 µg/mL evoked different contrasting and cellular death. Thus, we measured the fluorescence intensity of QCDs in individual cells by flow cytometry and tried to reveal the cause of the different cell sensitivities. Cells were incubated with QCDs at concentrations from 0 to 400 µg/mL for 24 h at 37 °C, then trypsinized and measured by flow cytometry (Figure 9). From the observed results, it is obvious that the uptake of both cell lines is similar at a concentration of 50 µg/mL. At a dose of 100 µg/mL, a strong increase in the uptake of MCF-7 cells is observed in comparison to HeLa cells, for which the uptake occurs in a more gradual manner. This measurement confirmed the microscopic results, because the highest uptake of MCF-7 cells is found between the concentrations of 100 and 200 µg/mL, whereas the saturating concentration of QCDs is 200 µg/mL in the case of the HeLa cells. This assay was performed in parallel with mouse NIH/3T3 and L929 fibroblasts, and we found interesting similarities between NIH/3T3 and HeLa cells vs. L929 and MCF-7 cells (see Figure S1). The main changes occurred at 100 µg/mL, when the similarity of uptake became clearly visible in two groups (NIH/3T3 and HeLa cells vs. L929 and MCF-7 cells) and persisted up to the highest doses. These results showed that L929 and MCF-7 cells have a significantly stronger uptake than NIH/3T3 and HeLa cells. Quantitative determination of internalized CQDs in different cell lines was also described in a study [7] focused on the concentration-dependent photoluminescence of nitrogen-containing carbonaceous quantum dots (N-CQDs). Moreover, the authors of another study analyzed oxidative stress conditions by examining the cellular anti-oxidative capacity as a defensive response to increasing concentrations of N-CQDs [47].
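To make this kind of flow cytometry readout concrete, below is a minimal Python sketch of how a dose-dependent uptake curve can be summarized from per-cell fluorescence intensities. The data here are simulated placeholders, not values from this study; real input would be the per-cell intensity arrays exported from the cytometer for each QCD concentration.

```python
import numpy as np

# Hypothetical per-cell QCD fluorescence intensities (arbitrary units),
# one array per dose, standing in for the cytometer export.
doses_ug_ml = [0, 50, 100, 200, 400]
rng = np.random.default_rng(0)
per_cell_intensity = {d: rng.lognormal(mean=np.log(1 + d), sigma=0.4, size=10_000)
                      for d in doses_ug_ml}

# The median fluorescence intensity (MFI) is a robust per-dose summary,
# less sensitive to bright outliers than the mean.
baseline = np.median(per_cell_intensity[0])
for d in doses_ug_ml:
    mfi = np.median(per_cell_intensity[d])
    print(f"{d:>3} ug/mL: MFI = {mfi:8.1f} ({mfi / baseline:5.1f}x over control)")
```

Comparing such MFI values across doses and cell lines is one simple way a saturating concentration, like the one noted above for HeLa cells, can be read off a measured curve.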
Cellular uptake of nanoparticles relates to the induction of intracellular reactive oxygen species (ROS) [48]. A low ROS level is generated by normal cell metabolism (physiological oxidative stress), whereas high ROS production leads to oxidative damage of cells and death caused by an excessive and toxic oxidative burden [49]. Thus, the oxidative stress of both cell lines was tested after 24 h of incubation with QCDs (Figure 10). The ROS level was not significant up to 100 µg/mL, and although ROS subsequently increased up to a concentration of 250 µg/mL, the ROS level was not high in comparison to other studies [8,50]. Therefore, ROS production was not identified as the major mechanism of cell damage. Oxidative stress may also lead to genomic instability [51], but according to these results, the genotoxicity was likely evoked by the nanomaterial itself. Cellular damage depends on the repair of DNA in the cell cycle, where only cells with intact DNA can continue to mitosis, while cells with damaged DNA undergo cellular death [52]. Thus, the cell cycle profile was measured.
Cell Cycle Analysis of MCF-7 and HeLa Cells
The cell cycle profile was analyzed by flow cytometry using a DNA kit (BD Cycletest™ Plus DNA Kit, East Rutherford, NJ, USA), as in our previous work [29]. The cell cycle of MCF-7 cells labeled by QCDs showed no significant changes at concentrations of 50-300 µg/mL. Nevertheless, comparing the highest dose (400 µg/mL) with the control non-labeled cells, the G0/G1 phase was slightly prolonged and the G2/M phase shortened (Figure 11a). Defects in the G2/M phase may allow a damaged cell to enter mitosis and undergo apoptosis [53]. In the case of QCD-treated HeLa cells, all doses affected the cell cycle profile: with growing concentrations of QCDs, the G0/G1 phase was prolonged and the G2/M phase shortened (Figure 11b). These changes can be a sign of low proliferation with increasing doses, of the entry of damaged cells into G0, and of the activation of apoptosis, or may also be a sign of the repair of damaged DNA, which was confirmed by our genotoxicity measurement (Figure 8b).
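For illustration, the sketch below gates a simulated DNA-content histogram into G0/G1, S, and G2/M fractions using simple thresholds around the 2N and 4N peaks. This is only a rough stand-in for dedicated cell cycle software (such as that accompanying the BD kit), which fits overlapping distributions instead of hard thresholds; all numbers are invented.

```python
import numpy as np

# Simulated single-cell DNA content (e.g., propidium iodide area signal)
# after doublet exclusion: 2N peak ~100, 4N peak ~200 (arbitrary units).
rng = np.random.default_rng(1)
dna = np.concatenate([
    rng.normal(100, 6, 6000),     # G0/G1 cells (2N DNA)
    rng.uniform(112, 188, 1500),  # S-phase cells (between 2N and 4N)
    rng.normal(200, 10, 2500),    # G2/M cells (4N DNA)
])

# Hard threshold gating around the two peaks (approximate by design).
g0g1 = np.mean(dna < 115) * 100
g2m = np.mean(dna > 185) * 100
s_phase = 100 - g0g1 - g2m
print(f"G0/G1: {g0g1:.1f}%  S: {s_phase:.1f}%  G2/M: {g2m:.1f}%")
```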
Materials and Methods
This work was performed in parallel with our previous study [29]; thus, the methods are very similar or identical.
Carbon Dots
Quaternized carbon dots (QCDs) were prepared by thermal oxidation of a tris(hydroxymethyl)aminomethane (Tris)-betaine hydrochloride salt precursor, where Tris provides the carbon source and betaine the surface modifier [23]. The QCDs have sizes in the range of 4-9 nm and quasi-spherical morphology (Figure 12), and display a positive zeta potential of +43 mV at neutral pH (e.g., the pH of the QCDs' aqueous dispersion). The presence of –N(CH₃)₃⁺ groups in the QCDs was evidenced by NMR spectroscopy, as well as by the highly positive zeta potential at neutral pH. Furthermore, based on elemental analysis and TGA, an anion-exchange capacity of 2.1 mmol g⁻¹ was estimated in the chloride form (Cl⁻ ions compensate the positively charged quaternary ammonium groups). The quantum yield of the dots (4%) was estimated for the blue part of the emission spectrum, where the PL had the highest intensity. At longer wavelengths, the emission, and hence the quantum yield of the dots, decreased significantly. High quantum yields are usually considered suspicious in terms of the purity of the dots and the origin of the fluorescence [54]. In the present case, the value of 4% is typical of carbon dots void of any fluorescent impurities. In general, synthesis temperatures below 200 °C favor the formation of molecular fluorophores that dramatically increase the quantum yield; however, removal of such impurities by extensive purification results in quantum yields of 1-3%. On the other hand, synthesis temperatures above 200 °C (as is true for our dots) result in higher degrees of carbonization and less formation of fluorescent impurities, with a concurrent drop in quantum yield [55]. The purity of the dots used in this work was evidenced by capillary electrophoresis, where a single narrow peak was observed in the corresponding electropherogram. For more information on the material characterization, please see our previous studies [23,29].
Cell Cultivation, Microscopy
Both cell lines (MCF-7 and HeLa) were purchased from the American Type Culture Collection (ATCC, Manassas, VA, USA) and were cultivated in high-glucose DMEM (Life Technologies, Carlsbad, CA, USA). The media also contained 10% fetal calf serum (FCS), 10,000 U/mL penicillin, and 10,000 µg/mL streptomycin. Cells were incubated at 37 °C under a 5% CO₂-enriched atmosphere. An Olympus IX 70 light microscope equipped with phase contrast was used to monitor cell confluence.
Fluorescence Microspectroscopy
MCF-7 or HeLa cells were seeded on glass-bottom cell culture dishes (Nunc™, ThermoFisher Scientific, Waltham, MA, USA) at a density of 7 × 10³ cells and cultured for 24 h at 37 °C and 5% CO₂. The next day, QCDs were diluted in the cell medium, added to each culture dish to achieve the desired final concentration (50, 100, 200, or 400 µg/mL), and left to incubate for 24 h. Before imaging, the dishes were washed twice with PBS and filled with a solution of HEPES and PBS (1:9). For live monitoring of the uptake, images were taken for 2 h at 5-min intervals immediately after the addition of QCDs (400 µg/mL).
CDs were excited by widefield illumination with an Hg arc lamp (Sutter Lambda LS, Novato, CA, USA) through an excitation filter with a transmission window of 430-490 nm, and emission was collected within 506-594 nm (all filters by Semrock, West Henrietta, NY, USA), using a 100×/1.4 oil-immersion objective (Nikon, Tokyo, Japan). As reported previously [56], spectrally resolved images were acquired sequentially by scanning the transmission window of a liquid crystal tunable filter (CRi VariSpec VIS-10-20, Cambridge Research & Instrumentation, Inc., Hopkinton, MA, USA), placed in front of an EMCCD camera (Andor iXon3 897, Oxford Instruments, Oxfordshire, UK), in 5-nm steps. From each 3D dataset (x, y, λ), spatially resolved emission spectra were extracted and analyzed using custom spectral fitting software written in Wolfram Mathematica [57] to determine local intensities and spectral peak positions.
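The fitting step can be illustrated with a short Python sketch of the same idea implemented in the Mathematica software mentioned above: for each pixel, a single Gaussian band is fitted to the emission spectrum sampled by the tunable filter, yielding the local peak position and intensity. The pixel spectrum below is simulated; only the 506-594 nm window and the 5-nm step come from the setup described here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(lam, amp, mu, sigma, offset):
    """One emission band: amplitude, peak position (nm), width, baseline."""
    return amp * np.exp(-0.5 * ((lam - mu) / sigma) ** 2) + offset

# Wavelength grid of the tunable-filter scan (506-594 nm, 5-nm steps).
lam = np.arange(506.0, 595.0, 5.0)

# Simulated emission spectrum of one pixel from the (x, y, lambda) stack.
rng = np.random.default_rng(2)
spectrum = gaussian(lam, 900.0, 545.0, 18.0, 50.0) + rng.normal(0.0, 15.0, lam.size)

# Initial guesses taken directly from the data, then least-squares refinement.
p0 = [spectrum.max(), lam[np.argmax(spectrum)], 15.0, spectrum.min()]
popt, _ = curve_fit(gaussian, lam, spectrum, p0=p0)
amp, mu, sigma, offset = popt
print(f"local peak position = {mu:.1f} nm, peak intensity = {amp:.0f} counts")
```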
Cell Cycle and Concentration-Dependent Uptake
A BD FACSVerse flow cytometer (BD Biosciences, East Rutherford, NJ, USA) was used for the determination of cell cycle, concentration-dependent uptake, and endocytosis analysis. The LIVE/DEAD® Viability/Cytotoxicity Kit was used to quantify cell death. Cells were incubated with 2 µL of ethidium bromide (2 mM) and 2 µL of calcein-AM (50 µM), diluted in DMSO. The fluorescence signal was measured by flow cytometry (red: exc. 488 nm/em. 700 nm; green: exc. 488 nm/em. 527 nm). The red signal of ethidium bromide marked dead cells, which had lost membrane integrity, whereas green cells had active intracellular esterases that catalyzed the conversion of non-fluorescent calcein-AM to highly fluorescent green calcein [29].
Comet Assay
Genotoxicity was studied by the comet assay, which detects DNA damage. The principle of this method is based on single-cell gel electrophoresis (SCGE), during which intact DNA stays in the nucleus (the "head") and damaged DNA migrates away from the nucleus (resembling the "tail" of a comet). A specific fluorescent DNA probe allows comparison of the fluorescence intensity of the nucleoid (head) and the migrated DNA (tail) in the image analysis [58]. In our study, we followed the methods of [59]. Microscope slides were covered with 1% HMP agarose; thereafter, the cells were trypsinized, washed with DMEM containing 10% FBS, and centrifuged (6 min, 1000 rpm). Agarose (85 µL of 1% LMP) was added to the cell suspension, and 85 µL of this mixture was applied to the agarose-coated microscope slide. The microscope slides were immersed in a lysis buffer for 1 h, then placed in an electrophoretic tank and dipped into a cool electrophoresis solution for 40 min. Electrophoresis was run at 0.8 V/cm and 380 mA for 20 min. Finally, the slides were neutralized in buffer (0.4 M Tris, pH 7.5), stained with SYBR® Green, and immediately scored using the Comet Score software [29].
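The scoring itself reduces to a simple intensity ratio. Below is a minimal sketch of the tail-DNA computation used to classify one comet, with invented intensities; the 10% cut-off is the one applied in this work [29].

```python
def percent_tail_dna(head_intensity: float, tail_intensity: float) -> float:
    """%tail DNA = tail fluorescence / (head + tail fluorescence) * 100."""
    total = head_intensity + tail_intensity
    return 100.0 * tail_intensity / total if total > 0 else 0.0

# Hypothetical integrated SYBR Green intensities for one scored comet.
tail_pct = percent_tail_dna(head_intensity=8200.0, tail_intensity=1350.0)
verdict = "genotoxic" if tail_pct > 10.0 else "non-genotoxic"
print(f"tail DNA = {tail_pct:.2f}% -> {verdict}")
```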
Reactive Oxygen Species
Intracellular oxidative stress caused by QCDs was investigated by ROS analysis. First, MCF-7 and HeLa cells were treated with 50-400 µg/mL of QCDs and incubated for 24 h. After incubation, the growth medium containing QCDs was removed and replaced with a PBS solution (20 µL per well) containing a fluorescent ROS probe (pre-dissolved in DMSO, 500 mmol L⁻¹; General Oxidative Stress Indicator CM-H2DCFDA, Life Technologies) [8,60]. The fluorescence signal was measured with a PRO M200 microplate reader (Tecan, Austria) at excitation/emission wavelengths of 505/529 nm [8].
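ROS readings of this kind are usually reported relative to an untreated control. A small sketch of that normalization follows, with made-up triplicate well readings; the dose labels merely echo concentrations used above.

```python
import numpy as np

# Hypothetical raw CM-H2DCFDA fluorescence of triplicate wells.
control = np.array([1040.0, 980.0, 1010.0])         # untreated cells
treated = {50: np.array([1100.0, 1060.0, 1120.0]),  # dose in ug/mL
           100: np.array([1180.0, 1230.0, 1205.0]),
           250: np.array([1550.0, 1490.0, 1610.0])}

ctrl_mean = control.mean()
for dose, wells in treated.items():
    fold = wells.mean() / ctrl_mean          # fold change over control
    sd = wells.std(ddof=1) / ctrl_mean       # well-to-well spread, normalized
    print(f"{dose:>3} ug/mL: {fold:.2f} +/- {sd:.2f} fold over control")
```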
Conclusions
In this work, we observed different sensitivities of two cancer cell lines to cationic carbon dots with an average size of 7 nm, and a similarity between human cancer cells and mouse fibroblasts in the uptake of QCDs. Time-lapse microscopy of QCD-labeled MCF-7 cells showed that morphology changes and targeting of the nuclei occurred faster at a lower dose than at a higher one, but only in the 2 h measurement, which was performed immediately after the addition of QCDs into the growth medium. Viability after 24 h did not change significantly at a concentration of 100 µg/mL, but the morphology of MCF-7 cells was still deformed. From the same concentration, genotoxicity was pronounced. In the case of HeLa cells, a dose-dependent effect during the first 2 h did not occur; however, cells incubated with 200 µg/mL for 24 h were dying mainly in mitosis, probably because of weaker adherence. Genotoxicity occurred at nearly all the doses; moreover, contrasted subcellular compartments (probably mitochondria) were obvious after 24 h of incubation with 100 µg/mL of QCDs. Knowledge of the intracellular fate of carbon dots is useful for a wide range of biological and biomedical applications. This sample deformed the cellular shape, entered the nucleus, caused mitotic catastrophe, and restricted movement and proliferation, all of which can be viewed as beneficial effects in antitumor treatment.
Antimycobacterial and Nitric Oxide Production Inhibitory Activities of Ocotea notata from Brazilian Restinga
The genus Ocotea (Lauraceae) is distributed mainly in tropical and subtropical regions. Some species of this genus, such as O. puberula and O. quixos, have been described in the literature as showing antibacterial activity, and Ocotea macrophylla showed anti-inflammatory activity with inhibition of COX-1, COX-2, and LOX-5. The purpose of this study was the phytochemical investigation of the plant species Ocotea notata from Restinga Jurubatiba National Park, Macaé, RJ, Brazil, and the search for antimycobacterial fractions and compounds. The crude extract was evaluated for antimycobacterial activity and presented 95.75 ± 2.53% growth inhibition at 100 µg/mL. It was then subjected to a liquid-liquid partition and subsequently investigated chemically by HPLC, revealing a predominance of flavonoids. In this process, the hexane, ethyl acetate, and butanol partition fractions were shown to be promising in the antimycobacterial assay. In addition, the ethyl acetate fraction was chromatographed and afforded two flavonoids identified by MS and NMR as afzelin and isoquercitrin. The isolated flavonoids afzelin and isoquercitrin were evaluated for their antimycobacterial activity and for their ability to inhibit NO production by LPS-stimulated macrophages; both isoquercitrin (Acet22) and afzelin (Acet32) were able to inhibit the production of NO by macrophages. The calculated IC₅₀ of Acet22 and Acet32 was 1.03 and 0.85 µg/mL, respectively.
Introduction
Brazilian Atlantic forest areas were considered the fourth in relevance among a total of 25 hotspots worldwide. Hotspots are areas that hold an exceptional concentration of endemic plants and vertebrates experiencing exceptional loss of habitat [1].
The plant communities at the periphery of the Atlantic rainforest complex, such as restingas, differ from the core formation in that they exhibit the more extreme environmental conditions found in these systems; drought, salinity, high temperatures, and low soil nutrient contents are the main limiting factors in the open scrub habitat of the restinga vegetation [2]. Ocotea notata (Nees & Mart.) Mez, Lauraceae, is a medium-sized tree, popularly known as white cinnamon. The genus is distributed throughout tropical and subtropical regions, especially along the Brazilian coast. Ocotea species have been studied for their diversity of secondary metabolites, such as alkaloids, neolignans, lignans, terpenes, and flavonoids [3-8], and stand out for their biological activities, such as anti-inflammatory [9,10], antioxidant [11,12], antiprotozoan [13], antiallergic [14], central nervous system depressant [15], antimicrobial [11,16], and anti-herpetic [8].
The present study aims at investigating the chemical profile of the Ocotea notata ethanol extract collected in the Jurubatiba Restinga (Macaé, RJ, Brazil) through HPLC analyses and a phytochemical study. In addition, due to the previously described antimicrobial activity of O. notata, the antimycobacterial activity of the samples obtained in this chemical study and their ability to inhibit NO production by LPS-stimulated macrophages were also investigated.
Ethanol Extraction and Partitions.
Fresh leaves (1,300.00 g) were triturated and exhaustively extracted with ACS-grade ethanol at room temperature (crude extract). An aliquot (60 g) of the dried extract (78.07 g) was resuspended in methanol and partitioned with hexane to obtain the hexane fraction (26.68 g). The residual methanol phase was dried, resuspended in pure water, and partitioned sequentially with ethyl acetate and butanol, affording the ethyl acetate fraction (3.40 g) and the butanol fraction (10.50 g), respectively. The residual aqueous phase was named the aqueous fraction (13.99 g). An aliquot of the ethyl acetate fraction (1.50 g) was chromatographed on silica Kieselgel 60 silanisiert (0.063-0.200 mm) (H₂O/MeOH gradient), yielding 295 fractions. Fractions 92-96 (Acet22) and 166-168 (Acet32) were pooled based on their chromatographic similarity on TLC, revealed with NP-PEG solution (1% ethanolic diphenylboryloxy-ethylamine (NP) followed by 5% polyethylene glycol-4000 (PEG)), with 10 mL and 8 mL, respectively. By TLC analysis, it was possible to observe a single spot in these subfractions, suggesting the purity of the samples.

High-Performance Liquid Chromatography (HPLC).

Extracts, fractions, and isolated compounds were submitted to high-performance liquid chromatography using a Shimadzu Prominence LC-20. Detection was performed at fixed wavelengths of 254 nm and 332 nm. The column used was a Nucleosil 100-5 RP-18.
Quantification of Flavonoids.
Flavonoid quantification was carried out using a calibration graph with ten data points. The calibration graph for HPLC was recorded with rutin (quercetin 3-O-rutinoside) amounts ranging from 0.20 to 10.0 µg. The linearity range of the detector response was verified using a series of twofold-diluted solutions of rutin. The relationship between peak areas (detector responses) and the amount of rutin was linear over 20-1000 µg/mL (r² = 0.9999). To evaluate the repeatability of the injection integration, the rutin standard solution and the extract were injected three times and the relative standard deviation values were calculated.
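The quantification reduces to a linear calibration and a rutin-equivalent conversion. Below is a minimal sketch of that arithmetic; the slope, peak areas, and injected mass are simulated values chosen only to illustrate the calculation, not figures from this study.

```python
import numpy as np

# Hypothetical calibration: injected rutin amount (ug) vs. HPLC peak area.
rutin_ug = np.array([0.2, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.5, 8.0, 10.0])
peak_area = 5.1e4 * rutin_ug + 1.2e3          # simulated detector response

slope, intercept = np.polyfit(rutin_ug, peak_area, 1)
r2 = np.corrcoef(rutin_ug, peak_area)[0, 1] ** 2
print(f"area = {slope:.3g} * ug + {intercept:.3g}  (r^2 = {r2:.4f})")

# Total flavonoids as rutin equivalents (% w/w of the injected extract):
sum_flavonoid_areas = 7.0e4    # sum of the identified flavonoid peak areas
injected_extract_ug = 50.0     # mass of crude extract injected
rutin_equiv_ug = (sum_flavonoid_areas - intercept) / slope
print(f"flavonoid content = {100 * rutin_equiv_ug / injected_extract_ug:.2f}% w/w")
```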
Statistical Analysis.
All experiments were performed in triplicate and the results were expressed as mean ± standard deviation (M ± SD). Data were evaluated by one-way ANOVA followed by the Tukey test and considered statistically significant for p < 0.05. The IC₅₀ (the concentration able to modulate the maximum activity by 50%) of the tested samples was calculated by nonlinear regression using the results of the concentration-response curves. Microsoft Office Excel and GraphPad Prism software were used.
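As an illustration of the nonlinear regression step, the sketch below fits a four-parameter logistic model to a made-up concentration-response series (remaining NO production as % of the LPS control); GraphPad Prism performs an equivalent fit internally.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic concentration-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical means of triplicate measurements (% of stimulated control).
conc = np.array([0.2, 0.8, 4.0, 20.0, 100.0])      # ug/mL
response = np.array([95.0, 85.0, 38.0, 9.0, 3.0])  # remaining NO production

popt, _ = curve_fit(four_pl, conc, response, p0=[0.0, 100.0, 2.0, 1.0])
bottom, top, ic50, hill = popt
print(f"IC50 = {ic50:.2f} ug/mL (Hill slope = {hill:.2f})")
```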
Results and Discussion
Ocotea notata ethanol extract (crude extract) was analyzed by reversed-phase HPLC-DAD to study its chemical profile. A suitable methodology was developed and five major peaks were identified, with retention times of 35.57, 39.13, 39.88, 41.22, and 44.25 minutes (Figure 1). The UV spectrum of each peak revealed a flavonoid absorption profile (typical λmax of 251-271 and 335-350 nm) [20]. Given the predominance of flavonoids in the sample, they were quantified based on an area × µg calibration curve obtained using a rutin external standard. The sum of all identified peaks in the chromatogram was assumed to represent the total flavonoid content of the extract, expressed as rutin equivalents, as a percentage (w/w, g/100 g) of crude extract. For this purpose, the crude extract was analyzed in triplicate, resulting in a flavonoid content of 2.71 ± 0.16% w/w. The crude extract was then assessed to verify its antimycobacterial activity. Antimycobacterial activity was evaluated on a Mycobacterium bovis BCG strain; this strain shows a very similar genetic profile to M. tuberculosis [21]. The Ocotea notata crude extract exhibited, at a concentration of 20 µg/mL, 73.63 ± 1.86% mycobacterial growth inhibition and only 26.40 ± 1.50% cytotoxicity (Figures 2(a) and 2(b)). At a concentration of 100 µg/mL, it showed an inhibition of 95.75 ± 2.53%, but it was toxic when evaluated in RAW 264.7 macrophage culture (Figures 2(a) and 2(b)). The inhibitory capacity of the extract was compared to rifampicin, a drug tested at different concentrations and used as a positive control. Tuberculosis (TB) is one of the leading causes of mortality worldwide, and its etiologic agents are Mycobacterium tuberculosis bacilli but also M. bovis, M. africanum, and M. microti [22]. Given the promising activity observed for the crude extract, it was fractionated by liquid-liquid partition, affording four fractions of different polarities: hexane, ethyl acetate, butanol, and water. The fraction with the best performance in inhibiting mycobacterial growth was the hexane fraction, which showed activity even at low concentrations and was toxic for macrophages only at the highest concentration. This finding suggested selectivity for antimycobacterial activity without cytotoxicity to macrophages at 0.8, 4, and 20 µg/mL. This fraction is the most apolar, and fractions of this kind are usually composed mainly of terpenes, sterols, and fatty acids [23]. The hexane fraction was followed by the ethyl acetate fraction, which was second in inhibition of mycobacterial growth.
The inhibitory activity of the ethyl acetate fraction at concentrations of 0.8, 4, 20, and 100 µg/mL was 43.63 ± 1.06, 57.75 ± 0.46, 83.38 ± 3.54, and 80.75 ± 1.15%, respectively. When cytotoxicity to macrophages was evaluated, however, the ethyl acetate fraction showed low toxicity compared to the hexane fraction at the highest concentration (Figure 2). The same was observed for the butanol fraction (Figure 2). According to Moresco and Brighente [24], fractions such as the ethyl acetate, butanol, and aqueous fractions are rich in phenolic compounds, which can be explained by the polarity of these substances. Comparing the demonstrated results, it was noticed that the butanol and ethyl acetate fractions showed an excellent inhibitory effect and lower cytotoxicity, especially the latter, so these polar fractions were investigated by HPLC to identify the chemical constituents responsible for this activity. The HPLC profiles of the polar fractions pointed to the presence of secondary metabolites such as flavonoids (Figures 3(a) and 3(b)). The butanol fraction presented two major peaks, at 21.94 and 34.29 min at 254 nm, the second one with a UV profile characteristic of flavonoids. The ethyl acetate fraction showed a complex profile with four major peaks that were identified by UV as flavonoids (36.53, 37.55, 39.16, and 42.48 min). The ethyl acetate fraction was chosen for fractionation, although the butanol and hexane fractions were also set aside for future investigations. To compare the total flavonoid content of the ethyl acetate fraction with that of the crude extract, as reported above, this fraction was analyzed in triplicate, resulting in a flavonoid content of 37.3 ± 1.5% w/w, a rate over fifteen times higher than that found in the crude extract.
Reversed-phase chromatography of the ethyl acetate fraction afforded two isolated flavonoids, codified as Acet22 and Acet32. These two flavonoids were analyzed by HPLC and their purity was confirmed. One- and two-dimensional ¹H and ¹³C NMR analyses of Acet22 allowed its characterization as isoquercitrin (quercetin 3-O-β-D-glucopyranoside) (Figure 4); the NMR data are in accordance with the literature [25]. The flavonoid Acet32 was analyzed by NMR and MS and characterized as afzelin (Figure 4), in accordance with the literature [25].
As can be seen, the isolated compounds isoquercitrin (Figures 5(a) and 5(b)) and afzelin (Figures 6(a) and 6(b)) showed no antimycobacterial activity and moderate cytotoxicity. In the literature there are few reports of flavonoids with antimycobacterial activity; yet some, especially less polar ones such as chalcones [26] and prenylated flavones [27], can be found. However, isoquercitrin and afzelin are glycosylated flavonoids with considerable hydrophilicity, which hampers the permeation of these substances through the lipophilic bacterial wall.
For the genus Ocotea there are few records of flavonoid isolation, the main secondary metabolites being alkaloids, lignans, and terpenoids. Flavonoids are divided into classes according to their chemical and biosynthetic characteristics and have numerous pharmacological and biochemical effects [28]. There is only one report on the isolation of the flavonoid isoquercitrin from O. notata, in addition to a proanthocyanidin trimer and the flavonoids quercitrin and reynoutrin [8]. There are also reports of isoquercitrin isolation from O. corymbosa [29]. No data describing the antimicrobial activity of isoquercitrin were found.
Funasaki [30] reported a phytochemical study of O. catharinensis leaves, describing the isolation of the glycosylated flavonoid afzelin, the same compound isolated and described in this study for O. notata. No antimicrobial activity has been described for afzelin in the literature, but it presents antinociceptive and anti-inflammatory activities [31], as well as a strong neuroprotective effect and antioxidant activity [32].
In addition, considering that the flavonoids isolated from Ocotea notata, isoquercitrin and afzelin, do not show significant antimycobacterial activity, as demonstrated in the present study, they were evaluated for their capacity to inhibit NO production by LPS-stimulated macrophages. Nitric oxide is a chemical mediator with microbicidal activity that is produced by activated phagocytes during inflammation [33]. Inflammation is strongly involved in the pathogenesis of most infectious diseases, including tuberculosis [34]. In general, the production of proinflammatory mediators by infected macrophages, such as IL-1β, TNF-α, and NO, is essential for protection against mycobacteria [35]. However, the tissue concentrations of NO required for microbicidal action are toxic to the host cells and must be tightly regulated [33]. In the most severe forms of TB, additional anti-inflammatory therapy to prevent excessive inflammation may be required [35]. In a report about the treatment of TB together with anti-inflammatory drugs, it was demonstrated that corticosteroids can be effective in reducing mortality for all forms of TB [36]. The benefits of anti-inflammatory treatment have also been demonstrated in some TB cases using nonsteroidal anti-inflammatory drugs (NSAIDs). Results with infected animals treated with ibuprofen (an anti-inflammatory drug) showed a decrease in pulmonary infiltrates and bacterial load, and increased survival compared to untreated animals [37].
As can be seen in Figure 7, the crude extract and both flavonoids, isoquercitrin (Acet22) and afzelin (Acet32), were capable of inhibiting NO production by macrophages, with p < 0.001 at concentrations of 0.8, 4, 20, and 100 µg/mL, when compared with the positive control (LPS-stimulated RAW 264.7 macrophages). The calculated IC₅₀ of the crude extract, Acet22, and Acet32 was 3.24, 1.03, and 0.85 µg/mL, respectively. Although the inhibitory effect on NO production was slightly associated with moderate cytotoxicity (Figures 5(b) and 6(b)), especially for the isolated flavonoids, the capacity to inhibit NO production was higher than the cytotoxicity. For example, at 20 µg/mL, for all tested samples, the NO inhibitory activity was greater than 90%, while cytotoxicity was between 20 and 40%.
Conclusion
The present study reported for the first time the antimycobacterial and NO production inhibitory activities of O. notata extract. The findings from this study reveal the potential of the O. notata extract and fractions to afford bioactive compounds, and suggest that the isolated afzelin and isoquercitrin do not contribute to the antimycobacterial activity of the ethyl acetate fraction. However, these compounds were able to significantly suppress LPS-stimulated NO production in RAW 264.7 macrophages.
Pretreatment of Copper Sulphide Ores Prior to Heap Leaching: A Review
Although the main cause of hydrometallurgical plant closures is the depletion of oxidized copper mineral reserves, the lack of new hydrometallurgy projects also contributes to these closures. One solution is to process copper sulphide ores hydrometallurgically. However, it is widely known that sulphide copper ores, and chalcopyrite in particular, have very slow dissolution kinetics in traditional leaching systems. An alternative to improve the extraction of copper from sulphide ores is the use of a pretreatment process. Several investigations have evaluated the effects of pretreatment, mainly on the extraction of copper from chalcopyrite in chloride media. This study presents a review of various pretreatment methods prior to heap leaching that aid the dissolution of copper from sulphide ores. Different pretreatment variables that affect copper extraction were identified, including the type of salts used in agglomeration, curing time, and curing temperature. Successful cases such as the implementation of the CuproChlor® process (use of calcium chloride), and various pilot studies using sodium chloride and temperature, show that pretreatment is an alternative that aids the dissolution of copper from sulphide ores.
Introduction
Chalcopyrite represents approximately 70% of the world's copper mineral reserves. This mineral is still one of the most refractory when treated by hydrometallurgical methods. For this reason, since the 19th century, 70% of world production has been generated through conventional processes of concentration by flotation followed by smelting [1]. In Chile, 40% of the copper produced goes through smelters, and the rest is marketed as concentrate for later smelting. Due to technological backwardness and a lack of innovation, the concentration-smelting process requires high water consumption for mineral concentration, as well as high energy consumption. It also causes environmental problems due to the continuous generation of air pollution and sulphur dioxide (SO₂) emissions [2,3]. Despite this, the projections made by the Chilean copper commission (Cochilco) [4] forecast an increase in the production of refined copper by this route, reaching 6.24 million tonnes by 2029 and increasing the associated environmental problems. The authors of [5] reported that the use of continental waters reached 12.45 m³/s, with flotation plants accounting for 64% of this amount. Without a doubt, environmental regulations protecting the atmosphere and the consumption and contamination of water will soon be much stricter; therefore, the treatment of these minerals by this route will be restricted. The other 30% of world copper production is carried out mainly by processing oxidized copper ores by hydrometallurgical means. The depletion of these mineral resources is of great concern, causing the closure of various hydrometallurgical plants, which will be less and less active in the country's productive matrix. Even though the main cause of the closure of the hydrometallurgical plants is the depletion of oxidized copper mineral resources, the lack of new hydrometallurgy projects also contributes to these closures. The projections made by [6] indicate that by the year 2031, there will be an unused installed capacity in hydrometallurgical plants of around 2,500 kilotonnes. The heap leaching technique is the most widely used at the industrial level among the hydrometallurgical processes for oxidized copper ores. Its widespread use is due to its simplicity and short residence times, where it is generally only necessary to reduce the size of the ore in crushers, without the need for milling [7-9].
Various investigations have been carried out on treating copper sulphide ores by hydrometallurgical processes, in search of new alternatives to feed idle hydrometallurgical plants in the future [10,11]. The main problem has been the refractoriness of sulphide ores, especially chalcopyrite. Leaching of chalcopyrite was attempted via extremely fine particle sizes, high temperatures, and ambient or higher pressures, but unfortunately, none of these techniques could be carried out at the industrial level [12]. Other methods are based on bacterial leaching, where microorganisms oxidize ferrous ions to ferric ions, which are then capable of oxidizing the mineral [13]. Unfortunately, this methodology is very slow and is only profitable for low-grade secondary sulphide ores.
In 2010, Nicol et al. published numerous scientific articles dealing with the slow dissolution kinetics of chalcopyrite in acid-chloride solutions at moderate temperatures (20-35 °C). The laboratory-scale results of these researchers were promising, showing an improvement in copper extraction [14-16]. Some mining plants have used these acid-chloride solutions to treat sulphide ores, using the heap leaching technique with some modifications. The main improvements were to agglomerate the sulphide ore with acid-chloride solutions, with prolonged curing times as pretreatment, followed by heap leaching [9].
The pretreatment method prior to leaching has been investigated in recent years, and the information available is scarce [17]. Pretreatment is conducted in stages, such as agglomeration and acid curing, which are fundamental for improving leaching. Some of the benefits obtained by performing a correct pretreatment process include shorter leaching cycles, greater extraction of metals, lower operating costs, and lower acid consumption, which in some cases leads to the use of simpler, cleaner technologies [17-19]. One of the pretreatments used is chemical, which shows favourable results and is also used in the treatment of gold minerals [18,20]. Regarding studies of pretreatment in agglomeration and curing, several studies carried out over the last 5 years stand out, such as [17,21-25]. Most of these studies proposed the addition of salts (both dissolved and solid), mainly to incorporate chloride into the system. The curing time, or resting time, is reported as one of the most important variables aiding the dissolution of copper.
Although the most studied pretreatment process is agglomeration and acid curing, there are other proposals prior to these stages. The authors Moravvej et al. [26] studied the effects of microwave irradiation as a pretreatment for sulphide and oxidized copper ores via leaching tests in shaking flasks, followed by a conventional leaching process. They obtained a copper extraction of 6.05% for sulphide copper ores without pretreatment and 8.17% with pretreatment, while for copper oxide minerals, a copper extraction of 84.81% was obtained without pretreatment and 93.74% with pretreatment, in addition to a decrease in sulphuric acid consumption of 28.8% for copper sulphides and 10.5% for copper oxides.
There are also pretreatment proposals outside copper metallurgy. Such is the case of the study carried out by Qiu et al. [27], which consists of a cyanidation process with a pretreatment of adding pyrite to achieve a greater extraction of silver. In this experiment, the authors carried out a pretreatment with pyrite as a reducing agent, achieving an increase in silver extraction of 43.92%. Another study, by Mesa and Lapidus [28], proposed a pretreatment with sodium hydroxide at room temperature for the extraction of gold from refractory arsenopyrite. The authors varied the sodium hydroxide concentration, the concentration of solids, and the treatment time, obtaining a gold extraction of 29% without pretreatment (thiosulphate leaching). With the use of pretreatment, a gold extraction of 81% was achieved, demonstrating the effectiveness of the process. Other research, conducted by Chen et al. [29], proposed a pretreatment with O₂, H₂, and CO for an autocatalyst sample for the extraction of rhodium (Rh). A 56% extraction of Rh was achieved without any pretreatment, which increased to 82% with pretreatment.
An extensive review of the literature directly and indirectly related to the use of pretreatments before the leaching of copper sulphide ores is presented in this paper. This study identified a gap in the literature, mainly due to the large number of investigations focused on the leaching of sulphide ores and the lack of research on previous stages or pretreatments, such as agglomeration and curing. For this reason, this investigation shows that pretreatments are a key stage leading to an increase in the dissolution of copper sulphide ores in heap leaching processes.
The Current Situation of Hydrometallurgy in Chile
The first references to hydrometallurgical processes indicate that they were used prior to the 16th century, but it was not until the early 20th century that they took an approach different from the one that had been used to treat copper ores. The heap leaching process was used on a large scale in Chile, where copper oxide ores were treated with sulphuric acid, and copper sulphides were solubilized with the help of ferric ions, which acted as an oxidizing agent. In Chile, to obtain copper from leaching solutions, electrowinning (EW) was used (and is still in use today), as opposed to precipitation by means of scrap iron [30].
A challenge faced by hydrometallurgical activity in Chile is the closure of its (heap leaching) plants, such as the case of El Salvador (Codelco), announced in 2013. This closure was attributed to the lack of reserves (oxidized minerals) and high costs. However, it did not transpire, as different alternatives were sought to keep the operations running [31]. On the other hand, according to [32], the plants that would not close their operations only represent 12% of the cathodes that will be produced by the year 2027; they are the following:
• Encuentro óxidos: This mining site will make use of the Tesoro plant (heap leaching) when its mineral reserves run out. It should be noted that this mining company is adjacent to Centinela Oxides [1].
• Escondida: According to the projections, its oxide resources will be exhausted by 2025, but it would still have the run-of-mine (ROM), which corresponds to the low-grade minerals (dump leaching) that were extracted and sent to a large heap. These extracted minerals were not previously crushed, and will continue to supply the solvent extraction and electrowinning plants [1].
• Chuquicamata óxidos: The run-of-mine of this mining company could deliver solutions to the solvent extraction and electrowinning plants of Chuquicamata. However, cathode production will be affected, and is estimated to decrease compared to that obtained in 2014 through the hydrometallurgical pathway [1].
Another project that would save the Chuquicamata mining operations is the retreatment of leaching residues and artificial resources (dump leaching), consisting of leachable copper-bearing materials. These come from a stock generated from the heap mineral treatment plant (PTMP), the solvent extraction plant, and the oxide electrowinning plant, all obtainable resources to continue operations. It is under consideration to increase the treated tonnage from 30,000 t/day to 45,000 t/day, for which 70% of the material from Mina Sur and 30% from Chuquicamata would be used, providing a total of 145 million tonnes. With this, a life extension of nine years is expected [33].
Hydrometallurgical production in Chile will suffer a decrease in its share of total copper production: in 2019 it had a 27.3% share (1,580,000 tonnes), while in 2031 an 8.1% share is expected (578,000 tonnes) [6]. Looking at the operations of the hydrometallurgical plants at present and in the future, there are currently 31 plants actively operating, while in 2030, 19 will remain operational, of which 5 belong to Enami, 8 to large mining companies, and 6 to medium-sized mining companies [34].
Among the projects under consideration for maintaining the hydrometallurgical route, Centinela has one consisting of small projects so that its oxide plant remains operational (2026-2040). Other active companies will undertake projects to remain viable in the coming years, such as Planta Nora (2020-2035), Diego de Almagro óxidos (2021-2031), Rayrock, which will resume operations (2021-2035), and Producción óxidos.
General Aspects of Copper Sulphide Leaching
Copper sulphide leaching is highly dependent on the redox conditions of the system and the addition of oxidizing agents. In some cases, temperature and pressure conditions are needed to favour the process. The most commonly used oxidizing agents are oxygen, ferric ions (Fe³⁺), nitric acid (HNO₃), concentrated sulphuric acid (H₂SO₄), and cupric ions (Cu²⁺) [35].
The relative kinetics of different copper minerals depend on the copper mineral species: carbonates, sulphates, and chlorides have very fast leaching kinetics at room temperature; cupric oxides and silicates have fast kinetics, but require higher acidity; native copper, cuprous oxides, and some silicates and complex oxides with manganese have moderate kinetics, and require an oxidant; simple and complex sulphides have slow/very slow kinetics, and require an oxidant [36].
There are different leaching media used to leach chalcopyrite; among the most common are sulphates, chlorides, nitrates, ammonia, and bacteria [37]. Leaching with ammonia has the peculiarity that huge amounts of ammonia must be used, due to the generation of ammonium sulphate ((NH₄)₂SO₄), which must later be decomposed in order to recover the ammonia reagent [38]; this process is carried out at a temperature between 75 and 80 °C, with rapid stirring, in the presence of oxygen, and at low pressure [39]. According to [40], Reactions (1) and (2) are those that govern the process.

Nitric acid, being an effective oxidizing agent, is able to dissolve various sulphide minerals with acceptable kinetics. However, this reagent has a high cost, which has a negative economic impact and hinders the viability of the process [41]. The use of nitric acid as an oxidizing agent is more effective in the presence of NO⁺; adding NO₂⁻ accelerates the generation of NO⁺, thereby allowing the oxidation of the sulphide minerals at low temperatures and forming elemental sulphur [41-43]. According to [43], Reactions (3) and (4) are the possible reactions that govern the process.

An unconventional alternative is the use of chloride media, which offers significant advantages, such as high metal solubility and better leaching rates. One of the advantages of working with chloride ions is that the chemical activity of the proton is increased; the activity coefficient of chloride salts is generally significantly higher than the values for the corresponding sulphate salts. Hydrometallurgical processes in chloride media take advantage of the oxidizing capacity of Fe³⁺ and Cu²⁺ ions for the oxidation of sulphide ores to elemental sulphur, as well as the high stability of the metal chloride complexes in solution [44-47]. Studies suggest that both ions participate in the oxidation reactions; however, the leaching capacity of cupric ions is greater than that of ferric ions, since cupric ions tend to regenerate more easily in the presence of oxygen, while ferric ions form strong complexes with chloride ions. Additionally, high concentrations of chloride in solution make Cu⁺ ions thermodynamically stable, so that Cu²⁺ ions are available for the dissolution reactions to occur.
Several studies were conducted in search of the medium that best favours the dissolution of chalcopyrite. It was determined that the dissolution kinetics in chloride media are the highest compared to those in sulphate media. This can be explained by the formation of a copper-chloride complex, which is able to stabilize Cu⁺ ions in strong complexes, thereby allowing the cupric ions to act as oxidants of chalcopyrite [48,49].
The bioleaching of minerals is another treatment available for the exploitation of copper sulphide ores. This leaching process has the advantages of not incurring high water usage or large operating and production costs, in addition to emitting low levels of pollutants into the environment. Another benefit of bioleaching is that it extends the use of solvent extraction and electrowinning plants [12,50]. The first steps of this technology at a commercial level were taken at Minera Pudahuel in Chile. With the passing of time, new improvements in operations based on this option emerged, revealing the profitability and functionality of this process [51].
The main role of microorganisms in the process of mineral bioleaching is to oxidize ferrous ions and sulphur compounds. The authors of [52] proposed the following reactions, where Reaction (5) corresponds to the oxidation of ferrous ions by the action of iron-oxidizing microorganisms, and Reaction (6) to the oxidation of sulphur compounds, which become sulphates through the interaction of sulphur-oxidizing microorganisms:

4Fe²⁺ + O₂ + 4H⁺ → 4Fe³⁺ + 2H₂O (5)

2S⁰ + 3O₂ + 2H₂O → 2H₂SO₄ (6)

The microorganisms present in bioleaching can be classified using temperature ranges. The most prominent are moderate thermophiles, with optimal temperatures from 40 to 55 °C; extreme thermophiles, with optimal temperatures higher than 55 °C; and, finally, mesophilic microorganisms, which have an optimal temperature below 40 °C [53].
It should be noted that using thermophilic microorganisms presents an advantage in bioleaching, as an increase in temperature increases the rate of the mineral oxidation reactions [54].
Mechanical Operations
Comminution is one of the first pretreatments performed on copper minerals before dissolution, whether they are oxides or sulphides. It is one of the most important steps in mining operations, consisting of reducing the size of the rocks by mechanically fracturing and grinding the minerals into smaller fragments.
The objective of comminution is to free the mineral of interest from the gangue, increase the surface area of the particles, and produce particles of adequate size for the subsequent processes, such as leaching and transportation of the mineral. This operation is carried out continuously in different stages: primary crushing, followed by secondary/tertiary crushing and, finally, the grinding stage [55]. It should be noted that traditional heap leaching processes do not require a milling stage. However, the comminution or crushing operations present a challenge: energy expenditure. Regardless of whether crushing is carried out under or above ground, this operation continues to consume the most energy [56]. As reported by [56], mineral crushing operations account for approximately 2-3% of energy consumption worldwide.
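To give a feel for the magnitudes involved, the sketch below estimates the specific crushing energy with Bond's third theory, a standard comminution model that is not taken from this review; the work index and the feed/product sizes are illustrative values only (the 19 mm product echoes the 0.75-inch heap-leach feed size mentioned later in this paper).

```python
import math

def bond_specific_energy(work_index_kwh_t: float, f80_um: float, p80_um: float) -> float:
    """Bond's third theory: W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)) in kWh/t,
    with F80/P80 the 80%-passing sizes of feed and product in micrometres."""
    return 10.0 * work_index_kwh_t * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))

# Illustrative tertiary-crushing duty for a copper ore (assumed Wi = 13 kWh/t).
w = bond_specific_energy(work_index_kwh_t=13.0, f80_um=150_000.0, p80_um=19_000.0)
print(f"specific crushing energy ~ {w:.2f} kWh/t")
```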
One solution to reduce energy consumption is the use of high-pressure grinding rolls (HPGR), which have significantly lower energy consumption. This type of equipment is used to achieve the liberation of diamonds, for the preparation of iron ore, for granulation and, in recent times, in the mining treatment of hard rocks, such as gold, platinum, and copper ores [57].
As reported by [58], the use of HPGR technology in copper leaching using columns provides greater liberation of sulphides at the grain boundaries, greater fracturing of the rock matrix, better accessibility of scattered, fine-grained sulphides located in fractures and, finally, an increase in the leaching kinetics with respect to conventional comminution. In certain cases, HPGR grinding can increase copper extraction by 2-10% in a grinding circuit.
Another treatment studied to reduce energy consumption in crushing operations is the use of microwaves, which has been investigated for several decades [59]. The use of microwaves on minerals produces mineralogical changes that are favourable for mineral processing, especially for lateritic mineral processing.
The main objective of the use of microwaves is to generate cracks through thermal stresses. In order to fracture the mineral, it is subjected to differential thermal expansion, where the minerals within the rock are exposed to cycles of increasing temperature and subsequent cooling, resulting in the fracture of the mineral. Fracturing also occurs due to the production of internal stresses within the mineral particles [60].
Depending on the type of rock treated, the response to microwave irradiation will differ, due especially to the dielectric properties of the minerals, which generate this differential heating behaviour [61].
The main cause of dielectric materials heating up is the variation of the electric field, which is the dominant mechanism in microwave heating. Another mechanism by which a microwave thermal process can proceed is ionic conduction [62].
As reported by [63], the use of microwaves was studied during the 1980s in the United States, testing the dielectric and low-power heating properties of common ore minerals. Studies showed that sulphates, micas, aluminosilicates, and carbonates exhibited minor heating. On the other hand, sulphides and metallic oxides were easily heated when exposed to microwave energy.
As previously mentioned, the response to microwave irradiation depends on both the chemical and physical properties of the mineral to be treated. In the case of absorbent minerals, the microwaves penetrate directly into the interior, whereas for transmitting minerals, the microwaves are reflected and cannot enter the interior of the mineral [26].
Once the microwave pretreatment is complete, it can lead to lower grinding energy consumption, in addition to achieving a better extraction of valuable metals [64]. However, energy greater than 10 kWh/t is generally required at low power density, which means that no energy savings are generated during the crushing operations. Furthermore, the residence time is greater than 1 s, which prevents implementation in an operation handling thousands of tonnes per hour, as required in the mining industry [63]. As reported by [9], a high treatment capacity would require working with several generators in parallel, which would not generate an economic benefit with respect to a conventional process. On the other hand, if minerals with high-value products or lower tonnages were processed, the use of microwaves could be an alternative to traditional processes [9].
Agglomeration and Curing
One of the most widely used pretreatments to increase copper extraction in the heap leaching process is agglomeration and curing, which is performed prior to the heap leaching stage [19]. This is done in the first instance to avoid permeability problems in the heap; however, it was also determined that this stage can improve mineral dissolution. Additionally, proper handling of the agglomeration and curing process can determine the success of the overall heap leach operation. However, there is no fixed procedure for agglomeration and curing; rather, it is based on experience [65].
The agglomeration process consists of smaller particles adhering to larger ones, forming a glomer, which results in the formation of openings in the leach heap. It is therefore essential that the agglomerated minerals have good permeability to gases and liquids [66]. On the other hand, if the mineral is not subjected to an agglomeration process, there is the possibility that small particles will mix with the leaching solution, covering the flow channels and pores, causing the permeability to decrease, which leads to low dissolution of the mineral [65].
The agglomeration process is one of the methods to obtain good recovery in a heap leach; if inappropriate agglomeration takes place, it may be one of the main causes of low extraction [67]. Inappropriate agglomeration results from an erroneous quantity of moisture added during agglomeration, which means that the fine mineral particles do not adhere to the large mineral particles, due to either a lack or an excess of moisture. Another consequence of performing this process incorrectly is that the glomers exhibit poor mechanical resistance, causing them to break when being transported to the leaching heap and leading to segregation of particle sizes in the heap [68]. On the other hand, the benefits of performing this process correctly are higher dissolution rates, that is, shorter leaching cycles, as well as improved heap conditions and structure, because channelling is minimized, improving the permeability and availability of reagents [17,19]. When the mineral to be treated contains large amounts of clay, when large amounts of fines are produced in the crushing process, or when a mineral is crushed to a size of 0.75 inches (19 mm) or finer, an agglomeration process will be required [67].
On the other hand, the use of binders was investigated as a complement to agglomeration. Since the particles are not bound together with great force, the glomers can disintegrate, causing a migration of fines. Binders are a potential solution to this problem, since they help form a more stable, strong, and disintegration-resistant glomer. For a binder to be effective, it must withstand the acidic environment present in heap leach operations, as well as having a strong affinity for mineral particle surfaces. The binder used should not affect the leaching chemistry or subsequent processes [69,70].
It is for this reason that binders must be able to create chemical bonds in order to obtain a stable glomer. Different studies were conducted on the use of lime, weeds, and wood fibres, but the results were not satisfactory; the glomers formed with these binders completely disintegrated when immersed in water for a couple of hours [71]. The use of Portland cement as a binder for gold and silver ores provides better resistance in the formed glomer, because calcium silicate hydrates are formed during the curing process; however, these glomers partially or entirely disintegrate on drying when less than 50 kg/t of cement is used [66].
The choice of binder must be based on the mineral to be treated and the conditions of the desired product [72]. Binders can be classified into different types, such as polymeric, organic, or inorganic. In the case of precious metal heap leaching under alkaline conditions, Portland cement is used, while for acidic conditions, diluted or concentrated sulphuric acid is widely used [67].
According to [73], organic binders such as modified cellulose and lignin were chosen because they are difficult to degrade. Modified cellulose is a hydrophilic component, which allows it to absorb water, retaining part of the leaching solution in contact with it and allowing it to remain attached to the surface of the mineral. Other organic binders, such as gelatine, agar, sodium carboxymethyl cellulose, gums, and starch, proved to be inefficient under acidic conditions. Inorganic binders such as sodium silicate were chosen because their reaction with an acidic medium produces a silica gel that can act as a binder. Other inorganic binders, such as calcium sulphate, iron(II) sulphate, and sodium tripolyphosphate, were also tested under acidic conditions, with poor results. Polymeric binders were shown to resist degradation by acid solutions; they have the ability to bind hydrogen ions, which are adsorbed on the surfaces of minerals [73].
Curing, in turn, is intended to make the mineral interact with the leaching solution early on, causing the mineral to form a sulphate (copper sulphate). The generation of this sulphate benefits the leaching process, due to its high solubility. Another effect of curing is that it can reduce the passage of silica into the leaching solution, because it solubilizes iron, generating ferric ions that in turn dissolve sulphides, avoiding the formation of colloidal silica [17]. As the acid-curing process proceeds, some of the already-dissolved components react again and precipitate, creating a better bond with the mineral [74].
Acid curing dehydrates aluminium silicate minerals by partially eliminating the monolayer of hydroxide that covers these silicates, causing the surface to become insoluble and hydrophobic in aqueous solutions. In addition, it homogenizes the distribution of the acid in the mineral bed and generates greater porosity in the bed, improving permeability [75]. The curing time varies depending on the mineral being treated, and can be short (8-24 h) or long (1-15 days, or even longer).
The curing effect is beneficial when performing a pretreatment for copper sulphide ores [76]. The authors of [77], who carried out a pretreatment process with sodium chloride, sodium nitrate, and sulphuric acid and a curing time of 3 days, report that a 64.7% copper extraction was obtained, in contrast to the test without pretreatment, where 26.8% copper extraction was obtained under the same operating conditions. Among studies evaluating the effect of pretreatment, [65] stands out. In that investigation, tests were carried out with long curing times of more than 48 days and the addition of 35 kg/t of sodium chloride directly to the agglomerate; the authors note that the dissolution of copper sulphides (mainly chalcopyrite) was enhanced. Another investigation [21] focused on the effect of temperature on curing time; the authors evaluated a curing temperature of 50 °C for copper sulphide minerals, mainly chalcopyrite and bornite, obtaining an improvement in extraction. Research by [78] investigated the use of sodium chloride, sulphuric acid, and ferrous sulphate in the agglomeration and curing stage. The authors of [78] report that the use of these reagents in the agglomeration and curing stage improves the extraction of copper compared to agglomeration without the ferrous sulphate reagent, using 0.6, 0.53, and 0.5 kg per 50 kg of mineral, respectively, and allowing the ore to cure for 14 days at a temperature of 32.9 °C. The effect of curing on the leaching of exotic oxidized copper minerals was also studied [79], evaluating the effects of the curing time and the sodium chloride concentration; the authors concluded that both variables influence the responses and, furthermore, that long curing times favour the reduction of MnO₂, which increased the dissolution of copper in the system.
Agglomeration with Calcium Chloride
CuproChlor® is a process created for the leaching of copper sulphides; initially, it was intended for secondary copper sulphides. The process provides a chloride medium for leaching, generated by the addition of calcium chloride in the agglomeration stage. The process consists of four stages: agglomeration, resting or curing, leaching with a recirculating solution, and washing with a raffinate solution.
In 2004, this process was patented by Minera Michilla, by the authors Abraham Backit Gutierrez, Jaime Rauld Faine, Raul Montealegre Jullian, and Freddy Aroca Alfaro. The process consists of an agglomeration stage and a curing time, during which sulphuric acid, water, and calcium chloride are added and react to form calcium sulphate, or gypsum [80], as can be seen in Reaction (7):

CaCl₂ + H₂SO₄ + 2H₂O = CaSO₄·2H₂O + 2HCl (7)

The generation of CaSO₄·2H₂O allows it to act as a solid bridge for agglomeration. The addition of calcium chloride also helps the stability of the glomers and, therefore, of the bed; its hydrodynamic properties are improved, such as its liquid and gas permeability, its hydraulic conductivity and, finally, its porosity. These properties are related to the kinetics and equilibrium of oxygen transport from the gas to the liquid phase. In addition, this produces a change in the solutions, which go from a sulphate medium to a chloride medium, generating an improvement in the kinetics of the reactions in the presence of cupric ions [81]. In the curing stage, a large part of the copper and other species is solubilized. The dissolution mechanism of the CuproChlor® process for copper sulphide ores is based on the leaching of sulphides in chloride media. Thus, the ferric ions present in the agglomeration stage, generated by the interaction of the sulphide minerals with sulphuric acid, react by leaching the copper sulphide minerals. The minerals that react are mainly chalcocite and covellite, as observed in Reactions (8) and (9), respectively [80]:

Chalcocite: Cu₂S + 2Fe³⁺ = CuS + Cu²⁺ + 2Fe²⁺ (8)

Covellite: CuS + 2Fe³⁺ = Cu²⁺ + S⁰ + 2Fe²⁺ (9)

A higher concentration of chlorides allows for the formation of oxidizing conditions that lead to rapid solubilisation of the copper sulphides during acid curing, and therefore to an improvement in copper extraction during leaching. The oxidation-reduction mechanism is based on Cl and Cu ions, since these allow for the formation of chloro-cupric and chloro-cuprous complexes, which interact with ferric and ferrous ions [41].
Later, during leaching, the solutions are required to contain iron, copper, and chlorides, especially the latter. The regeneration of the ferric ions occurs thanks to the cupric ions, which come from the leaching of copper minerals. Due to the presence of oxygen, cuprous ions are oxidized, leading to the formation of ferric ions. This is a self-catalytic process, and the reaction continues until the sulphuric acid or oxygen is depleted [82].
To eliminate possible copper precipitates, acid must be present in the reactions, and it is thus added to the recirculating intermediate leaching solution. Furthermore, this acid allows the oxidation reactions to continue. Greater attention should be paid to the washing stage of the loaded organic phase before it is discharged, since an increase in chlorides in the solution can be transferred to the electrowinning stage, generating problems in the process [41].
Adding calcium chloride in the agglomeration stage improves the hydrodynamic properties, as previously mentioned. Thanks to these improvements, the passage of oxygen into the heap results in a more efficient irrigation stage. To verify this, different experiments were carried out on a semi-industrial scale (1,000 tonnes) evaluating the effect of CuproChlor® [81], where copper extractions of 86-96% were obtained. Of the total tests carried out, 81% reached 90% or greater copper extraction; four were between 86 and 90%.
The working conditions under which Minera Michilla operated were: 30 kg/t of sulphuric acid and 12 kg/t of calcium chloride in the agglomerate, a chloride concentration of 90 g/L in leaching, a leaching time of 110 days, a heap height of 2.5 m, an irrigation rate of 0.32 L/min/m², and a total copper extraction of 90% [80].
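As a rough check of what these operating conditions imply, the sketch below converts the quoted irrigation rate and cycle length into a solution-to-ore ratio. The assumptions of continuous irrigation and an ore bulk density of 1.6 t/m³ are ours, not values from [80].

```python
# Rough solution balance for the Minera Michilla operating conditions quoted
# above: irrigation rate 0.32 L/min/m^2, 110-day leach cycle, 2.5 m heap.
# The ore bulk density (1.6 t/m^3) and the assumption of continuous
# irrigation are ours, not values given in the source.

RATE_L_MIN_M2 = 0.32      # irrigation rate, L per minute per m^2
CYCLE_DAYS = 110          # leaching time
HEAP_HEIGHT_M = 2.5       # heap height
BULK_DENSITY_T_M3 = 1.6   # assumed ore bulk density

litres_per_m2 = RATE_L_MIN_M2 * 60 * 24 * CYCLE_DAYS   # total solution per m^2 of heap area
ore_t_per_m2 = HEAP_HEIGHT_M * BULK_DENSITY_T_M3       # tonnes of ore under each m^2
irrigation_ratio = litres_per_m2 / ore_t_per_m2        # L of solution per tonne of ore

print(f"Solution applied: {litres_per_m2:,.0f} L/m^2")   # ~50,688 L/m^2
print(f"Ore per unit area: {ore_t_per_m2:.1f} t/m^2")    # 4.0 t/m^2
print(f"Irrigation ratio: {irrigation_ratio:,.0f} L/t")  # ~12,672 L/t
```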
The CuproChlor® process, unlike other copper sulphide leaching methods, has several advantages. One of these is the short heap leaching time, which ranges from 100 to 110 days [41]; according to the study carried out by [83], bioleaching takes approximately 167 days in column tests and 270 days in the case of a pilot plant. On the other hand, since the washing stage works correctly, the large amounts of chlorides neither degrade the organic reagents nor cause problems in the quality of the cathodes obtained. Good permeability is also obtained, both liquid and gaseous. The process has suitable stability and can form heaps of up to 6 m in height without suffering segregation problems; it can work with both fresh water and seawater, and/or in the presence of ions that prevent the proliferation of bacteria [41]. It can also operate at a temperature lower than that required by bacterial leaching, since it works at 25 °C [1], while bacterial processes require temperatures between 40 and 55 °C, or greater than 55 °C, depending on the microorganism used [50]. As the CuproChlor® process is 100% chemical, it offers easier working conditions than bacterial leaching, which requires special care; it can also work with clay or fine contents, which would not be suitable in processes where bacteria are used [41].
The Use of Other Salts
In recent years, studies have been conducted on pretreatment with various salts and on how these affect the dissolution of copper. One such study [22] investigated the use of an acid-nitrate-chloride medium. A mineral with a chemical composition of 0.70% total Cu, 0.04% soluble Cu, and 5.65% total Fe, and with an acid consumption of 33.5 kg/t of mineral, was used. The most abundant copper mineral in the sample was chalcopyrite, representing 84% of the total copper.
According to the authors, the effect of the application of potassium nitrate in the pretreatment was minimal: 9% of the copper extraction was associated with chalcanthite, which is soluble in water, and another 4% corresponded to covellite. Therefore, only an extraction greater than 13% would have indicated dissolution of the chalcopyrite in the pretreatment process, which was not achieved under these conditions. From the statistical data obtained, the curing time parameter most influenced the results, with a 54.66% contribution; sulphuric acid had a moderate contribution of 35.75%, while potassium nitrate had a low contribution of 0.61%. In the case of a pretreatment with sodium chloride, the curing time was again the parameter with the most influence on the extraction of copper, with 56.36%; sodium chloride had a moderate contribution of 23.09%, while sulphuric acid had an extremely low contribution of 1.78%.
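Percentage contributions of this kind are typically obtained from an ANOVA table as the share of each factor's sum of squares in the total. The sketch below is a minimal illustration; the sums of squares are placeholders chosen to reproduce the quoted percentages for the potassium nitrate case, not data from [22].

```python
# Factor contributions from an ANOVA table: SS_factor / SS_total * 100.
# The values below are made-up placeholders that reproduce the quoted
# percentages for the nitrate pretreatment; they are not data from the study.

sums_of_squares = {
    "curing time":       54.66,
    "sulphuric acid":    35.75,
    "potassium nitrate":  0.61,
    "residual":           8.98,  # remainder so contributions sum to 100%
}

ss_total = sum(sums_of_squares.values())
for factor, ss in sums_of_squares.items():
    print(f"{factor:>18}: {100 * ss / ss_total:5.2f} % contribution")
```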
Another study [21] focused on the effects of a sodium chloride pretreatment prior to the leaching of a copper sulphide ore. Here, a copper sulphide ore was used, which contained 1.21% chalcopyrite and 0.54% bornite, these being the minerals with the highest contribution of copper. The pretreatment process was carried out with 10 g of the mineral, to which 20 kg/t of H₂SO₄ and 65 kg/t of sea water were added, along with an addition of NaCl that varied depending on the condition tested. The mineral samples were homogenized and, to avoid loss by evaporation, were left in a covered Petri dish. The plates cured at 50 °C were placed in a muffle furnace so that the temperature did not drop below 50 °C and were left there for the necessary time, while those at 20 °C were left in an air-conditioned laboratory to maintain the temperature. After the curing time, the samples were leached in an Erlenmeyer flask. The highest copper extraction was obtained with a chloride addition of 90 kg/t, a curing time of 40 days, and a repose temperature of 50 °C. As the chloride addition increased, the copper extraction increased, as it also did with temperature and curing time.
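Reagent dosages quoted in kg/t translate directly into laboratory masses. The sketch below shows the conversion for the 10 g samples described above; the function name is ours.

```python
def dose_grams(sample_g: float, dose_kg_per_t: float) -> float:
    """Convert a reagent dosage in kg per tonne of ore to grams
    for a laboratory sample of the given mass.
    1 kg/t = 1 g/kg = 0.001 g per g of ore."""
    return sample_g * dose_kg_per_t / 1000.0

# For the 10 g samples described above:
print(dose_grams(10, 20))   # 0.2 g of H2SO4 (20 kg/t)
print(dose_grams(10, 65))   # 0.65 g of sea water (65 kg/t)
print(dose_grams(10, 90))   # 0.9 g of NaCl at the 90 kg/t chloride level
```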
Comparing the CuproChlor® process with the experiment conducted in [21], a variation of 5.14% was observed, representing the difference between the two processes. Conducting this experiment as a pilot showed that it can be an alternative treatment. However, the use of 90 kg/t of sodium chloride can present a problem in downstream processes, such as SX or electrowinning. One of the problems generated by chlorides is that the Cl⁻ ion oxidizes, forming chlorine gas and causing pitting; this problem consists in the corrosion of the cathodic surface that is not in direct contact with the rich electrolyte. Another problem that can arise from exposure to chlorides is the trapping of impurities, such as lead, on the cathode surface. An alternative to avoid these problems is to perform a wash such as the one used in the CuproChlor® process. Another important factor is the greater consumption of sodium chloride compared to the calcium chloride used in the CuproChlor® process, which may represent a greater operating cost.
Effect of the Addition of Chloride
According to the study carried out by [65], incorporating sodium chloride into the agglomerate leads to an increase in copper extraction. To confirm this, the authors compared the extraction obtained with and without the addition of chloride. However, when increasing the amount of chloride from 20 to 70 kg/t, the extraction did not undergo a significant improvement. This coincides with the findings of [21,48,49], where better results were obtained with chloride additions of 50 and 90 kg/t. This may be because, as suggested by [75], a pretreatment with chloride and acid allows the reactive agents to be distributed more homogeneously in the mineral, thus triggering the dissolution reaction earlier. In addition, soluble species are formed with greater solid-liquid interaction, and a greater range of porosity is produced, so that moisture is retained in the pores. Moreover, the addition of chlorides increased the extraction of copper in the tests conducted, which is consistent with the study conducted by [84].
The study conducted by [21] presented an extraction of 92.86%, unlike the 63% obtained in the study performed by [22] after the leaching process. This difference of 29.86% is likely due to the greater use of sodium chloride in the pretreatment process, which differed by 70.2 kg/t between the two experiments, since there was not a big difference in the curing time or the curing temperature, which differed by 10 days and 5 °C, respectively.
Effect of Nitrate Addition
The addition of sodium nitrate in the pretreatment process resulted in a positive improvement in the tests performed by [22,24]. One possible reason is that nitrates are strong oxidizing agents capable of decomposing copper sulphides, so that adding nitrates supplies a greater quantity of oxidizing ions to leach copper sulphides [85,86]. A study conducted by [87] obtained a copper extraction of 92% using nitrates with ferric chloride; it was concluded that mixing high concentrations of sodium chloride and sodium nitrate in an acid medium is more effective, which was confirmed by the study conducted by [22]. In addition, the authors of [43] note that chalcopyrite does not react in a sulphuric acid system unless an oxidizing agent is present, which is consistent with the studies mentioned above. In the case of potassium nitrate, however, the lack of improvement may be due to the non-addition of chlorides, as this was not the purpose of the experiment. When comparing the copper extraction results of the experiments performed by [22,24], a difference of 35.94% in copper extraction was observed in the pretreatment stage. This difference is due to the variables studied by each author: the curing temperatures (45 and 25 °C, respectively) and the curing times (30 and 15 days, respectively). A greater focus was given to the addition of sodium nitrate by [22], which was 21.3 kg/t. Checking the effectiveness of the addition of nitrates in the pretreatment stage, together with the aforementioned, can explain this variation in copper extraction between the two experiments. It should be noted that the additions of chlorides were 19.8 kg/t and 25 kg/t, respectively.
Effect of Curing Time
Increasing the curing time favours greater copper extraction. Moreover, in the statistical analysis carried out by [24], the curing time was the most effective variable for increasing copper extraction. This could be because a longer curing time gives the acid and chlorides more time to react with the treated mineral, or provides a longer exposure to a large ionic charge. This ionic charge is caused by sulphuric acid, sodium chloride, and sodium nitrate. In addition, the humidity of the sample must be such that it allows the reactions to occur more quickly [21]. This is consistent with the studies performed by [23], where the authors confirm that the curing time increases the extraction of copper from sulphide minerals, and by [65], who note that implementing prolonged curing times benefits heap leaching operations that treat copper sulphides; this is because it shortens the leaching time, which leads to better management of the solution/water and decreases the irrigation requirements.
Effect of Temperature on Curing
By increasing the temperature in the curing process, a better dissolution of copper is obtained. According to the authors of [78], dissolution reactions occur faster at higher temperatures, because a larger fraction of the molecules has enough energy to overcome the activation barrier of the reaction. In the study conducted by [78], the effects of 32.9 °C and 14.5 °C with a 14-day curing time were compared; once the pretreatment was finished, the mineral was leached in columns, achieving copper extractions of 78.9% and 5.9%, respectively. Another benefit of a high curing temperature is that it requires a shorter curing time, which is advantageous because the formation of copper sulphate still occurs. The evaluation of temperature as part of the pretreatment was investigated by [21]; the authors demonstrated that a maximum copper extraction of 93% was obtained when the minerals (mainly chalcopyrite) were treated with a pretreatment of 90 kg of Cl⁻/t of mineral and 40 days of curing at 50 °C, and the pretreated sample was then leached at 45 °C in shake flasks. Although an increase in temperature in the curing stage aids the dissolution of copper, successful cases at room temperature can also be identified. The study performed by [88] indicates the benefit of curing time for chalcocite/covellite minerals in leach columns. Minerals agglomerated with sulphuric acid and chloride ions and given a long curing time have been shown to improve the dissolution rate of a secondary copper sulphide ore at room temperature. The authors note that it is possible to obtain a 72% extraction of Cu when the mineral is agglomerated and cured for 50 days, without the need to increase the temperature in the curing stage. This is mainly due to mineralogy, considering that chalcocite is the easiest copper sulphide to dissolve, a situation very different from that of chalcopyrite [89].
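The temperature effect can be illustrated with the Arrhenius equation. The sketch below estimates the rate ratio between the curing temperatures compared in [78] and [21]; the activation energy of 50 kJ/mol is an assumed, order-of-magnitude value for sulphide leaching, not one taken from these studies.

```python
import math

def arrhenius_rate_ratio(t1_c: float, t2_c: float, ea_j_mol: float) -> float:
    """Ratio k(T2)/k(T1) from the Arrhenius equation k = A*exp(-Ea/RT).
    Illustrates why curing at a higher temperature accelerates dissolution."""
    R = 8.314  # gas constant, J/(mol K)
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return math.exp(-ea_j_mol / R * (1 / t2 - 1 / t1))

# 50 kJ/mol is an assumed, order-of-magnitude activation energy for
# sulphide leaching; it is not taken from the studies discussed here.
print(f"{arrhenius_rate_ratio(14.5, 32.9, 50_000):.1f}x faster")  # ~3.5x, cf. [78]
print(f"{arrhenius_rate_ratio(20.0, 50.0, 50_000):.1f}x faster")  # ~6.7x, cf. [21]
```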
Conclusions
Due to the depletion of oxidized copper ores, hydrometallurgical plants are running out of ore to process. Therefore, by dissolving the copper sulphide ores through heap leaching, it could be possible to produce copper in a sustainable way. An alternative to achieve this is by strengthening the agglomeration and curing stages prior to heap leaching.
Both physical and chemical pretreatment processes are important in the hydrometallurgical treatment of copper sulphide ores, with agglomeration and curing being of particular importance. The CuproChlor® pretreatment process (calcium chloride addition) stands out, producing a heap leaching time of 110 days, with copper extractions that vary between 86 and 96% at 25 °C.
In the reviewed studies, the addition of chlorides is favourable for the pretreatment process, increasing the extraction of copper. In the experiment performed by Cerda et al. [21], copper extractions ranging from 63.7% to 92.9% were achieved with additions of 90 kg/t of Cl⁻, while in the investigation conducted by Hernández et al. [22], an increase of 15.1% was achieved when the addition of chloride was increased from 2.1 kg/t to 19.8 kg/t.
Adding sodium nitrate in the pretreatment process resulted in a positive improvement in the tests performed by Hernández et al. [22]. By varying the addition of sodium nitrate from 11.7 to 23.3 kg/t, an increase in copper extraction of up to 13.7% was obtained. However, the tests conducted by Quezada et al. [24] using potassium nitrate did not achieve an increase in copper extraction with the addition of 10 kg/t of potassium nitrate.
In most of the studies reviewed, the most influential variable turns out to be the curing time. In the study conducted by Quezada et al. [24], the ANOVA analysis determined that, for the variables used, 55% of the contribution was due to the curing time.
Increasing the temperature during the curing process improves copper extraction. According to the research conducted by Cerda et al. [21], an increase in copper extraction of up to 23.5% was obtained when varying the temperature from 20 °C to 50 °C, while in the study conducted by Hernández et al. [22], an increase in copper extraction of 8% was obtained by increasing the curing temperature from 25 °C to 45 °C.
There are differences between studies at the laboratory and at the industrial scale, the main one being particle size. The particle size in the studies analysed at the laboratory scale varies from 0.0098 cm to 0.79 cm, compared to 1.91 cm used at the industrial level (heap leaching). Another variable that generates differences is the curing temperature, considering that on an industrial scale the leaching of minerals is at room temperature. These differences are the major limitations in the scaling of the various studies.
Conflicts of Interest:
The authors declare that they have no conflict of interest.
COVID-19 in long-term care facilities in Brazil: serological survey in a post-outbreak setting
ABSTRACT This cross-sectional seroepidemiological survey presents the seroprevalence of SARS-CoV-2 in a population living in 15 Long-Term Care Facilities (LTCFs), after two intra-institutional outbreaks of COVID-19 in the city of Botucatu, Sao Paulo State, Brazil. Residents were invited to participate in the serological survey performed in June and July 2020. Sociodemographic and clinical characterization of the participants, as well as the LTCF profile, were recorded. Blood samples were collected and processed, and serum samples were tested using the rapid One Step COVID-19 immunochromatography test to detect IgM and IgG anti-SARS-CoV-2. Among 209 residents, the median age was 81 years old, 135 (64.6%) were female, and 171 (81.8%) self-referred as being white. An overall seroprevalence of 11.5% (95% CI: 7.5% - 16.6%) was found. The highest seroprevalences, of 100% and 76.9%, were observed in LTCFs that had experienced COVID-19 outbreaks. Most residents with positive immunochromatography tests (70.8%) referred previous contact with a confirmed COVID-19 case. Although the overall seroprevalence of COVID-19 among the elderly was relatively low, this population is highly vulnerable, and LTCFs are environments at higher risk for COVID-19 dissemination. Well-established COVID-19 testing policies, adequate characterization of the level of interaction between residents and the healthcare provider team, and attention to the level of complexity of care are crucial to monitor and control the transmission of SARS-CoV-2 in these institutions.
Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and its associated clinical syndrome (COVID-19) have been responsible for more than 26 million cases and 900,000 deaths worldwide [1] and have challenged the global scientific community. In Brazil, the Ministry of Health confirmed the first case of coronavirus in the country on February 26th, 2020, and announced the first death from COVID-19 on March 17th of the same year. Up to October 3rd, 4,906,833 confirmed cases and 145,987 deaths due to COVID-19 had been reported in the country, with 1,003,428 confirmed cases of COVID-19 and 36,136 deaths in Sao Paulo State [2], the first epicenter of COVID-19 in Brazil.
Since the first case of COVID-19 in the world, there has been an evolution in the understanding of how the virus spreads and what must be done to contain viral transmission in high-risk settings. Long-term care facilities (LTCFs) are considered at higher risk for virus outbreaks with poorer outcomes for residents, which can be particularly devastating in low- and middle-income countries [3,4]. In March 2020, the Brazilian Society of Geriatrics and Gerontology and the World Health Organization recommended the suspension of external visits to residents in LTCFs [5]. Therefore, the assessment of SARS-CoV-2 seroprevalence is important for the definition of health care policies in LTCFs.
Here, we present the results of serological testing for SARS-CoV-2 in a population living in 15 LTCFs, following two intra-institutional outbreaks of COVID-19 in the city of Botucatu, Sao Paulo State, Brazil, which has a population of 139,856 inhabitants, including 22,756 elderly inhabitants (16.3%) [6].
Starting on May 29th and May 30th, 2020, two LTCFs located in the city of Botucatu experienced COVID-19 outbreaks. The first cases, in both institutions, were confirmed in women residents by RT-PCR. From June 3rd to 25th, the local health authority collected nasopharyngeal and oropharyngeal secretions to perform the RT-PCR test for COVID-19 in all elderly residents and employees (the "Universal RT-PCR test") of the 20 LTCFs located in the city. In this study, 20 institutions were invited and 15 (75%) agreed to participate. Only the residents were invited to participate in the serological survey. The institutions were assessed for the complexity of the care provided, based on the degree of dependence of the residents. The degree of dependence and the most frequent frailty observed among residents were used to define the level of care complexity of each LTCF. The functional capacity of the elderly was assessed using the Katz index, which covers actions related to self-care (bathing, personal hygiene, dressing, the ability to eat without help, the ability to move without help for transference, and continence). The total score corresponds to the sum of the 'yes' answers for the items related to independence. Residents were considered independent if they reached a score of 5 or 6 points, partially dependent when the result was 3 or 4 points, and highly dependent with scores of 0, 1 or 2 points [7]. Frailty was assessed by the Frail Scale, in which residents were classified as robust (0 points), prefrail (1 to 2 points), and frail (3 points) [8], and by the Frail Nursing Home scale (Frail-NH) [robust (0 to 1 point), prefrail (2 to 5 points), and frail (6 or more points)] [9]. The presence of symptoms in the 14 days prior to this survey was obtained from a report by the resident or the LTCF team.
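The classification rules described above can be expressed as simple scoring functions. The sketch below is a minimal illustration of the Katz and Frail-NH cut-offs as stated in the text; the function names are ours.

```python
def katz_dependence(score: int) -> str:
    """Classify functional capacity from the Katz index (0-6),
    using the cut-offs described above."""
    if score >= 5:
        return "independent"
    if score >= 3:
        return "partially dependent"
    return "highly dependent"

def frail_nh_category(score: int) -> str:
    """Classify frailty from the Frail-NH scale, using the cut-offs
    described above (0-1 robust, 2-5 prefrail, >=6 frail)."""
    if score <= 1:
        return "robust"
    if score <= 5:
        return "prefrail"
    return "frail"

print(katz_dependence(4))    # partially dependent
print(frail_nh_category(6))  # frail
```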
The rapid serological One Step COVID-19 test (Guangzhou Wondfo Biotech Co., Ltd., China) was approved by the Brazilian Regulatory Agency (ANVISA) and was supplied by Instituto Butantan, Sao Paulo, Brazil. The test consists of a lateral flow immunochromatographic assay that is able to detect both IgM and IgG immunoglobulins together, with no discrimination of the immunoglobulin isotype [10].
Blood samples were collected, processed and the serological tests were performed on serum samples. Moreover, information was collected on the demographic and clinical status of the participants, as well as the characteristics of the institutions.
According to the manufacturer, the rapid test kit shows a sensitivity of 86.4% (95% CI: 82.4%-89.6%) and a specificity of 99.6% (95% CI: 97.6%-99.9%) [10]. Additionally, these kits perform better on plasma or serum samples than on capillary whole blood [11], so in this survey all tests were performed on serum samples.
The study was reviewed and approved by both institutional research ethics committees. COVID-19 seroprevalence rates were calculated based on the results of the rapid serological test for each participating resident from the LTCFs. The prevalence was reported as the frequency of positive tests as a proportion of the total sample, with a 95% confidence interval.
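The reported interval is consistent with an exact (Clopper-Pearson) binomial confidence interval. The sketch below reproduces it, assuming that 11.5% of 209 residents corresponds to 24 positive tests (the exact count is not stated in this excerpt).

```python
from scipy.stats import beta

def exact_binomial_ci(positives: int, n: int, alpha: float = 0.05):
    """Clopper-Pearson (exact) confidence interval for a proportion."""
    lo = beta.ppf(alpha / 2, positives, n - positives + 1) if positives > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, positives + 1, n - positives) if positives < n else 1.0
    return lo, hi

# 24 of 209 residents tested positive (24/209 = 11.5%), an inferred count
lo, hi = exact_binomial_ci(24, 209)
print(f"Seroprevalence: {24/209:.1%} (95% CI: {lo:.1%} - {hi:.1%})")
# -> Seroprevalence: 11.5% (95% CI: 7.5% - 16.6%)
```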
The demographic characteristics of the participants and the LTCFs were described as frequencies (counts and percentages) or medians, by age and by the rate of residents per team member. Fisher's exact or chi-square tests were used to compare the rapid serological results. The Mann-Whitney test was carried out to compare ages between groups. Statistical analyses were performed using Stata, version 13 (StataCorp, College Station, Texas, USA). The significance level was set at 5%.

A total of 209 residents (73.3% of all residents in the 20 LTCFs in the city) were tested from June 22nd to July 8th, 2020. Their median age was 81 years old (range: 50-106), 135 (64.6%) were female and 171 (81.8%) self-referred as white. Regarding their clinical profiles, 129 (61.7%) residents had between 2 and 4 comorbidities and 120 (57.4%) were using 1 to 5 medicines daily; 194 (92.8%) had had no COVID-19 symptoms in the 14 days prior to the test. Regarding the state of dependence and frailty, 91 (43.5%) residents were dependent for three or more daily activities (Katz index) and 89 (42.6%) were considered frail (Frail-NH index). Most residents (101; 48.3%) slept in multiple-bed accommodation (Table 1).
The frequency of non-white ethnicity was significantly higher in residents with a positive serological test compared to the negative group (p=0.041). Most residents with positive serological tests (70.8%) reported previous contact with confirmed cases of COVID-19, whereas only 2.2% of residents with negative results reported such exposure to the disease (p<0.001) (Table 1).
More than half of all residents (113; 54.1%) lived in LTCFs classified as level III of care complexity. Residents with a positive serological test more often lived in LTCFs classified as level II/III (87.5%) than seronegative residents (59.4%) (p=0.007) (Table 1).
The median rate of residents per healthcare provider was 1.3 (0.7 to 3.0); more than half of the LTCFs had 100% of their beds occupied, with bed occupancy rates ranging from 80.0% to 100%, and 12 (80.0%) of them were privately financed. The dependency assessment showed 11 (73.3%) LTCFs classified at level III of care complexity (Table 3).
The overall seroprevalence (11.5%) observed in the LTCF residents was relatively low compared to some institutions in high-income countries [12], but it varied considerably when compared to the results obtained by RT-PCR in each LTCF, probably because many residents had already recovered from the symptoms of COVID-19 or had been asymptomatic carriers at some stage and had detectable antibodies at the time of the serological test.
Symptoms of COVID-19 in the elderly do not seem to be a good predictor of infection, since almost 93% of the residents examined did not show symptoms in the period of at least 14 days before the serological test, corroborating similar findings from other studies [13,14]. Therefore, the immediate testing of all contacts of the first positive case should be performed regardless of the presence of symptoms [15]. However, the fact that the One Step COVID-19 test does not differentiate between IgM and IgG immunoglobulins may be a limitation for the interpretation of results during outbreaks in this population, so other or combined testing strategies should be planned.

Contact with the caregiver team coming from outside the facility, the presence of visitors, and the living conditions of the elderly make LTCFs highly vulnerable to the transmission of COVID-19 [12]. Such vulnerability can be seen in the two LTCFs that had previous COVID-19 outbreaks, which had seroprevalence rates above 70% and case fatality rates of 33.3% among their residents. In addition, only in those same two LTCFs were cases of COVID-19 in the health care teams confirmed, with RT-PCR positivity rates of 40% (4/10, LTCF #8) and 60% (3/5, LTCF #14). High positivity rates of COVID-19 in the caregiver teams of LTCFs that experienced COVID-19 outbreaks have also been observed in investigations carried out in the United States. In these studies, an average of 7.4% positivity was reported in the health care teams of 15 LTCFs tested soon after an initial case of COVID-19 was identified, contrasting with the average of 1.0% positivity in the caregiver teams of 13 LTCFs with no COVID-19 identified [16]. In our study, the LTCFs affected by outbreaks had a higher positivity rate among their team members than reported in the literature, suggesting that specific characteristics such as the living conditions (frailty and degree of dependence) of the residents in these LTCFs require special care on the part of the health team and close contact for daily activities; however, other specific risk factors for the transmission of SARS-CoV-2 should be investigated in these settings. The prevalence of COVID-19 in the community is another important factor that may impact the number of COVID-19 cases and deaths in nursing homes [18] and should be considered in the context of investigations carried out in LTCFs during outbreaks.
Most residents with a positive test for COVID-19 tend to have a higher degree of dependence, comorbidities, or different levels of cognitive decline or dementia, making it difficult to comply with contingency measures [17]. In this study, among residents with a positive test, 87.5% lived in level III complex-care institutions, almost half (45.8%) were totally dependent on the care team for daily activities, 54.2% were considered frail, and 58.3% lived in accommodation with multiple beds, which favors the spread of the disease. Even so, the low seroprevalence in this population can be explained by the restriction of visits and group activities, as well as by the implementation of hygiene protocols in the LTCFs [5].
Normally, in health institutions of high complexity, frail and extremely dependent residents imply a greater number of caregivers, increasing the risk of transmission of SARS-CoV-2. This study showed a median rate of 1.3 residents per health team member. However, there is no consensus on the impact of the proportion of residents per health team on the dissemination of COVID-19 in LTCF settings [18,19].
Most LTCFs investigated in this study are financed by the private sector (80%). In fact, Sao Paulo State is the richest state in the federation [20]. However, the country presents many other challenges in relation to the elderly and COVID-19.
Necessary measures include the equal acquisition and distribution of personal protective equipment of adequate quantity and quality for residents and workers, specialized training for health teams, scheduled periodic repetition of tests if someone develops symptoms or presents a positive test for the virus [17], laboratory capacity and, last but not least, the assessment of the mental and emotional state of the workforce in nursing homes. Faced with a possible second wave of COVID-19, LTCFs require special attention. The literature suggests that these units should be the last ones to reopen, and isolation measures including the restriction of visits should continue [17]. The government should discuss special lines of credit and new financing sources directed to these institutions to ensure the adequate isolation, routine examination, and adequate care of residents, reducing transmission and avoiding lethal endpoints.
Crowdfunding as a socio-economic opportunity for state support
In the current economic environment, most enterprises have been affected by the political, economic, and social situation in the country and by the quarantine restrictions of 2019-2021. In the context of the development of computerization and informatization, the role of crowdfunding as an alternative way of financing creative ideas, startups, innovations, new technologies, and socially significant projects is growing. The purpose of the study is to reveal the essence of the concept of crowdfunding and its formation and development in Ukraine. The study is based on theoretical generalization, comparative analysis, and methods of analysis and synthesis, which made it possible to argue the prerequisites for the successful development of crowdfunding in Ukraine and the directions of its state legislative management. It is found that crowdfunding is a tool for financing small or medium-sized business projects through an open call on social networks or the Internet, with a material or moral basis for the interest of potential investors. The principles characteristic of crowdfunding in terms of targeting, investor interest in the project, publicity, and the benefit to the future investor are outlined. The SWOT analysis conducted in the study revealed the types of crowdfunding business models. The following criteria for assessing the level of development of crowdfunding are proposed: the degree and timeliness of information support, the level of activity, the diversification of platform types, the interaction of the resource with the banking sector, the level of public awareness, and the level of government influence. The state regulatory policy on crowdfunding in different countries is analyzed, and its financial management in Ukraine is proposed. It is established that, under conditions of insufficient financial resources, crowdfunding is a qualitative alternative to standard investment methods, and that its main environment of subjects in Ukraine is medium-sized enterprises. The practical value of the study lies in identifying the factors that impede the development of this alternative method of financing in Ukraine, as well as in providing recommendations for the further functioning of crowdfunding in the country.
INTRODUCTION
The financial crisis has become typical for most businesses, which have faced the problem of finding and selecting sources of investment. In the context of the full-scale war in Ukraine, it has become more difficult for enterprises to exist and develop, and access to loans is difficult. There was a need for significant financial support, and a desire arose among enterprises to develop and use additional ways of financing their activities and to further develop their business and innovation activity.
MATERIALS AND METHODS
The theoretical and methodological basis of the study was the scientific works of Ukrainian and European scholars on the activities and formation of crowdfunding, materials from periodicals, online sources, and educational materials and scientific works of the authors on this topic. The study used various general and specific research methods. In particular, using abstraction, a theoretical framework was developed to assess the impact of crowdfunding on government support and socio-economic development, including theories of social capital, financial inclusion, and entrepreneurship. In the context of the study of crowdfunding as a socio-economic opportunity for state support, the inductive approach was applied at different stages of the analysis: data collection, case analysis, and drawing conclusions. The deductive method provided a rigorous and systematic approach to testing theories and hypotheses, allowing for the establishment of cause-and-effect relationships.
The article also uses the methods of a systematic approach, logical generalization and comparison, synthesis, structural and logical analysis, the graphical method, and the grouping method. The methods of analysis, synthesis, grouping, comparison, and generalization were used to reveal the essence of crowdfunding, clarify the conceptual and categorical apparatus, and improve the typology of crowdfunding. Structural and logical analysis helped to create models of crowdfunding project implementation, and the graphic method was used to visualize the results of the study. Additionally, a systematic approach and methods of classification and SWOT analysis were used to reveal the business models of crowdfunding. The criteria for assessing the level of crowdfunding development were determined by the methods of cognition, classification, synthesis, and analysis. The study of the state regulatory policy used the methods of analysis, synthesis, logical generalization, and comparison. The method of example was used to study the impact of crowdfunding platforms on social entrepreneurship, and the case study method was used for the real-life analysis of phenomena directly related to crowdfunding.
RESULTS AND DISCUSSION
The main goal of improving Ukraine's public financial structure should be to generate modern financial structures, among which crowdfunding plays an important role (Mazaraki & Volosovych, 2016). Crowdfunding connects lenders and borrowers through online platforms. Crowdfunding is aimed at financing innovative and investment projects that are not profitable for ordinary stock market participants (Sharma et al., 2017). The analysis of the studied sources made it possible to establish the existence of three main approaches to the concept of crowdfunding as a business solution that calls for affordable, effective fundraising in general and for a specific pilot project. Crowdfunding is based on several principles, which are outlined below.
The study by L. Yaremenko et al. (2021) shows how small and medium-sized enterprises found themselves in a situation of limited access to credit and additional financing for specific areas of activity. In developed countries, this problem can be addressed in many ways through online platforms, the impact of which is felt around the world (Versal & Dudnyk, 2021). At the same time, Yu. Krylova (2020) pointed out that in developing countries, crowdfunding is one of the financing methods that can change traditional business management and support the financial situation in scientific institutions. In the context of the development of Ukraine's market economy, the search for various sources of funding is becoming increasingly important for both scientists and entrepreneurs (Gierczak et al., 2023).
During 2019-2023, there has been a radical change in the methods of attracting funding for innovation and investment projects. Technological advances have contributed to the emergence of new innovative business solutions, in which the digital consumption of information plays a large role. Crowdfunding plays a significant part, enabling a wide range of people to finance innovative and investment projects of a socio-economic nature through online platforms at the expense of shared funds (Homotiuk, 2022). C. Medina-Molina et al. (2019) noted that crowdfunding platforms allow the development of social investment by actors that are not institutional investors, such as the state, investment funds, business representatives, venture capital units, and others. In this form, crowdfunding can be implemented in various areas, such as economic start-ups, support for small and medium-sized businesses, investment in cultural events, public and political organizations, etc. T. Baumgardner et al. (2017) also noted that crowdfunding allows additional funding to be received in the form of various investments owing to financial benefits, the social and environmental aspects of impact, and personal needs and interests, while M.K. Poetz & M. Schreier (2019) concluded that crowdfunding is an innovative source that allows funding for new innovative entrepreneurial projects to be attracted from a large number of people through Internet platforms.
The purpose of the article was to deepen the theoretical and methodological foundations of, and the opportunities for, the formation and spread of crowdfunding in the context of official stock market transformations. Despite the rapid popularization of crowdfunding in the world, research on this issue in Ukraine is only beginning. The characteristics and prospects of using crowdfunding in general, as well as crowdfunding platforms, remain underexplored. Research into the above issues will allow crowdfunding to become a more effective, innovative, and alternative method and tool for financing projects and entrepreneurial activities, and will expand awareness of this concept and its use.
1. Targeted direction. The basis is the definition of the goal and purpose of the monetary investments to be attracted; investors for the future pilot project are selected at the first stage of project development, when the strategy for attracting additional funding is drawn up.

2. Investors' interest in the project, which is manifested through the approval of the amount of remuneration or donations in the form of cash.

3. Publicity, including information transparency through incentives and additional funding.

4. Benefit for the future investor when making investments in the form of allocations.
The assessment of the development of crowdfunding as one of the elements of alternative additional financing of the state can be carried out according to the following criteria: the degree of development of communication support, the degree of activity, the diversification of platforms, the cooperation of platforms with the banking system, the degree of awareness of communities, and the degree of public administration (Table 1). In the scientific literature, the concept of the "5Ps" is used to classify crowdfunding, similar to the marketing mix, which is formed from the "4Ps" (product, price, place, promotion) (Moeller, 2008; Bakhur, 2021). Thus, the "5Ps" concept comprises the five main components of the crowdfunding system (Fig. 1).
Table 1. Criteria for assessing the development of crowdfunding

- Degree of communication support: systematic collection, processing, and summarization of statistics on the state and development of the industry.
- Degree of activity: increased activity and interconnection with the regular activities of the developing country; the absence of language barriers helps to expand the geography of platforms.
- Diversification of platforms: the operation of multi-sectoral platforms without obstacles or restrictions on their activities.
- Cooperation of platforms with the banking system: strengthening the cooperation of platforms with the banking system through the cooperation of existing platforms and banks.
- Degree of awareness of communities: increasing the socio-economic erudition of the population, especially regarding the prospects and methods of obtaining funds for social, scientific, and innovative startups in the context of the high cost of traditional financial resources.
- Degree of government regulation: the existence of various socio-economic requirements that regulate the functioning of certain types of crowdfunding.

Source: developed by the authors on the basis of J. Alwidian & R. Al-Omoush (2019)

Crowdfunding as a way of raising funds appeared around the 2000s. Initially, it was used in the music industry as a driving force to promote musical compositions through the global Internet. The main idea behind crowdfunding was to attract investment by reaching a wider audience; therefore, all the links, or actors, in the process are important. Over the years, certain trends have emerged in the process. These are mainly ideas of humanitarianism and, in practice, opportunities to solve global problems of modernity: ways to integrate artificial intelligence into society and to create jobs that will not be affected by such integration (retraining of existing specialists). After all, crowdfunding is a platform that promotes the development of shared values and social responsibility, and it is an example of how this compares favorably with conventional approaches to raising investment capital. Project creators are often highly motivated, engaged, and collaborate with all stakeholders to represent their interests. The platform in this system is a means of communication between creators and investors for resolving various issues related to the project's implementation. The boundary between investor and creator can be easily eliminated, as crowdfunding involves maximum participation in the process: if desired, the investor can become a creator and, vice versa, the creator can be an investor. Crowdfunding aims to become more socially oriented in order to attract more actors and users (potential actors). However, crowdfunding depends on a large number of external factors, which makes it unstable; broader acceptance of this process will lead to greater security, which the participants themselves will be able to guarantee. As of 2023, the proliferation of Internet platforms and cyberspace introduces new opportunities, stereotypes, and ideas that directly affect business practices and change user behavior. At the same time, the efficient execution of various forms of commercial transactions through online platforms, including the use of electronic payment systems, is crucial. The reliability of the banking and electronic payment systems is important when choosing a project, as it is one of the aspects of security. The operational reliability of e-commerce is of paramount importance to investors, as any problems related to fund transfers can detract from crowdfunding projects.
Investing in an innovative project usually involves active human participation, unlike traditional methods of investment. In crowdfunding, investor participation goes beyond financial contributions to include creative contributions, service-related support, and even critical feedback aimed at improving project outcomes. Collaboration is central to the crowdfunding system. For example, the Coolest Cooler project, which aimed to develop a new mobile refrigerator with additional features such as USB ports, a flashlight, wheels, a comfortable handle, and a phone charger, initially sought funding of USD 50,000, but received significant investor support and active participation, leading to its success. In the end, the project received USD 13.5 million from over 62,000 investors, along with the final iteration of the product (Petrenko, 2023). Crowdfunding makes it easier to initiate the realization of an idea at the initial stage of a project with fewer human and financial resources. This is primarily achieved by accelerating the accumulation of funds. Different elements of the 5Ps are formed for each project (Fig. 2).
Table 2. Advantages, disadvantages, opportunities, and threats of crowdfunding

Advantages:
- creation of new business models;
- attraction of additional allocations for small and medium-sized enterprises through projects;
- opportunities for participation in and development of innovative projects;
- investment by small and medium-sized businesses;
- a fairly easy process of formalizing an investment portfolio;
- the possibility of financing municipal projects in the face of the economic crisis.

Disadvantages:
- inability to act in opposition to official actions;
- inability to apply for additional funding for large projects;
- significant disproportionality of data in the implementation of procedures;
- a significant level of bankruptcy risk and a lack of investor awareness;
- asymmetric information support;
- dependence on external socio-economic factors;
- a low level of demand from investors.

Opportunities:
- project implementation requires only a promising idea that can be presented on the website, allowing evaluation and investment from interested users;
- studying the needs of the audience and identifying relevant and important projects;
- unique projects that appeal to users have a high potential for popularity;
- no restrictions on receiving funding, on an equal basis of opportunity.

Threats:
- high competition between projects of similar topics;
- the risk that the idea may become irrelevant even if the developer implements it with high quality;
- failure to receive the required amount of funds for the project implementation;
- after implementation, the project may not gain wide popularity in the market and may not bring the expected profit to the developers.

Source: developed by the authors on the basis of N. Petrenko (2023)

The first step is to place the project on crowdfunding platforms and to plan and formulate the project's clear goals and objectives, specify concrete project deadlines, estimate borrowed and personal funds, and describe the uniqueness and significance of the project for future investors.
The successful completion of the project depends on an apt title, well-written and clearly defined characteristics, issues, relevance, expected future results, information about the executors, and a report on the use of the allocations received. The content of the project is also important, as it should be interesting, clear, convincing, effective in attracting investor funding, and attractive and informative for the potential consumer.

When presenting a project on a crowdfunding platform, the author should describe the ways of rewarding investors, which may vary depending on the amount of the contribution. In addition to material and financial rewards, social rewards for investors are often used, such as a letter of appreciation or an approving review on social media pages or given verbally in a personal conversation.

Crowdfunding projects are successful if the specified amount of funds is fully raised (i.e., 100%), and sometimes even more than the specified amount. Such projects invariably have a clear and interesting presentation and video; clear, open reporting to consumers on the proceeds received via social media pages; continuous contact with the target audience through social media pages, blogs, surveys, and polls among subscribers; and daily, continuous content and coverage of the project's implementation.

Without well-prepared information (content), it is impossible to win over an audience and have a successful project. Therefore, most project authors engage social networks, YouTube channels, popular bloggers, and the media, and provide free social and entertainment content (charity concerts, exhibitions, flash mobs, etc.) to increase interest in and promote the project. At the same time, crowdfunding can have both advantages and disadvantages in terms of business development, the state, and entrepreneurial activity (Table 2). In Ukraine, the practice of developing and implementing investment platforms such as crowdfunding is rather slow and fragile, and the trend is toward a decrease in interest in this type of investment inflow. First of all, crowdfunding requires more work and effort from the developer than the standard financing model: the author has to do everything himself, i.e., develop a project, formalize it, launch it on the market, and then monetize it, while the site provides consulting and information support for the entire period of fundraising. Secondly, developers should set a fair price depending on the costs, as anyone can verify this; that is, the possibility of adding a markup on the product is minimized, and if there are no analogous products on the market, it is difficult to set the price. Third, the success of crowdfunding depends on the connection between the developer and the user. For the developer, this means coming up with a project idea that will attract people and keep them interested; for investors, it means finding projects that interest them. In addition, crowdfunding projects often do not have the opportunity to take out loans or receive grants, unlike the conventional model (Shevchenko & Kazak, 2019).

The main advantage of crowdfunding platforms is that they greatly simplify the process of starting a business. The costs of popularization can be very low and insignificant, which is very important in today's business environment. Crowdfunding sites have a regular user base that may be interested in new projects and also attract people from outside. Second, crowdfunding allows project creators to better control their work: thanks to a clear plan of action published on the site in advance and available to investors, the creators have a clear schedule. Sponsors, in turn, benefit from crowdfunding as an opportunity to influence the future of the project; the money spent on the project gives them the opportunity to share their ideas with the developers. This leads to another advantage: the ability to establish a connection between the user and the developer, meaning that they can exchange ideas, and the developer has an idea of what the user wants to get from the project. Thus, the crowdfunding model is more flexible and allows the project to become ideal for both developers and future users.
The general process of crowdfunding is based on four models of entrepreneurial activity (a minimal settlement sketch for the first two models follows this list):

1. The "all-or-nothing" model, which specifies the targeted investment area and a clear deadline for fundraising. If the specified amount of funds is not raised within the specified period, the entire amount is returned to investors. This model is used on all crowdfunding platforms.

2. The "all-and-more" model, which is very similar in nature to the previous one except for the specified fundraising period: the receipt of funds does not stop after the specified amount (prescribed in advance and set by the project goal) has been raised.

3. The "holding" model, in which a trustee (manager) of a crowdfunding platform organizes a company for a specific project that needs financial support. In this model, the sale of shares and bonds is encouraged.

4. The "club mode" model, where project followers are an important part of the "club of money investors"; their role is to show interest in and commitment to the development of a particular project.
To date (2023), the problem of the formation and spread of crowdfunding has been reflected mainly in the research of scholars from European countries. P. Belleflamme et al. (2023) noted that crowdfunding should be treated as one of the successful business solutions for financing through online platforms at the expense of shared funds; however, donations, rewards, gifts, and additional contributions in the form of money should also be taken into account. This study agrees with the opinions of the above-mentioned authors and confirms that crowdfunding is an additional and, at the same time, new way of obtaining funds that are lacking for enterprises, educational institutions, institutes, etc. The authors of this study believe that this is a new and effective method that carries some risks at the initial stages but ultimately bears fruit.
Table 2. Advantages and disadvantages of crowdfunding

Advantages: opportunities for participation in and development of innovative projects; investment by small and medium-sized businesses; a fairly easy process of formalizing an investment portfolio; the possibility of financing municipal projects in the face of the economic crisis.

Disadvantages: a significant level of bankruptcy risk; lack of investor awareness and asymmetric information support; dependence on external socio-economic factors; low level of demand from investors.

Opportunities: project implementation requires only a promising idea that can be presented on the website, which allows for evaluation and investment from interested users; studying the needs of the audience and identifying relevant and important projects; unique projects that appeal to users have high potential for popularity; no restrictions on receiving funding, on an equal basis of opportunity.

Threats: high competition between projects on similar topics; the risk that the idea may become irrelevant even if the developer implements it with high quality; failure to receive the required amount of funds for project implementation; after implementation, the project may not gain wide popularity in the market and may not bring the expected profit to the developers.

Source: developed by the authors on the basis of N. Petrenko (2023)
A. Ordanini et al. (2023) noted that crowdfunding is the first step toward attracting additional funding for innovation and investment projects through co-investment from other people. Crowdfunding should be explored as an additional means of financing certain projects through a publicized appeal or co-financing based on the moral and material attention of potential investors. In this article, this issue has been considered in more detail: the authors come closer to revealing the very essence, structure, and principles of crowdfunding in Ukraine, its application, risks, and opportunities. The researchers T. Tovt & N. Drozd (2019) also noted that crowdfunding is a manifestation of different sponsorship markets and of trends in the popularization of social investment; in addition, they outlined the conditions for using crowdfunding within the investment process of commercial projects in Ukraine.
A. Bondar (2019) noted that crowdfunding is a system of encouraging investment at the micro level through network platforms for the implementation of various innovative projects without limits. In the opinion of the authors of this article, it should be added that crowdfunding platforms provide innovative investment opportunities, and the definition of crowdfunding can be refined using the term "crowdinvesting", which narrows it to cases of economic effect for investors' investments. S. Tulchynska et al. (2017) noted that crowdfunding should first be divided into certain types, namely: socially and culturally oriented crowdfunding, crowdfunding of ideas, crowdfunding in the business environment, and political crowdfunding. There is logic in this division; however, it could be supplemented with innovative, socio-economic, and financial types to increase efficiency, attractiveness, and diversity for future investments. T. Mayorova et al. (2019) noted that crowdfunding accelerates the processes of globalization and integration, the emergence of new types of production, mass cooperation, and the opening of new opportunities for joint, open ownership of and access to materials, goods, and services (sharing). The authors agree with this opinion, but it would be more correct to note that, with the development of Internet technologies, new trends in the investment arena are emerging thanks to crowdfunding and crowdfunding platforms, which are steadily strengthening, supported by the facts of implemented innovative projects. It should be noted that crowdfunding is developing rapidly from year to year, especially in the socio-economic sector (environment), which leads to the additional formation of investment resources and the need to implement innovative projects.
To summarize, in the context of increased competition in the market environment and limited financing of enterprises, crowdfunding is indeed one of the most promising and productive tools for attracting additional allocations, which is confirmed in the scientific literature. However, the authors who considered this issue did not take into account the influence of environmental factors on the implementation and operation of Internet platforms and crowdfunding platforms. Paying tribute to the scientific achievements of the above-mentioned scholars, many issues still require a new vision. The continuous development of crowdfunding in the world indicates its great potential. Therefore, it is necessary to pay attention to the analysis of external and internal factors of the operating environment and to the possibility of mitigating the negative effects of the intensification of crowdfunding development on the market and the economic system of the state.
CONCLUSIONS
Given the lack of financial revenues, and despite the risk of failure for project authors, crowdfunding is a far-reaching, effective way to attract additional funding in the business, scientific, and educational sectors. At the same time, with the emergence of decentralization in the existing banking system, crowdfunding can become effective for the public and for public authorities as an alternative source of financing for innovation and investment projects. Crowdfunding is gaining momentum and becoming an alternative source of funding, which in turn makes it possible to accumulate the funds needed to implement certain projects at the initial stage. Crowdfunding platforms internationalize projects, increasing their chances of successful implementation. In Ukraine, crowdfunding is gaining momentum, but Ukrainians tend to register scientists' projects on European platforms. For crowdfunding to flourish in Ukraine, the challenges discussed in this study need to be overcome, which would create more favorable competition with the traditional methods of financing that exist in the country's financial market.
Internet platforms give crowdfunding an innovative meaning unlike traditional forms of financing, including collective financing. Not only private entrepreneurs but also the public and communities can pursue their development goals through crowdfunding. Many organizations and communities are already taking their first steps and launching pilot projects using Internet platforms, network technologies, and crowdfunding in general to attract additional funding for the socio-economic development of their activities. Therefore, the prospects for further research lie in increasing the productivity of crowdfunding as an alternative form of financing and in the marketing of crowdfunding.
Figure 1. Components of the crowdfunding system. Source: developed by the authors based on T. Torris (2017)
Figure 2. Elements of the functioning of a project to attract investment through the crowdfunding scheme. Source: developed by the authors
Variability of clinical target volume delineation for rectal cancer patients planned for neoadjuvant radiotherapy with the aid of the platform Anatom-e
Highlights
• Delineation of treatment volumes is a major source of uncertainty in rectal cancer radiotherapy.
• Anatom-e is an electronic platform working as an image-based delineation system.
• The use of Anatom-e decreased the inter-observer variability in the delineation of clinical target volumes for locally advanced rectal cancer.
• Anatom-e may be potentially helpful in increasing compliance with common guidelines and protocols.
Introduction
A multidisciplinary approach, including total mesorectal excision, radiotherapy (RT) and chemotherapy (CHT), is presently considered the standard of care for the treatment of locally advanced rectal cancer [1,2]. Pre-operative RT is a well-established option to provide tumor downsizing and downstaging and to increase locoregional control in this setting [3]. Selection and delineation of the clinical target volume (CTV) and organs at risk (OARs) are crucial steps to deliver precise and tailored radiation [4-6]. While the sites and subsites of irradiation are, to some extent, agreed upon by physicians, the boundaries of CTVs still remain controversial, leading to inhomogeneous contours and systematic errors with standard deviations as high as 1 cm, as reported in different studies [7,8]. Generally speaking, most of the heterogeneity is due to differences in the contouring protocols used by the treating physicians, but the magnitude of this uncertainty is also related to the imaging modalities and technical approaches used in the delineation process [9]. In the context of pelvic malignancies, one of the major sources of uncertainty is the lack of clearly defined anatomical boundaries in this region, which may lead to detectable contouring differences among radiation oncologists [8]. From an anatomical point of view, the sites of major disagreement are the upper anterior and inferior aspects of the mesorectum, which is a critical structure for tumor control given the likelihood of microscopic involvement, particularly in locally advanced cases [7,10]. Several strategies can be implemented to reduce this source of error, including periodic training for radiation oncologists, the use of shared delineation guidelines, and quality assurance processes with online platforms for centralized revision [11]. Anatom-e (Anatom-e Information Systems Ltd., Houston, Texas) is an electronic platform functioning as an image-based delineation system, an image fusion software and a treatment planning tool. It includes a digital atlas combining protocols and guidelines, classified according to tumor site, and a vast library of normal tissue structures. All protocols and information can be continuously updated and customised. Notably, Anatom-e contains a library of CTVs, including lymphatics at risk, prophylactic volumes and OARs for most tumor sites and oncological scenarios, on both intact and post-operative anatomy. The present study was set up to test the potential role of the platform Anatom-e in reducing intra- (Intra-OV) and inter-observer variability (Inter-OV), within a multicentric context, in the delineation of prophylactic volumes in locally advanced rectal cancer patients undergoing neoadjuvant RT. In particular, we tested the efficacy of the platform in homogenizing the compliance of different radiation oncologists (ROs) with a pre-defined delineation protocol.
Material and methods
This was a multicentric study implemented within the oncological radiotherapy network of the Piedmont region in Italy and endorsed by 'Rete Oncologica del Piemonte e della Valle d'Aosta'. The study was proposed to 14 centres belonging to the network; 10 of them agreed to participate and were consequently included. For each participating centre, an RO with ≥5 years of experience in the treatment of rectal cancer was selected to participate, on a voluntary basis. Participating centres were provided with the credentials to access the online version of Anatom-e, which has most of the characteristics and tools of the physical platform (www.anatom.e.com). Two clinical cases were chosen to be delineated. Both were locally advanced rectal cancers, classified according to the 2010 American Joint Committee on Cancer/Union Internationale Contre le Cancer (AJCC/UICC) staging system, undergoing neo-adjuvant long-course radiotherapy. Patient 1 was a 59-year-old male with a Stage IIIB (cT3N2aM0) low-lying rectal cancer with a mesorectal node in close proximity to the mesorectal fascia. Patient 2 was a 49-year-old female with a Stage IIIC (cT4bN1bM0) rectal cancer located in the low rectum with anterior spread and infiltration of the posterior wall of the vagina. Detailed characteristics of the 2 clinical cases can be found in Fig. 1.
Target volume selection and delineation
For both cases, planning computed tomography (CT) images were acquired from the second lumbar vertebra down to below the lesser trochanters. All simulation images were acquired without contrast enhancement and had a 3 mm slice thickness. Participating ROs were instructed to follow the atlas and the specific instructions found in the Radiation Therapy Oncology Group (RTOG) consensus published in 2009 during the delineation process [12]. Two consensus meetings, comprising a contouring laboratory, were organized before the start of the study to agree on the whole contouring workflow. In general, our study aimed at investigating whether compliance with the indications of the RTOG 2009 consensus could be enhanced by the use of an advanced digital platform such as Anatom-e, compared to direct consultation of the paper version of the aforementioned consensus. Participants were asked to manually segment clinical target volumes (CTVs) for both patients 1 and 2 on day 1, with and without the use of the Anatom-e platform. After one week (day 2), the same ROs were asked to contour the same CT scans of the 2 patients again, with and without Anatom-e. The treatment volumes to be contoured and the corresponding nomenclature and descriptions can be found in Fig. 2. An example of the Anatom-e interface with the specific ontology can be seen in Fig. 3.
Contour analysis
All volumes were imported into the Velocity platform (Varian Medical Systems, Palo Alto, CA). To determine Intra-OV, we analysed and compared contours performed by the same RO on day 1 and day 2. To determine Inter-OV, we compared the contours drawn by all different participants with a 'ground truth' contour performed by an experienced RO dedicated to rectal cancer treatment. The 'ground truth' contour was driven by the use of Anatom-e and considered the 'gold standard' for comparison. An outline of all different contours obtained with or without the platform can be found in Fig. 4. We analysed the overlap between different contours using the Dice similarity coefficient (DSC), defined as twice the overlapping volume divided by the sum of the two compared volumes. By definition, DSC varies from 0 (no overlap) to 1 (complete overlap) [13]. To explore the distance between contours, we employed the Hausdorff distance (HD), which is the maximum distance of each voxel of the reference set to the nearest point in the comparison set [14]. We also calculated the mean distance to agreement (MDA), which is the average distance that all outlying points in the considered volume must be moved to achieve perfect conformity-overlap with the reference volume [15]. For both HD and MDA, lower values (in mm) correspond to a higher correspondence between the compared volumes.
We investigated the overlap between contours performed on day 1 and day 2 by the same operator for Intra-OV and between all contours drawn by the ROs of all the 10 centres participating in the present study and the 'ground truth' contour drawn in the reference centre, for Inter-OV.
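For readers who wish to reproduce these metrics outside Velocity, the sketch below shows one common way to compute DSC, HD and MDA from two contours rasterized as 3D binary masks on the same grid (numpy/scipy). It is a minimal sketch, not the Velocity implementation: the `spacing` default merely echoes the 3 mm slice thickness, and 1 mm in-plane voxels are assumed purely for illustration.

```python
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A∩B| / (|A| + |B|); 0 means no overlap, 1 complete overlap."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def _surface_distances(a, b, spacing):
    """Distances (mm) from each surface voxel of mask a to the surface of b."""
    a_surf = a & ~ndimage.binary_erosion(a)
    b_surf = b & ~ndimage.binary_erosion(b)
    # distance map to the nearest surface voxel of b, in physical units
    dist_to_b = ndimage.distance_transform_edt(~b_surf, sampling=spacing)
    return dist_to_b[a_surf]

def hd_and_mda(a, b, spacing=(3.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance and mean distance to agreement, in mm."""
    d_ab = _surface_distances(a, b, spacing)
    d_ba = _surface_distances(b, a, spacing)
    hd = max(d_ab.max(), d_ba.max())            # worst-case surface mismatch
    mda = np.concatenate([d_ab, d_ba]).mean()   # average surface mismatch
    return hd, mda
```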
Statistical analysis
All results are reported as the sample mean and standard deviation (SD). Comparisons between groups were performed using the univariable Student's t-test. Multiple subsets of data were analysed on an 8 × 8 grouping categorization. The difference between multiple subsets of data was considered statistically significant if the t-test gave a significance level P (P value) less than 0.05. The STATA software package (Stata Statistical Software: Release 13.1. Stata Corporation, College Station, TX, 2013) was used for all statistical analyses.
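As a minimal illustration of the comparison just described (the study itself used STATA; the DSC values below are invented purely for the example, and treating the test as paired is an assumption that matches the repeated-measures design):

```python
from scipy import stats

# hypothetical per-centre DSC values (one value per participating RO)
dsc_with_anatome = [0.74, 0.71, 0.73, 0.70, 0.75, 0.72, 0.71, 0.73, 0.70, 0.72]
dsc_without = [0.66, 0.63, 0.67, 0.61, 0.68, 0.64, 0.66, 0.62, 0.67, 0.65]

# paired test, since the same observers contoured under both conditions
t_stat, p_value = stats.ttest_rel(dsc_with_anatome, dsc_without)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```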
Results
Detailed results can be found in Table 1. For clinical case 1, no significant difference was found in terms of Intra-OV (same RO; day 1 vs day 2) for DSC, HD and MDA, whether or not Anatom-e was used. In particular, the mean DSC was 0. (Fig. 5c). The use of Anatom-e decreased the SD from 2.97 to 1.33 (Fig. 5b). The mean HD was lower (26.06; SD: ±2.05; range: 24.08-32.62), but without statistical significance (p = 0.14), compared to the one obtained without Anatom-e (31.39; SD: ±1.31; range: 26.14-48.72).
Discussion
The current management of rectal cancer employs a multidisciplinary strategy which involves different professionals. Hence, it is of crucial importance to verify the quality of the different treatment strategies comprised within the combined modality approach. For example, training, a centre's experience and the quality of total mesorectal excision have been shown to be prognostic factors in rectal cancer patients [16]. Moreover, a central review of pathology reports and efficient feedback to surgeons have been proven to improve the quality of the surgical procedure [17]. RT, too, as a mainstay treatment option in the multimodality management of cancer, needs quality assurance (QA) protocols to constantly check the quality of treatments (target volume delineation, treatment plan optimization, dosimetric results and delivery reliability) [11]. The contouring process of the target volume is a major source of uncertainty and error in RT and, since this potential error usually remains constant during the whole RT treatment, it may have a detectable impact on the dose received by the tumor, especially for highly conformal techniques such as volumetric modulated arc therapy (VMAT) and whenever image guidance (with consequent CTV-to-PTV margin reduction) is employed [18-20]. The factors that most consistently influence target delineation variability include gross disease visibility, disagreement on target definition, extension and interpretation, or the lack of dedicated contouring protocols [18,21]. Inter-OV during the delineation process is strongly affected by the imaging modality and technique employed and by the specificity of the observer (specialty, training and personal bias) [18]. This inter-observer variation is detectable even during the delineation of visible and well-circumscribed targets, such as in prostate cancer or brain tumors, with variation by an average factor of 1.3-2 [18,21]. It is, of course, much higher in body regions where anatomical boundaries are not necessarily well defined. The pelvis is paradigmatic in this sense, and rectal cancer RT volumes are a good example [4]. Pelvic subsites such as the presacral space, the mesorectum and the lateral lymph nodes are not trivial to define correctly on non-contrast-enhanced computed tomography images [4]. An even higher variation can be supposed in the evaluation of the extent of microscopic involvement in the delineation of the CTV. It has been shown that ROs tend to delineate larger volumes compared to physicians of other specialties [21]. Hence, standardization of the delineation process is of paramount importance for all tumor sites, including rectal cancer. As an example, in Belgium, the PROject on Cancer of the Rectum (PROCARE) initiative was set up to increase the use of guidelines and quality indicators throughout the country, with decentralized implementation of treatment protocols, prospective data registration and consequent feedback supply and benchmarking to improve the homogeneity of CTV delineation in daily clinical practice [11]. Tools allowing for direct visualization of a specific contouring protocol are very useful to increase the consistency of the delineation process. In this sense, the study performed by the Radiation Committee of the Southwest Oncology Group (SWOG) provided important evidence on the effect of a consensus guideline-based visual atlas on target volume delineation variability in rectal cancer [7].
The authors asked 13 physicians and 1 reference expert to contour both the gross tumor volume (GTV) and the CTV in a case of cT3N0M0 rectal cancer. Access to the delineation atlas was provided or withheld on a random basis, and observer variations were analysed on a volume basis with the conformation number [7]. The use of the aforementioned atlas resulted in significantly higher inter-observer agreement between physicians, particularly for pelvic nodal regions [7]. Anatom-e (Anatom-e Information Systems Ltd., Houston, Texas) is an electronic system working as a platform able to drive delineation based on multimodality imaging, with advanced image fusion software and treatment planning characteristics. It includes a digital atlas built on the combination of more than 150 contouring atlases using 3 mm axial computed tomography (CT) and magnetic resonance (MR) images acquired in the treatment position, with more than 50,000 normal tissue structures available. It contains several treatment protocols and guidelines, classified according to tumor site, and allows for the personalization of institutional protocols. It employs an evidence-based approach with continuous updates of scientific information and literature, being connected online to a central data server. Anatom-e contains a library of CTVs, including lymphatics at risk, prophylactic volumes and OARs for most tumor sites and oncological scenarios, on intact and post-operative anatomy. The present study was aimed at investigating whether the platform Anatom-e may increase adherence to a specific protocol among different centres, with respect to CTV delineation in rectal cancer patients planned to receive preoperative long-course RT. We chose to follow the RTOG 2009 consensus guidelines since they were constitutively included in the platform and most of the participating centres were familiar with their indications. Patient 1 was a locally advanced low-lying rectal cancer whose level of criticality lay in the proximity of a mesorectal node to the mesorectal fascia. Patient 2 was again a locally advanced low-lying rectal cancer, extending to the posterior wall of the vagina. For patient 1, no differences in terms of DSC, HD and MDA were found, when using the platform or not, for both Intra-OV (same RO; day 1 vs day 2) and Inter-OV (different ROs; day 1 vs ground truth). The participants had a high degree of self-consistency, since the mean DSC was 0.95 for Intra-OV regardless of the use of the platform and the mean MDA was below 1 mm even with no Anatom-e used. For Inter-OV, the mean DSC was 0.80 and the mean MDA around 3.8 mm, independently of the platform. The high consistency within and among ROs can be explained by the low number of delineation variables in case 1, with very standard prophylactic volumes to be included in the CTV (mesorectum, presacral space, bilateral internal iliac and obturator nodes) and a very visible mesorectal node close to the mesorectal fascia to drive the volume selection and definition. Another explanation might be that most of the participants were trained in the same centre during their residency program and hence shared a common background knowledge and contouring approach. In clinical case 2, self-consistency was again very high, with a DSC around 0.9 regardless of Anatom-e. Conversely, Inter-OV was significantly decreased by the use of the Anatom-e platform.
The DSC for day 1 vs ground truth was significantly influenced by the use of Anatom-e (0.72 vs 0.65 without the platform; p = 0.03), as an effect of a higher overlap of all contours with the reference CTV. The mean MDA was lower with the use of the platform (3.61 mm vs 4.14 mm; p = 0.21), but without statistical significance; the use of Anatom-e decreased the SD from 2.97 to 1.33. That means that the mean distance between the tested contours and the reference volumes was on average lowered by the use of Anatom-e, while at the same time the dispersion of values around the mean was reduced, as an effect of the increased homogeneity of the delineation process. Clinical case 2 had a higher number of delineation variables compared to case 1, with bilateral external iliac and inguinal nodes also to be included in the CTV. Moreover, the involvement of the vagina increased the complexity of the selection and delineation of treatment volumes, introducing a region of uncertainty represented by the anterior aspect of the CTV. The anterior aspect of the nodal regions within the pelvis is, in general, a source of potential disagreement, because anatomical boundaries are less clear. Moreover, the infiltration of the posterior wall of the vagina pushed ROs to extend the CTV anteriorly to cover the area of tumor spread, but to a different extent depending on the contouring RO. The visual evaluation of delineation variation according to pelvic sub-regions confirmed the variability of the anterior border of the CTV (cranially to the bladder) and of the anterior border of the lateral lymph nodes, which is a critical boundary since no easily recognizable landmarks are present at that level. This was one of the reasons for the increased Inter-OV, which was shown by the lower DSC for case 2 compared to case 1 without the platform (0.65 vs 0.80). This variability was mitigated by the use of the Anatom-e platform, which increased the DSC up to 0.72, lowered the mean HD and decreased the SD for HD. This was evident also on visual inspection of the anterior aspects of all contours obtained, which had a higher overlap with the use of the system. This is also in line with the data of Nijkamp et al., where the benefit of implementing delineation guidelines based on a digital atlas in rectal cancer patients was particularly observed in the reduction of disagreement among operators in the anterior region of the treatment volumes [22]. The increase of homogeneity in the contouring process has been shown to have a dosimetric impact on target coverage in rectal cancer patients undergoing pre-operative RT [23]. This may have an influence on the quality of the whole RT process and finally on patients' outcomes [24]. Improvement in interactive teaching for treatment volume delineation is also a major need for the education and training of professionals in radiation oncology (especially trainees and young specialists) [25]. International professional societies such as ESTRO, the European Society for Radiation Oncology, developed an educational project denominated FALCON (Fellowship in Anatomic deLineation and CONtouring) to increase the homogeneity of the delineation process, comparing individual contours with endorsed guidelines or expert opinions [26]. This initiative, based on short and interactive workshops, was shown to be effective in reducing Inter-OV in specific clinical contexts [27].
Conclusion
The use of a digital platform such as Anatom-e decreased the inter-observer variability among operators in the delineation of CTVs for locally advanced rectal cancer patients with a complex disease presentation planned to receive neoadjuvant RT. This system may be helpful in increasing compliance with shared guidelines and protocols, potentially reducing discrepancies and discordances in delineated treatment volumes.
Ethics approval and consent to participate
Approval for the present study was given by the Review Board of the Department of Oncology of the University of Turin. Written informed consent was acquired from all patients with respect to RT treatment and clinical data management for research purposes.
Sorafenib for the Treatment of Hepatocellular Carcinoma: A Single-centre Real-world Study
Abstract Background Sorafenib is an oral multi-kinase inhibitor used for the treatment of hepatocellular carcinoma. Its efficacy in randomised controlled trials was demonstrated in patients with well-preserved liver function and good functional status. In the real-world setting, treatment is often offered to patients outside these criteria. We therefore performed a single-centre real-world cohort study on the efficacy of sorafenib in patients with hepatocellular carcinoma. Patients and methods We identified all patients with hepatocellular carcinoma initiating treatment with sorafenib between January 2015 and January 2018. The primary endpoint was overall survival (OS) since starting sorafenib. Clinical and demographic variables associated with survival were studied. Results The median OS was 13.4 months (95% CI 8.2–18.6). Multivariable Cox’s regression identified worse ECOG performance status (HR 2.21; 95% CI 1.56–3.16; P < 0.0001), Child-Pugh class C (HR 52.4; 95% CI 3.20–859; P = 0.005) and absence of prior locoregional treatment (HR 2.30; 95% CI 1.37–3.86; P = 0.002) to be associated with increased mortality. Conclusions Careful selection of patients for treatment with sorafenib is of paramount importance to optimize outcomes.
Introduction
Sorafenib is an oral multi-kinase inhibitor, which inhibits tumour angiogenesis through inhibition of the vascular endothelial growth factor (VEGF) and platelet-derived growth factor (PDGF) signalling pathways. It has demonstrated a significant prolongation of overall survival of up to 2.8 months in patients with advanced-stage hepatocellular carcinoma (HCC). 1,2 The two landmark trials included mainly patients with compensated liver cirrhosis of viral aetiology and an excellent baseline functional status. Consequently, sorafenib is formally indicated only in patients with well-preserved liver function (Child-Pugh A) and advanced tumours (Barcelona Clinic Liver Cancer [BCLC] C) or intermediate stage tumours (BCLC B) progressing after locoregional therapy. 3 Nevertheless, sorafenib is often used outside these criteria in the real-world setting, mainly due to the absence of alternative treatment options. As these patient subgroups were not studied in registrational trials, only large observational non-randomized cohort studies can help inform practice. 4-6 We therefore performed a retrospective real-world cohort study of patients treated with sorafenib for advanced HCC, investigating its efficacy and variables associated with OS.
Patients and study design
We performed a retrospective cohort study of all patients with HCC initiating treatment with sorafenib between January 2015 and January 2018, who were followed until October 2018 at a single tertiary centre. Data collection was approved by the institutional ethics committee, while treatment did not differ from the standard of care and thus did not require additional approval.
We included all consecutive patients aged at least 18 years with a histologically or radiologically confirmed diagnosis of HCC, who were treated with sorafenib. The decision for initiation of the drug was made based on the consensus of the Liver Multidisciplinary Team. Patient records were retrospectively reviewed for demographic and clinical information. The date of death was extracted from the national health insurance database.
The primary outcome was OS from initiation of sorafenib. We explored the relationship of clinical characteristics with OS.
Statistical analysis
Continuous variables are given as medians with interquartile ranges (IQR). Univariable association analyses with survival were performed using the Kaplan-Meier method with log-rank testing. Multivariable analysis was performed using a Cox proportional hazards model with stepwise backward selection, where variables were removed if they did not achieve statistical significance at P < 0.05. All analyses were performed on an intention-to-treat basis, using SPSS, Version 25 (IBM, Chicago, USA).
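For illustration only, the same workflow can be reproduced with the Python lifelines package; the study itself used SPSS, and the file name, column names and covariate coding below are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("sorafenib_cohort.csv")  # hypothetical data set

# Kaplan-Meier estimate of overall survival since sorafenib initiation
kmf = KaplanMeierFitter()
kmf.fit(df["os_months"], event_observed=df["died"])
print("median OS (months):", kmf.median_survival_time_)

# univariable comparison via log-rank test, e.g. Child-Pugh A vs B/C
cp_a, cp_bc = df[df["child_pugh"] == "A"], df[df["child_pugh"] != "A"]
print(logrank_test(cp_a["os_months"], cp_bc["os_months"],
                   cp_a["died"], cp_bc["died"]).p_value)

# multivariable Cox model; backward selection would iteratively refit after
# dropping covariates with p >= 0.05
cph = CoxPHFitter()
cph.fit(df[["os_months", "died", "ecog", "child_pugh_c", "prior_locoregional"]],
        duration_col="os_months", event_col="died")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```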
Patient characteristics
We included 115 patients, who were predominantly male with Child-Pugh class A alcoholic cirrhosis with good performance status (Table 1).
Survival outcomes
A total of 83 patients (72%) died during the study period. The median OS since initiation of sorafenib was 13.4 months (95% CI 8.2-18.6).
Discussion
HCC is among the leading causes of cancer-related deaths. It primarily develops from cirrhosis, and many patients are infected with hepatitis C virus (HCV) or hepatitis B virus (HBV). Treatment with the multikinase inhibitor sorafenib has been a systemic therapy option for patients with advanced HCC since 2008. Systemic therapy has helped prolong survival after disease progression, and the clinical management of patients should target improvement of patient OS. Sorafenib therapy is recommended in guidelines as the first-line option in patients who cannot benefit from resection, transplantation, ablation or TACE and still have preserved liver function, as it significantly prolongs OS and TTP.
Sorafenib monotherapy remains the standard of care in unresectable HCC. Sorafenib has demonstrated a survival benefit in patients with unresectable HCC in two randomized, placebo-controlled, double-blind, phase III trials: SHARP and AP. The use of sorafenib significantly increased OS (10.7 months vs. 7.9 months in the SHARP study), and radiologic progression was significantly lower in the sorafenib group of patients. 8 The use of sorafenib also significantly increased OS in the Asia-Pacific study. However, the results were worse than those of the SHARP study, especially because of the different demographic characteristics of the patients, more extrahepatic spread, a greater number of hepatic tumor lesions and poorer ECOG performance status.
In GIDEON, a real-life analysis of sorafenib-treated patients, median OS was 8.6 months vs. 10.4 months in the SHARP study. Clinical outcomes of advanced HCC patients treated with sorafenib in real-life practice are better than those of other studies conducted in the Asia-Pacific region in terms of survival and tolerability. Extrahepatic spread and combination with other therapies are of predictive value for the OS of advanced HCC. Further studies are required to maximize the effect of sorafenib in combination with other modalities. 7,8 In our retrospective study, we collected and analyzed the clinical outcomes of advanced HCC patients who underwent treatment with sorafenib in a real-life clinical setting. We found that HCC patients with Child-Pugh A exhibited a significantly higher median survival. In the present study, factors predictive of OS in HCC patients treated with sorafenib include gender, extrahepatic spread, and combined other therapies. 7,8 In the Slovenian study, HCC patients treated with sorafenib had a median OS of 13.4 months, which is longer than that reported in SHARP (10.5 months) and GIDEON (Global Investigation of Therapeutic Decisions in HCC and of its treatment with sorafenib) (10.8 months).
Multivariable analysis of the Slovenian group of patients demonstrated significant associations between mortality and ECOG performance status, Child-Pugh class C and absence of prior locoregional treatment, but not baseline AFP.
There are several limitations to this retrospectively designed analysis. Being a retrospective study, it is difficult to ascertain the actual cause of death in our cohort. The population examined in our study is relatively small, which may limit statistical power, and the small population size may affect subgroup analyses. Other limitations include the reduced initial dose of sorafenib based on clinical decisions made by individual physicians and the adjustment of dosages during treatment due to intolerance. However, our results are comparable with the results of other worldwide studies.
In conclusion, careful selection of patients for sorafenib treatment is important. Treatment of HCC patients should be performed in experienced centers, where the treatment decision for each patient is made after presentation at a multidisciplinary board of experts.
Clinical Impact of after-consult clinic blood pressure: comparison with automated office blood pressure
Background It is most important to measure blood pressure (BP) accurately when treating hypertension. Recent recommendations for diagnosing hypertension clearly acknowledge that an increase in BP attributable to the "white-coat response" is frequently associated with manual BP recordings performed in community-based practice. However, there were no data on after-consult (AC) BP, which could reduce the white-coat effect. We therefore evaluated before-consult (BC) and AC routine clinic BP and research-based automated office blood pressure (AOBP). Methods The study population consisted of 82 consecutive patients with hypertension seen between April 2019 and December 2019. We measured routine clinic BP and AOBP both before and after the patients saw a doctor. Seated BP and pulse were measured each time after a rest period using an automated device, as it offers reduced potential for observer bias. AOBP was obtained as the mean of three unobserved measurements. We compared each BP parameter to identify which best represents the resting BP state. Results There was a significant difference between BC and AC systolic BP (135.37 ± 16.90 vs. 131.95 ± 16.40 mmHg, p = 0.015). However, there was no difference between BC and AC diastolic blood pressure (73.75 ± 11.85 vs. 74.42 ± 11.71 mmHg, p = 0.415). In the AOBP comparison, there were also significant differences (BC systolic AOBP vs. AC systolic AOBP, 125.17 ± 14.41 vs. 122.98 ± 14.09 mmHg, p = 0.006; BC diastolic AOBP vs. AC diastolic AOBP, 71.99 ± 10.49 vs. 70.99 ± 9.83, p = 0.038). Conclusions In our study, AC AOBP was the lowest, best representing the resting state. Although AC BP was higher than BC AOBP, it might be used as an alternative measurement to reduce the white-coat effect in routine clinical practice.
Background
The effect of blood pressure (BP) on cardiovascular disease is well established [1]. It is well known that early intensive BP control provides benefit, reducing long-term mortality and target organ damage [2]. Therefore, it is important to recognize and minimize early vascular and organ damage.
There is no doubt that the method of BP measurement is a basic and important step in the management of hypertensive patients. Hypertension guidelines precisely describe BP measurement, distinguishing conventional office BP, unattended office BP, out-of-office BP, home BP, and ambulatory BP [3,4]. Generally, BP measured in the doctor's office is still the cornerstone of BP control and hypertension treatment, and it is usually measured before the patient sees the doctor.
However, an increase in BP attributable to the "white coat response" is frequently associated with manual office BP recordings performed in community-based practice [5-7]. We therefore designed after-consult (AC) BP measurement, which should reflect a more comfortable, physically stable state of the patient. However, no study has examined whether AC BP is less variable or whether it reflects long-term prognosis.
We evaluated before-consult (BC) and AC clinic BP and research-based automated office blood pressure (AOBP).
Study population
Participants were required to meet all of the following criteria: hypertension controlled with medication, age older than 18 years, and no medication change during the previous 3 months. Patients were excluded if they met one or more of the following criteria: secondary hypertension, uncontrolled diabetes mellitus, chronic kidney disease requiring dialysis, AST/ALT more than twice the upper limit of normal, drug sensitivity, or atrial fibrillation or other uncontrolled arrhythmia. All participants provided written informed consent.
All major classes of antihypertensive agents were included in the formulary, and we did not change medications during the follow-up period as far as possible. Study investigators could also prescribe other antihypertensive medications.
Blood pressure measurement
We measured three types of BP: office BP, AOBP and 24 h ambulatory BP (ABP). For office BP, an initial single observed BP measurement with a non-invasive oscillometric system (OMRON HBP 1300, Japan) was performed after a 5-min quiet rest period, using an appropriate cuff size for the mid-arm circumference, with the patient seated in a chair, the cuff at mid-sternal level, the arm supported on a flat surface, the feet flat on the floor, and no conversation during the measurement.
AOBP was performed with another device (OMRON HEM-907XL, Japan) using the technique noted above, but with the patient entirely alone in the exam room resting quietly, and the mean of three blood-pressure measurements was reported.
Twenty-four-hour ABP was monitored using validated oscillometric arm devices (TM-2430, A&D Company, Tokyo, Japan). Measurements were performed at 15-30 min intervals for 24 h, and study participants were instructed to remain still with the forearm extended during each BP reading. Awake and night-time periods were defined as 07:00 to 22:00 and 22:01 to 06:59, respectively. ABP recordings with less than 70 % usable BP readings were excluded. All valid awake and night-time ABP readings were averaged to provide a single awake and night-time ABP value per study participant.
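A small sketch of the averaging rules just described, assuming the readings are available as a pandas DataFrame (the layout, column names and function are hypothetical illustrations):

```python
from datetime import time
import pandas as pd

def abp_summary(readings: pd.DataFrame, scheduled: int):
    """Summarize one 24 h ABP recording per the rules above.

    `readings` holds one row per valid reading with a datetime 'time'
    column plus 'sbp'/'dbp'; `scheduled` is the planned number of readings.
    """
    if len(readings) / scheduled < 0.70:    # exclude < 70% usable readings
        return None
    t = readings["time"].dt.time
    awake = readings[(t >= time(7, 0)) & (t <= time(22, 0))]   # 07:00-22:00
    night = readings[(t > time(22, 0)) | (t < time(7, 0))]     # 22:01-06:59
    return {
        "awake_sbp": awake["sbp"].mean(), "night_sbp": night["sbp"].mean(),
        "awake_dbp": awake["dbp"].mean(), "night_dbp": night["dbp"].mean(),
        "sbp_24h": readings["sbp"].mean(), "dbp_24h": readings["dbp"].mean(),
    }
```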
For office BP and AOBP, we repeated the BP measurement before and after the consultation with the same method. The BP measurement pathway is illustrated in Fig. 1.
For identifying target organ damage, we evaluated left ventricular hypertrophy, pulse wave velocity and microalbuminuria. Left ventricular hypertrophy (LVH) was defined by an increased left ventricular mass index (LVMI) on transthoracic echocardiography (LVMI > 115 g/m² in men and LVMI > 95 g/m² in women). Pulse wave velocity (PWV) was measured by a pulse waveform analyzer (VP-1000 plus, Omron, Japan). The urine albumin-to-creatinine ratio (A/C ratio) was measured in a spot urine sample.
Fig. 1 BP measurement pathway. BP: blood pressure
Statistics
Continuous variables are reported as mean ± standard deviation (SD). Frequencies are given as percentages. Differences between mean systolic and diastolic AOBP values and heart rate (HR) were assessed using paired t-tests for "white coat effect" measurements. Similarly, differences between mean systolic and diastolic AOBP and 24-hour BP values were also assessed using t-tests. We assessed agreement between BP measurements using the method of Bland and Altman, with the bias (defined as the mean value of the differences) and 95 % limits of agreement with their confidence intervals. The analyses were performed using the software R, version 3.2.2 (R Foundation for Statistical Computing, Vienna, Austria).
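The Bland-Altman quantities used here reduce to a few lines; below is a minimal numpy sketch (the study's analyses were performed in R, and the numbers in the usage example are made up for illustration):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired BP methods."""
    d = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = d.mean()                              # mean of paired differences
    sd = d.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return bias, loa

# usage with invented systolic values (mmHg): office BP vs 24 h mean ABP
bias, (lo, hi) = bland_altman([138, 125, 142, 131], [129, 121, 133, 127])
print(f"bias {bias:.1f} mmHg, LoA [{lo:.1f}, {hi:.1f}]")
```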
Results
A total of eighty-two consecutive hypertensive patients fulfilling the enrollment criteria were included in the study, 46 men and 36 women, with a mean age of 62.6 ± 13.7 years. Their clinical characteristics are shown in Tables 1 and 2.
When comparing all BP values, both systolic and diastolic AC AOBP were the lowest (p = 0.001), representing the resting state.
Bland-Altman plots comparing systolic AC BP and BC BP, and diastolic AC BP and BC BP, with the 24 h mean ABP are shown in Fig. 3A and B, respectively. The mean differences between 24 h ABP and office BP were 10.61 mmHg for BC BP and 7.61 mmHg for AC BP, respectively, and the limits of agreement for AC BP were consistently narrower than those for BC BP.
For the BP variability analysis, we calculated the difference between BC BP and AC BP, defined as delta office BP, and analyzed the correlation between delta office BP and the 24 h BP deviation. There was no significant association between delta office BP and 24 h BP variability (p = 0.389).
For further evaluation, we analyzed the correlation coefficients between 24 h ABP and BC BP, AC BP, BC AOBP, and AC AOBP; the correlations are shown in Fig. 4A and B. We defined the white coat effect as a BP difference > 20 mmHg between before- and after-consult office BP and compared clinical values, including left ventricular hypertrophy, pulse wave velocity and the urine A/C ratio, between patients with and without the white coat effect. However, there were no differences in the clinical values assessing prognostic impact, except for the urine A/C ratio.
Discussion
To the best of our knowledge, this is the first study to compare BP values measured before and after seeing a doctor, with the aim of reducing the white coat effect, and to examine any differences in mean AOBP when these readings were obtained before and after the office consultation. We also compared the office BP and AOBP values with 24 h ABP, which is generally accepted as a more sensitive risk predictor of CV events than office BP, in order to investigate any differences between AOBP and 24 h ABP values [8,9].
Our findings showed that, based on the automated BP measurement device, systolic BP was lower when readings were taken after seeing the doctor. Furthermore, AC systolic AOBP was even lower in the same situation. A mild association between AC BP and 24 h ABP values was observed, and this correlation was higher than for the other BP values. It should therefore be highlighted that AC BP measurement may be considered as an alternative for routine practice.
The overall prevalence of white coat hypertension in the general population is estimated to be approximately 10-15 %, and it amounts to 30 % in patients with increased clinic BP recordings [10]. White coat hypertension is more frequent in women, non-smokers, and patients with low clinic BP and a smaller left ventricular mass at echocardiography [11]. Although the prevalence of the white coat effect during follow-up of hypertension treatment is not exactly known, it might be similar to that of white coat hypertension. In addition, it is hard to take sufficient rest before BP measurement in South Korea because both patients and physicians are pressed for office consultation time.
(Abbreviations: BC, before consult; AC, after consult; AOBP, automated office blood pressure; ABP, ambulatory blood pressure; SBP, systolic blood pressure; DBP, diastolic blood pressure; HR, heart rate; bpm, beats per minute.)
Fig. 2 Comparison among office, automated blood pressure, and 24 h ambulatory blood pressure. Systolic BP comparison (A), diastolic BP comparison (B).
In our study, the white coat effect was present in about 9.8 % of patients when defined as a 20 mmHg difference between BC and AC systolic BP. Thus, AC BP, measured after the consultation, allows sufficient resting time and reduces the white coat effect.
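The prevalence figure follows directly from that definition; a one-function sketch with invented illustrative arrays:

```python
import numpy as np

def white_coat_effect_rate(bc_sbp, ac_sbp, threshold=20.0):
    """Share of patients whose BC minus AC systolic BP exceeds the threshold."""
    diff = np.asarray(bc_sbp, float) - np.asarray(ac_sbp, float)
    return float((diff > threshold).mean())

# e.g. 1 of 10 hypothetical patients exceeds the 20 mmHg cut-off -> 0.1
print(white_coat_effect_rate(
    [155, 140, 138, 150, 132, 145, 128, 136, 149, 141],
    [130, 136, 135, 147, 130, 141, 126, 133, 145, 138]))
```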
Recently, a study was performed aiming to reduce the white coat effect [12]. Emmanuel et al. examined the difference in AOBP readings with and without 5 min of rest prior to three readings recorded at 1-min intervals. In that study, systolic AOBP could initially be checked without any preceding rest, and normal readings could be accepted. However, when AOBP is ≥ 130 mmHg, measurements should be rechecked after 5 min of rest, which limits its use in daily routine practice.
Fig. 3 Bland-Altman plots comparing mean after-consult systolic blood pressure (A) and before-consult systolic blood pressure (B) with 24 h ambulatory blood pressure (mmHg). AC, after consult; BC, before consult. Red dashed lines, mean bias; blue dashed lines, 95 % limits of agreement.
Fig. 4 Correlation coefficient analysis among office blood pressure, automated blood pressure and 24 h ambulatory blood pressure. Systolic blood pressure (A), diastolic blood pressure (B).
We still used 24 h ABP as the reference standard. A recent meta-analysis showed that, due to significant heterogeneity, AOBP should not replace daytime ABP.
Interestingly, the AC systolic AOBP value was lower than the 24 h ABP. Our findings are inconsistent with those of others [13,14], in which clinic BP values were higher than daytime ABP values in the higher range of the BP distribution. We suggest that even AOBP might carry a white coat effect and that AC AOBP could reflect the most comfortable resting BP state.
This study has several limitations. First, the entrance of study personnel into the examination room before recording before- or after-consult BP with 5 min of rest cannot completely eliminate noisy circumstances; however, we believe this situation reflects real clinical practice more closely. Second, the relatively small size of the study population may limit the generalizability of the results. Third, we could not assess long-term prognosis because we concentrated on the method of BP measurement; adverse events will be identified in the near future.
Conclusions
In our study, AC AOBP was the lowest, best representing the resting state. Overall, AC BP, including routine BP and AOBP, was lower than BC BP. Based on the present results, although AC BP was higher than BC AOBP, it might be used as an alternative measurement in routine clinical practice to reduce the white coat effect.
Second generation HIV surveillance in Pakistan: policy challenges and opportunities
From 2004 to 2011, the Canada-Pakistan HIV/AIDS Surveillance Project (HASP) worked with government and non-government partners in Pakistan to design and implement an HIV second generation surveillance (SGS) system. Insights into the development of scalable, cost-effective surveillance methodologies, their implementation, and the use of data for HIV prevention and human rights were gained over the course of HASP. An ideal SGS system would be affordable, able to be implemented independently by local partners and produce data that could be readily applied in policy and programmes. Flexibility in design and implementation is important to ensure that any SGS system is responsive to information needs, political changes and changes in key population dynamics and HIV epidemics. HASP's mapping methodology is innovative and widely accepted as best practice, but the sustainability of the SGS system it developed is a challenge.
HIV surveillance is an essential part of monitoring and evaluation policy frameworks. Such frameworks ensure that methods provide the necessary information at the right frequency, and are ethical, acceptable, scalable and cost effective. 1 First generation HIV surveillance monitored prevalence of infection and disease reports but, largely because of HIV's unique epidemiology, which can lead to multiple epidemics, was not suitable for all epidemic types and tended to exclude information such as risk behaviours. 2 Thus the SGS approach, which tailors surveillance methods to different epidemic typologies, was developed. WHO and UNAIDS define SGS as the '…regular, systematic collection, analysis and interpretation of information for use in tracking and describing changes in the HIV/AIDS epidemic over time'. 3 They outline three epidemic types - generalised, concentrated and low level - and surveillance approaches to each. WHO/UNAIDS define a concentrated epidemic as one in which prevalence is less than 1% in the general population but 5% or more among sub-populations at higher risk.
Pakistan's HIV epidemic is characterised by high HIV prevalence among those who inject drugs 4 as well as significant rates among hijra (transgender), male and female sex workers, but few infections in the wider population. In concentrated epidemics such as Pakistan's, behavioural information is particularly important as it can help to predict the extent to which sub-populations at the highest risk engage in risk behaviours with individuals outside these sub-populations, thus potentially leading to epidemic transition.
HASP generated rich experiences about how to design, conduct and translate knowledge from SGS in a concentrated epidemic. They may be applicable to a number of HIV surveillance and wider public health policy arenas. Four such arenas will be discussed in this paper: developing a scalable methodology; policy related to operationalising HASP's model; the ability of the SGS system to impact HIV prevention, treatment and care and support policies and programmes; and SGS and human rights.
DEVELOPING METHODOLOGIES FOR LARGE-SCALE HIV SURVEILLANCE IN PAKISTAN
The HASP SGS system was designed to monitor the HIV epidemics in Pakistan and provide data for policy makers to use when reporting on national HIV targets, while also providing data useful for the development and evaluation of HIV prevention policy and programmes. Specifically, the locations and sizes of populations of injection drug users and male, hijra and female sex workers were estimated, after which data about behavioural risks, vulnerabilities and HIV prevalence were collected among them. This was conducted in four separate rounds and in many cities across Pakistan, allowing for comparisons and the analysis of temporal trends (box 1).
In order to develop the most appropriate and effective SGS system for Pakistan, a number of decisions had to be made about inclusion criteria, sampling, survey instruments and funding. First, decisions about the inclusion of particular key populations and sentinel sites were made. Cities with evidence of an HIV epidemic were selected as sentinel sites 5 and key populations with evidence or plausibility of HIV transmission (injection drug users and male, hijra and female sex workers) were included. Agreement was reached after considerable and ongoing debate. Some stakeholders continue to suggest that Pakistan's SGS system should include men who have sex with men (MSM); HASP epidemiologists contend that the trends among male and transgender sex workers reflect and are an early warning for the trend among MSM. There was frequent pressure to include additional cities, and HASP did accommodate certain well-justified requests, such as to provide baseline data for new HIV prevention programmes. Although the sentinel site model was retained, flexibility was important to maximise the impact of the SGS. The mapping methodology the HASP team developed has been recognised as best practice and incorporated by WHO advice for the region. 6 The timing was opportune, as it coincided with similar efforts in other countries with concentrated epidemics, thus allowing HASP to contribute its mapping methodology to countries with similar epidemics and contexts.
HASP's mapping methodology first identifies locations where activities that increase the risk of HIV transmission occur and then estimates the number of key population members engaging in such activities. Initial mapping collects secondary data from key informants, which is then validated through interviews with and observation of key population members. As these populations are often marginalised and/or mobile, surveillance teams need to build trust and work closely with the pertinent populations while maintaining rigour and working rapidly to avoid double counting. The methodology efficiently estimates the size of key populations without the need for large surveys and eliminates the need to rely on extrapolation from a few studies of varied quality, although it is more resource intensive and costly compared with statistical extrapolation. 7 Training field teams to conduct the mapping data collection is time-consuming but, given the concentrated nature of Pakistan's epidemic, the focus of mapping is narrow and therefore feasible.
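As a deliberately simplified sketch of that two-phase logic (the actual HASP field protocol is far richer, and the data structures and function below are hypothetical), the aggregation step might look like:

```python
from dataclasses import dataclass
from statistics import mean
from typing import Optional

@dataclass
class Spot:
    """One mapped location where risk activities occur (hypothetical record)."""
    spot_id: str
    informant_estimates: list       # counts reported by key informants (phase 1)
    validated_count: Optional[float] = None  # field-validated count (phase 2)

def estimate_population(spots):
    """Sum per-spot counts, preferring validated field counts; spots are
    assumed de-duplicated upstream to avoid double counting mobile members."""
    total = 0.0
    for s in spots:
        if s.validated_count is not None:
            total += s.validated_count
        elif s.informant_estimates:
            total += mean(s.informant_estimates)
    return total

# usage: two validated spots and one with only key-informant estimates
spots = [Spot("A", [20, 26], 24), Spot("B", [12, 10], 11), Spot("C", [8, 6])]
print(estimate_population(spots))  # 24 + 11 + 7 = 42.0
```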
There was much debate over whether sampling should be designed to meet national- or provincial-level needs. Data from smaller and simpler samples could have been obtained quickly at low cost to provide national-level estimates, while avoiding the security risks to which Pakistan is prone. However, health is administered at the provincial level, and therefore stratified samples from multiple locations within the provinces have provided useful information for health programme planning. A flexible approach enabled the project to meet both national and provincial needs using a range of sampling strategies.
The behavioural surveys provided rich data sets by including questions capturing demographics, education, migration, income, housing, detailed risk behaviours, and HIV knowledge and risk reduction behaviours, including indicators identified by the United Nations General Assembly Special Session (UNGASS) on HIV. The scope was intended to enable predictions about the trajectory of the HIV epidemics and inform policy and programme planning. The development of the questionnaire was a participatory process with input from experts in HIV epidemiology and prevention, policy makers, NGO service providers, and members of key population communities; it was meant to meet all of their needs. This increased the complexity of questionnaire development, and limiting its length was difficult. Indeed, the questionnaire grew significantly over the duration of HASP.
Behavioural and biological surveillance was conducted on a close-to-annual basis in order to capture shifts in the epidemics that occur quickly. The brief interval between surveillance rounds did not always leave sufficient time for provinces and NGOs to plan and implement HIV prevention programmes, let alone time for these programmes to have effects. Data generated through HASP have the potential to be used to improve HIV prevention programming, but adequate time must be allocated for planning and implementation. While the richness of HASP's data combined with its comparability between provinces was one of its strengths, inadequate capacity of key provincial and NGO staff posed a challenge for programme development and implementation.
The decision was made to develop a standard questionnaire for each key population and specify the data entry and analysis software to use. This decision was made because experiences in early rounds of HASP SGS demonstrated that autonomy over the process of data entry and analysis did not ensure data quality or timely analysis. Data entry took place at the provincial level, while data cleaning and analysis was carried out centrally by HASP to ensure consistency and quality. It can be a challenge to require the use of standardised methodologies while also promoting local collaboration and uptake of surveillance results. Even when there is consensus, policy frameworks and guidelines may need to be adapted to local needs and regulated. One option is to develop minimum standards for mapping and IBBS to allow for shorter, less costly but adequately rigorous studies to be conducted locally, which may be compared to the results of larger, periodic rounds of surveillance. An alternative option could be an even leaner national SGS methodology complemented by smaller local surveys focused on specific local data needs.
To effectively reach marginalised populations in Pakistan, they must be sought out in their communities, which produced a logistical challenge for HIV testing. A relatively new application of an older technology that increased the feasibility of field-based surveillance was the use of dried blood spot (DBS) technology to collect a capillary blood sample for HIV testing. 8 While it is now the specimen collection method of choice by WHO/UNAIDS for a variety of surveillance purposes, this use of DBS was relatively new when the project began. It allowed for non-medical personnel to conduct biological surveillance activities in non-clinical sites, as well as collection of sufficient samples for quality assurance and a specimen repository for future analysis and research. However, as no validated laboratory test exists to identify active syphilis infection using DBS specimens, syphilis testing was excluded. There was significant discussion regarding this decision, but ultimately agreement that the challenges of whole blood data collection (to permit testing for active syphilis) outweighed the potential benefits of collecting anonymous syphilis data among the key populations HASP was surveying. Mid-project political events led to uncertainty regarding the government department responsible for storage, integrity and appropriate sharing of SGS data. The major policy lesson is that comprehensive data ownership and intellectual property plans should be devised and agreed upon at the beginning of any similar initiative, along with mechanisms for updating them should organisational changes occur.
Countries are encouraged to develop strategies for resource mobilisation, incorporating internal and donor funding sources. 1 In Pakistan there is heavy reliance on donor funding, and indeed this is how the SGS was funded. Financing a surveillance system through international donors is risky, as funding may be unpredictable, and moreover, donors may request changes in the surveillance methodologies and inclusion criteria. Gaps in funding and different methodologies may limit temporal trend analysis and geographical comparisons. Moreover, with global cuts in both development assistance and HIV prevention, HIV surveillance funding may become more difficult to secure.
Best practices in HIV SGS methodologies change over time as new methods are developed and HIV epidemics shift. To keep surveillance methods up to date, they need to be revisited regularly. HASP accomplished this by developing a system and guidelines for mapping and IBBS that were flexible and that have contributed to policy frameworks and guidelines on HIV monitoring and evaluation in Pakistan and regionally.
OPERATIONALISING SGS: DEVELOPING SURVEILLANCE CAPACITY
In addition to designing and implementing national SGS, HASP was intended to build government and non-governmental partners' capacity in the management of SGS systems and use of surveillance data to inform policy and programmes (box 2). Therefore, the principles guiding the operation of the SGS included building capacity in addition to feasibility and scalability. In addition, the funder required that partner NGOs and research organisations, selected to conduct the surveillance field work, be chosen through an annual competitive process. Operationalising SGS at a national scale through competitive procurement, while building sustainable capacity, proved to be a challenge that provided many lessons.
The project's capacity-building strategy with government partners involved HASP staff working with national and provincial counterpart staff. Government counterparts participated in SGS contracting, training of contractors and monitoring of field work, ensuring compliance with the protocol. HASP also proactively engaged its government partners in the interpretation and application of data. The National and four Provincial AIDS Control Programmes did demonstrate increased, albeit varying, engagement in the process of managing the SGS system. However, uncertainty in Pakistan's HIV governance structure, funding limitations and human resource changes meant that continuity among government counterparts was sub-optimal. The periodic nature of surveillance posed a challenge to maintaining engagement and capacity that had been built, and HASP staff may have been perceived as a resource available to manage the SGS system under government direction, rather than a resource to transfer knowledge and build capacity. While this was a challenge, it also presents an opportunity to re-consider the policies that guide project design, such as whether highly technical projects such as HASP should rethink capacity development approaches when resources, in particular human resources, are constrained. One option used by some donors is to allow funding to support salaries. 9 Building capacity in resource mobilisation to ensure predictable financing is another. In addition, if the likelihood of sustained technical expertise within a public sector organisation is low, alternative approaches to project design could include building links with agencies that have expertise while also developing public sector skills in managing relationships with technical organisations.
HASP combined formal training on the SGS methodology and ethics with accompaniment to build the surveillance capacity of the NGOs and research organisations successful in the tenders to perform the field work of the SGS system. Accompaniment provides experiential learning opportunities along with mentoring and coaching techniques. It is recognised by development professionals as more likely to be sustainable than training alone. 10 Competitive, transparent, merit-based tendering to select NGO and research institution partners was meant to counter corruption, which is documented to be high within Pakistan's health sector. 11 In a review of corruption in HIV programmes, Transparency International identified government procurement of HIV services as an area in which corruption is likely to occur. 12 13 However, tendering should be accompanied by other recognised anti-corruption measures such as strategies to increase civil society governance capacity, enhance political accountability, restrain power, and reform the public sector. Although these actions were beyond HASP's mandate, when issues related to transparency in procurement were encountered they were addressed as well as possible. Some organisations which competed through this process may have been more motivated by the funding opportunity than by the vision to implement a high quality SGS. Therefore, while tendering does comply with development principles and policy with respect to prevention of corruption, 12 its simultaneous use as a capacity development strategy may not have been the optimal approach. Research on corruption prevention policy and measures is now widely recognised as being needed for HIV programmes; monitoring and evaluation, including SGS, is no exception. A valid question is whether the time expended on tendering and contract management affected the project's ability to operationalise SGS and build capacity in it. In one round, HASP compared the efficiency and rigour of SGS when it directly engaged field research staff with when it contracted local organisations. Using the direct approach, there were fewer difficulties in assuring rigour and protocol adherence. The training model, with some adaptations, was effective for both. Theoretically, directly hiring and supervising field researchers might have allowed for more innovation in methodologies. However, training, monitoring and overseeing SGS overwhelmed HASP staff, and the direct approach would therefore require a larger staff if used at scale.
Tendering issues decreased the attention HASP staff were able to provide to building quality SGS capacity, highlighting the need to find optimal strategies for building broad-based SGS capacity while also discouraging corruption. Working with one or two research organisations may have overcome some of these challenges and would have reduced the transaction costs of tendering. Overall, finding the right capacity development strategy while scaling up SGS through competitive tendering was a challenge.

Box 2 Options and approaches for operationalising second generation surveillance
▸ Build government capacity to manage second generation surveillance
▸ Develop approaches for capacity building of non-governmental organisation and research partners
▸ Contract data collection to non-governmental organisations and research institutes versus direct delivery.
USING SGS DATA TO INFORM HIV PREVENTION POLICY AND PRIORITIES
One of the goals of HASP was to facilitate the uptake of surveillance data for HIV prevention policy development and programme design. HASP SGS data informed two revisions of HIV policy in the National Strategic Framework and several funding applications, evidence that HASP influenced and created opportunities for better HIV policy and programmes. However, the data have not been used to their full potential. Although often underfunded, the important role of civil society in delivering HIV prevention services, in particular for key populations, has been well established. 14 15 HASP planned to work with NGOs that provide HIV services to key populations at risk of HIV to help these organisations use SGS data to inform their programmes. However, cuts in NGO funding precluded this initiative, a regrettable missed opportunity. Also, the skill set required to build SGS capacity is different from that needed to develop HIV programming capacity within NGOs. In the future, it may be useful to have separate mechanisms and teams working with government and NGOs, with opportunities to bring these groups together for cross-learning. HIV programmes, including monitoring, evaluation and SGS, are increasingly being encouraged to integrate with wider health systems 15 to enhance the affordability of HIV responses and enable adequate funding for them. 16 Integration could present a potential opportunity to share resources and expertise in the longer term, and globally health systems are shifting towards integration. However, there is a risk that targeted HIV prevention could be subsumed within programmes for the wider population. Given the likelihood that HIV prevention will be underfunded in the foreseeable future and the crucial importance of targeted prevention, the best option may be to engage productively in integration planning so the challenges to HIV programmes and SGS are considered and addressed.
HUMAN RIGHTS AND HIV SURVEILLANCE
Pakistan is a signatory to the Universal Declaration of Human Rights, the Convention on the Rights of the Child and the Convention to Eliminate All Forms of Discrimination Against Women. However, Pakistan's progress in implementing these principles has been limited. Colonial-era legislation that remains in force, as well as Islamic law, sanctions criminal penalties for non-marital sex, sodomy and commercial sex work. 17 According to the most recent Human Development Index, women's development is only 70% of men's. 18 The key populations among which HASP surveillance was conducted have few legal or societal protections for their health, safety, confidentiality and other rights, and this was considered in the design of the SGS methodologies.
Consistent with international standards, HASP used unlinked anonymous testing (UAT). 19 Stakeholders in Pakistan encouraged HASP to link results in order to identify positive individuals and theoretically refer them to services. However, HASP technical advisors provided several rationales for using UAT instead of linked testing, including the need to avoid recruitment or selection bias skewing the results, and to prevent the purpose of surveillance being confused with service provision (indeed HASP surveyed only a small proportion of the key populations). Moreover, HASP managers felt that in Pakistan's highly politicised environment, combined with an often aggressive legal system, keeping participant names and HIV status private would have been a challenge.
Surveillance participants received information about where to obtain HIV testing and counselling to protect their rights to information and health. However, in some of these cities services were not locally available and participants needed to be referred to services some distance away.
To protect participants' rights to safety and confidentiality, sensitisation sessions were held early on with local police, municipal authorities, religious leaders, NGO partners and community gatekeepers. Organisations to which participants could be referred for HIV counselling and testing were identified. Surveillance methodologies and security procedures were designed to ensure the safety and confidentiality of participants, through consultation with key population members. Meaningful engagement means sharing decision making, which is difficult in a research context, and consultation was not always able to give key populations true influence over the methodologies. 20 Generally, over the project's life there was increased recognition of the ethical challenges in HIV surveillance among key populations at higher risk of HIV, again underlining the need for frequent reassessment and renewal of rights and ethical considerations and measures. Indeed, in the final year of HASP, guidelines to help SGS staff and partners understand and address gender inequality among sexual minorities were developed, informed by the patterns of vulnerability among key populations in Pakistan.
CONCLUSIONS
Through HASP, many lessons were learned about the design and implementation of SGS. It is important that surveillance be designed with potential for scale up and long-term sustainability in mind. An ideal system would be affordable, able to be implemented independently by local partners, and produce data that could be readily applied in policy and programmes. The need for flexibility in design and implementation resonated throughout the experience of HASP and is important to ensure that any SGS system is responsive to information needs, political changes, and changes in the key population dynamics and HIV epidemics. HASP's mapping methodology is innovative and widely accepted as best practice; institutionalising the SGS system the project developed at all stages of the research and knowledge translation cycle has been more challenging. However, HASP has produced a wealth of knowledge about key populations and HIV in Pakistan, and there is great potential for it to be translated into effective HIV prevention policy and programmes. While no new data collection is planned, the Government of Pakistan continues to collaborate with the University of Manitoba to refine analyses and better use the existing data to improve programming.
Comparison of the Flow Rate and Speed of Vehicles on a Representative Road Section before and after the Implementation of Measures in Connection with COVID-19
Transport is an inseparable part of the life of all citizens. At the beginning of the year, the COVID-19 pandemic hit the world. Individual states have taken strict measures to prevent its spread among the population. Due to this fact, the government of the Slovak Republic has issued restrictions on the closure of public spaces (schools, shopping centres, restaurants, bars, etc.). These restrictions have had an impact not only on the economic activity of the population but also on their mobility in the form of reduced traffic. This is due to the drastically reduced mobility associated with the coronavirus, such as commuting trips and extremely limited leisure opportunities. Reduced mobility of the population (reduction of the number of vehicles in the traffic flow) can bring positive effects not only on overloaded road network (increased vehicle speed, lower flow) but also on the environment (reduction of noise, emissions, etc.). This article aims at finding out what effect the measures taken had on the quality of traffic flow. The quality of movement was examined in the form of the flow and speed of vehicles on one of the busiest first-class road sections. Descriptive statistics were used to examine the state of the restrictions. The results show that after the introduction of measures against the spread of coronavirus, the intensity and speed of vehicles in the measured section decreased.
Introduction
Transportation is a complicated stochastic process. Nowadays, it is an inseparable part of the life of each of us, whether when travelling to work, for relaxation, for the transportation of goods, etc. The rising living standards of the population, as well as favourable conditions for travelling across countries, are some of the key indicators reflected in the increase in demand for transport. The consumer lifestyle has also affected our society. The social trend in the automotive industry is particularly obvious. Previously, car ownership in our region was a sign of luxury, wealth and higher social status. Today, many more people can afford a car. The fact that buying a vehicle is now much easier than it was some years ago does not only affect the environment in the form of pollution (emissions), noise and vibration. Above all, it increases the number of vehicles on the roads, which often causes congestion on the road network [1,2].
Due to increasing flows, especially on highways and first-class roads, traffic moves away from a free traffic flow, and the share of traffic with a degree of loading approaching 1 increases [3][4][5]. The time between the origin and the destination of a trip increases significantly. Travel time is considered one of the main indicators of road network permeability [6].
Traffic Flow
In theory, traffic flow can be understood as a flow composed of different types of vehicles that have specific attributes. These attributes distinguish it from similar phenomena (flows), known for example in physics. Therefore, it is necessary to examine it separately. Several factors affect the traffic flow. On the other hand, the traffic flow influences its surroundings through its behaviour, both in qualitative and quantitative ways [31,32]. The traffic flow is understood as the unity of temporal and spatial characteristics but also of the transport and motoric characteristics of vehicles on the road. The movement of vehicles in traffic flow is affected by other vehicles in the traffic and is therefore monitored as a whole and not as the movement of an individual vehicle [33][34][35].
Basic Characteristics of the Traffic Flow
Three interdependent variables are considered as these characteristics, namely speed v(d, t), flow q(d, t) and density k(d, t). All of these variables depend on place and time. The flow expresses the quantity of the traffic stream, while speed and fluency express its quality under the given conditions. The research and description of the relationships between these three variables is the basis of traffic flow theory [33,35,36].
Speed
The speed v depends directly on the distance d and inversely on the time t. It is most often stated in the basic units of the SI (The International System of Units), i.e., in m/s or km/h [37]:

v = d/t (1)

where:
• v is the speed of the traffic flow [km/h],
• d is the distance of the monitored section of road [km],
• t is the time [s].
Flow
Flow is the most important characteristic of traffic flow, because the largest flow that a road can transmit is the capacity of that road. We define the flow rate as the number of vehicles passing a certain point in a given time period, in one or both directions [33]:

q = N/t (2)

where:
• N is the number of vehicles [veh].
Density
Density is the number of vehicles situated on a unit length of a roadway at a given time. At low density, vehicles can move freely and drivers can choose their own speed. On the contrary, at high density the driver is influenced by other road users and congestion occurs [33]:

k = N/l (3)

where:
• l is the length of the monitored section of road [km].
Basic Equation between Traffic Flow Characteristics
There is a connection between the basic characteristics, and it is given by the continuity equation:

q = k · v (4)

provided that these characteristics were obtained by spatial-temporal observation. Empirically, the natural dependence of speed on density is verified: speed is at its maximum at the minimum density, and density is at its maximum at zero speed. It follows that density is also dependent on the flow [38].
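As a quick numeric check of these relationships, the sketch below (in Python, with hypothetical detector values rather than survey data) derives speed, flow and density for a single 15-minute interval and verifies that the continuity equation holds.

```python
# Hypothetical detector values for one 15-minute interval; not survey data.
N = 60                  # vehicles counted in the interval [veh]
t = 0.25                # length of the interval [h]
l = 1.0                 # length of the monitored section [km]
travel_time = l / 80.0  # time an average vehicle needs for the section [h]

v = l / travel_time     # speed, v = d/t  -> 80 km/h
q = N / t               # flow,  q = N/t  -> 240 veh/h
k = q / v               # density from the continuity equation q = k*v -> 3 veh/km

assert abs(q - k * v) < 1e-9  # the continuity equation holds
print(f"v = {v:.0f} km/h, q = {q:.0f} veh/h, k = {k:.1f} veh/km")
```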
The principles of descriptive statistics are most often used to evaluate the measured data from traffic surveys. Specifically, these are characteristics or measures of position. The most used characteristics of the position of one-dimensional distributions are the mean value (arithmetic mean), median, mode and quantiles [39,40].
• The mean value is the best-known characteristic of position; it describes the place on the numerical axis around which the values of a random variable fluctuate randomly. It is also referred to as the expected value or mathematical expectation:

x̄ = (1/n) · Σ_{i=1}^{n} x_i (5)

where:
• x is the random variable,
• n is the number of variables.
• The median is the value that divides a series of results arranged in ascending order into two equally numerous halves. For grouped data it is calculated as:

x̃ = a + h · (0.5·n − Σ_{i=1}^{r−1} f_i) / f_x (6)

where:
• a is the lower limit of the median class,
• h is the range of the median class,
• Σ_{i=1}^{r−1} f_i is the cumulative frequency preceding the median class,
• f_x is the frequency of the median class.
• The mode is the most frequently occurring value in the statistical data set.
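A short sketch of these position characteristics applied to hypothetical 15-minute flow counts, using Python's standard statistics module; the counts are illustrative, not survey values:

```python
from statistics import mean, median, mode

flows = [28, 31, 26, 28, 35, 22, 28, 30, 27, 33]  # veh/15 min, illustrative

print(f"mean   = {mean(flows):.1f} veh/15 min")   # arithmetic mean
print(f"median = {median(flows):.1f} veh/15 min")  # middle value
print(f"mode   = {mode(flows)} veh/15 min")        # most frequent value
```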
Analysis of the Addressed Section of the Road I/11
Traffic flow monitoring was performed on the first-class road I/11, which is part of the European road E75. The first-class road I/11 connects northern and southern Europe [41]. It enters Slovakia via Svrčinovec border crossing and leads to the town of Žilina with a length of 36.8 km. It is relatively busy not only with individual passenger transport but also with freight transport, as it forms a significant transit junction to two neighbouring states. Long traffic jams (sometimes more than 10 km long) are an everyday part of this section of the road [42][43][44]. Traffic flow monitoring was performed at two points of the road in the Kysucké Nové Mesto district ( Figure 1). This section was chosen due to the high volume of traffic. The intersection near Kysucké Nové Mesto plays a key role in the quality of the traffic flow. A few years ago, this intersection was changed from a roundabout to an intelligent light-controlled intersection.
A large number of people worked from home (home office) during the strict restrictions related to the spread of coronavirus. This has reduced the number of passengers in public transport, but also the number of vehicles on the road network. In public transport vehicles, the number of seats had been reduced and it had been and still is obligatory to have face protection. All cultural and sports events have been cancelled. Some sports matches were held but without the presence of spectators. This also had an impact on the number of vehicles on the roads and the number of passengers in public transport vehicles. In terms of reduced demand for transport, several connections in public transport were cancelled and the demand for freight transport was also reduced. Unfortunately, specific numbers are not yet available. During strict measures, state borders were closed, and it was not possible to travel abroad. However, freight vehicles were allowed to cross the border, but drivers were tested to see if they were infected. The survey was carried out online and it was necessary to monitor the individual measured values during the day and record them at 15-minute intervals. Subsequently, graphs and tables were made.
Traffic Data Collection
The monitored road section is one of the main road sections. Traffic detectors are located in two places (points 1 and 2, Figure 1), which record the average speed and flow (Figure 2) of the vehicles at 15-minute intervals. The marked points in the picture are 6.5 km apart. Point 1 is 3.8 km from junction A and point 2 is 2.7 km away.
This traffic information is available 24 h a day and is processed by the National Traffic Information System. Then, they are published on the webpage odoprave.info. The National Traffic Information System (NSDI) is a comprehensive system environment for the obtaining, processing, provision, publication and distribution of traffic information and data on the current traffic situation on the whole road network in Slovakia [34,45]. The webpage provides comprehensive information on the current traffic situation, in particular on road traffic restrictions, traffic accidents and driving conditions. Records from this online traffic survey were used for analysis and subsequent comparison [31,45].
Evaluation of Online Traffic Surveys
The surveys aimed to obtain information about traffic on the monitored section, then compare and evaluate it. Two traffic surveys were carried out. Both surveys lasted 12 h, from 6:00 to 18:00. The maximum speed at the monitoring point was 90 km/h. The first traffic survey was carried out on the 5th of March 2020, before the introduction of measures against the spread of coronavirus. The second was carried out four weeks later, on the 2nd of April 2020. Clear to partly cloudy weather prevailed during both surveys. The first survey was carried out spontaneously to obtain information about the traffic situation in the monitored section. However, after the introduction of the restrictions, a second survey was deliberately carried out and compared with the first one. The green columns in the figures represent the peak hour.
The First Traffic Survey (5 March 2020)
The total number of monitored vehicles in 12 h at point 1 was 2906 vehicles in both directions, which represents an average of 242 vehicles per hour. The following figures show the daily variation of the flow and speed of vehicles at point 1 for both directions (Figure 3).
A total of 4527 vehicles passed point 2 during the survey (377 veh/h). The following figures show the flow and speed of vehicles for both directions at this point. In the first case (Figure 5), the peak hour was from 6:45 to 7:45 (289 vehicles). The maximum number of vehicles in a 15-minute interval was between 7:15 and 7:30 (86 vehicles). The speed at the monitored point was roughly steady until 14:15, after which it decreased. In the opposite direction (Figure 6), the peak hour was between 8:15 and 9:15 (245 vehicles). The speed at the monitored point did not change significantly. The maximum number of vehicles in a 15-minute interval was from 6:15 to 6:30 (92 vehicles).
Result and Discussion
The previous figures show the recorded values at monitored points 1 and 2 separately for both days of the survey. Figures 11-16 show a comparison of the recorded values at the monitored points during 5 March 2020 and 2 April 2020. In fact, this is a graphical comparison of the recorded intensity and speed at points 1 and 2 separately, according to the direction of the traffic flow; it is a combination of the histograms in Figures 3-10.
Comparison of the Flow and Speed at Monitored Points
In the first measurement (Figure 11), the flow reached a maximum value of 75 veh/15 min. However, in the second measurement, its maximum value was 20% lower (60 veh/15 min). In the first measurement, the speed of the vehicles fluctuated during the day. During the morning, the lowest speed was 22 km/h. This was caused by the regular traffic jams that form in front of the crossroads near Kysucké Nové Mesto. During the second measurement, due to the lower flow of vehicles, the speed did not change significantly; the highest recorded value was 79 km/h, whereas in the first survey the maximum speed was 73 km/h.
In the opposite direction (Figure 12), the decrease in the flow is also clearly visible in the second measurement. The maximum flow was 41 veh/15 min. It was approximately 30.5% higher (59 veh/15 min) during the first measurement. The speed did not change significantly during the second measurement. However, the difference in maximum speed between measurements is around 17%. In the second measurement, it reached a value of 84 km/h.
In conclusion, the flow decreased after the introduction of measures against the spread of coronavirus. The same statement can be made about the values achieved at the second monitored point.
The flow measured during the first survey (blue colour) reached higher values, as can be seen from the graphical processing. The maximum flow in the second measurement was reached in the interval 8:30-8:45 (70 veh/15 min, 2 April). In the first measurement, the maximum flow was almost 18% higher. The speed in both cases was around 80 km/h. However, the speed in the first measurement slowed down significantly after 14:00, which was caused by the formation of a traffic jam in front of the intersection.
In the opposite direction, the highest value of flow was 92 veh/15 min, recorded in the time interval 6:15-6:30. In the second survey, the maximum flow was 30% lower (64 veh/15 min). The speed of the vehicles in both measurements was around 80 km/h.
Figures 15 and 16 show the graphical course of the flow and speed recorded during the surveys at the monitored points for both lanes. It can be seen from the figures that the flow at both monitored points reached higher values during the first measurement. Following the introduction of measures against the spread of coronavirus, the flow decreased. On the contrary, due to the decrease in the flow, the speed of vehicles increased.
All of the above comparisons were processed in tables using descriptive statistics. For the flow and speed, the basic characteristics of position (mean value, median and mode) were calculated.
The final values and their comparison are shown in the following tables. Table 1 shows the average flow for both measurements and their subsequent comparison and evaluation. The average flow value in the first measurement at point 1 reached 29 veh/15 min; the median was 26 veh/15 min. After the introduction of measures against the spread of coronavirus, the second survey showed that the average flow value increased by one: the average and the median reached the same value, 30 veh/15 min. Only in this case did the flow not decrease; on the contrary, it increased by 3.87%. This may be because, in the first measurement, more intervals were recorded with lower flow but large variance, whereas in the second survey most of the intervals reached lower values with smaller variance. In contrast to the average values, the mode reached a lower value in the second measurement. The most frequently recurring flow value for the 15-min interval was 28 veh/15 min in the first measurement and 15 veh/15 min in the second measurement.
In the other directions, the average flow of vehicles decreased after the introduction of restrictions. In relative terms, this decrease ranged from 24% to 40%.
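The relative changes quoted here and in the tables follow the usual percentage-change formula; a minimal sketch with illustrative averages (not the actual Table 1 values):

```python
def relative_change(before: float, after: float) -> float:
    """Percentage change of `after` relative to `before`."""
    return (after - before) / before * 100.0

avg_flow_before = 29.0  # veh/15 min, first survey (illustrative)
avg_flow_after = 22.0   # veh/15 min, second survey (illustrative)
print(f"{relative_change(avg_flow_before, avg_flow_after):+.1f} %")  # -24.1 %
```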
Table 2 shows the average speed values for the individual directions. The average speed of vehicles increased, which was caused by the decrease in flow: the number of vehicles on the road decreased, which led to an increase in the quality of the traffic flow in the form of increased speed. The relative increase in the average speed between the surveys is shown in Table 2. The highest increase in speed was found in the direction Kysucký Lieskovec-Kysucké Nové Mesto, up to 42.51%. The highest average speed during the first measurement was 80 km/h and during the second measurement 84 km/h, in the direction Kysucké Nové Mesto-Budatín. The average flow of vehicles in the first survey reached 61 veh/15 min at point 1. In the second survey, it reached 49 veh/15 min, a decrease of about 19%. At point 2, there was a decrease of almost 26% between the two surveys. The mode of the flow in the first measurement reached 68 veh/15 min and in the second measurement it decreased to 46 veh/15 min. Due to the decrease in the intensity of vehicles at the monitored points, the average speed increased on average by 12.38% at point 2, and by as much as 22.25% at point 1.
The following figures (Figures 17 and 18) show the graphical course of the flow and speed between 2019 and 2020, specifically on 2 April, recorded at the monitored points for both lanes, and a comparison was made. The calculated characteristics are summarised in Table 4, as in the previous case.
The average value of the traffic flow in 2019 reached 158 veh/15 min at point 1 and 186 veh/15 min at point 2. The last comparison shows a huge decrease in the flow of vehicles in 2020 compared to 2019: almost 70% fewer vehicles in 2020 than on the same day in 2019. However, the average speed at point 2 did not show a very large difference (9.23%); on the contrary, at point 1 it increased by more than 20%.
Due to the fact that the fluency of the traffic flow increased (increased speed and a decrease in the flow of vehicles), it is possible to state a positive effect of the restrictions on the quality of movement at the monitored points. However, the crisis has also shown that governments can take far-reaching measures that also have the support of the people. This is important: the people are the ones who experience the benefits of clean air and the disappearance of congestion [46].
Already in the graphical comparison of the surveys, a decrease in the flow of vehicles and an increase in the speed of the traffic flow can be seen. Using descriptive statistics, the position characteristics of the monitored quantities were calculated and compared with each other. These comparisons were made in two ways: separately for each direction and together for both directions. The analysis and comparison of the individual directions showed that in one case the number of vehicles increased on average by 3.87%. In the other cases, there was a strong decrease in the average flow of vehicles, in the range of 23.97% to 39.28%. The highest decrease (39.28%) was found in the direction Kysucké Nové Mesto-Kysucký Lieskovec (point 1). In our case, however, the decrease concerns the number of vehicles on a single road. For example, when comparing 2019 and 2020, the decrease in traffic flow reached almost 70% at both points, similar to [15,22,23,25].
Traffic flow has decreased in cities around the world; in some places it dropped by more than 80% [47]. In China, for example, there was a dramatic reduction in road traffic during the control period. The flow of commercial vehicles and buses in the Beijing-Tianjin-Hebei region and its surrounding areas decreased by 77% and 39%, respectively, during the control period [25]. In the UK, data show that motor traffic dropped by 73% on 29 March [23] compared with pre-outbreak levels; the analysis shows the number of road miles travelled had not been this low since 1955. Bucsky states that mobility was severely reduced, at least by 51% and at most by 64%, with a middle estimate of a 57% reduction in Budapest for the second half of March [48]. In our study, we did not look at the reduction of traffic throughout the Slovak Republic but focused on one road. Therefore, it can be assumed that the overall decrease in the flow of vehicles in road transport could be even lower, as reported by other studies.
The decrease in the average flow of vehicles caused an increase in the average speed in the range of 4.8-42.5%. It is interesting that the largest increase in speed occurred in the direction Kysucký Lieskovec-Kysucké Nové Mesto, where the flow rate increased. In the analysis of the average values of the traffic flow in both directions, there was a decrease in the flow rate and an increase in speed at monitored points 1 and 2. At point 1, the average flow rate decreased by almost 19%, at the second point by almost 26%. This fact and the interaction between the monitored variables caused an increase in speed of 12.38% at point 2 and 22.25% at point 1. Several studies [13,[49][50][51] also confirm an increase in speed on the roads in response to a reduction in traffic flow and point to an increase in average speeds between 2019 and 2020.
In addition to the decrease in the flow rate, air quality records in the Slovak Republic also showed a decrease in pollutants. In cities, changes in NO2 concentrations from the 10-year average range from −6% to −41%, with the average for all stations at −24%. Changes in NOx concentrations from the 10-year average are more pronounced, from +29% to −53%, with an average of −25%. The observed trends are less pronounced compared to 2019, where the average decrease from all stations is at −23% for NO2 and −21% for NOx. PM10 concentrations in the considered period of 2020 compared to the average of 2010-2019 decreased, on average, by −14% and compared to 2019 increased by +1% [52]. Results from the study [53], given at a resolution of 20 km, show decreases in NO2 concentrations ranging from −30% to −50% in all Western European countries. The reductions in NO2 concentrations in Madrid (Spain) under the COVID-19 lockdown during March 2020 were 50% and 62%, respectively [54]. Other authors assessed air quality changes during the lockdown in the city of Barcelona (Spain) and observed a 31% and 51% reduction of particulate matter (PM10) and nitrogen dioxide (NO2), respectively, during the lockdown compared to the month before the lockdown [55]. The NO2 levels of São Paulo decreased during the partial lockdown, by −45% compared to the same period in 2019 [26], and, e.g., in Almaty (Kazakhstan), CO and NO2 concentrations were reduced by 49% and 35%, respectively [56]. In one study from Morocco, it was found that during social distancing, NO2 levels fell by up to 96% and PM10 levels by 75% [57]. Sharma et al. [58] observed a 31% reduction of particulate matter (PM10) during the lockdown compared to the same time period of the past four years in India.
In the Slovak Republic, the decrease in NOx and NO2 concentrations in urban areas over the period considered could be due to two factors: favourable dispersion conditions, or a decrease in emissions due to the measures under consideration, in particular in transport and to a lesser extent in industry. For PM10, the situation is more complicated. These concentrations are more affected by cross-border transmission (significant cross-border transmission was observed at the beginning of the measures, with a likely source of dust in the Karakum Desert and around the Caspian Sea), and the most significant source of PM10 emissions is households (over 60%), followed by agriculture, transport (around 9%) and industry.
During the COVID-19 epidemic, various public health measures were adopted, such as encouraging social distancing, locking down cities and restricting travel. The emergency measures (lockdowns), related to the cessation of industrial and transportation activities, limited NO2 emissions from both industrial production and vehicle exhaust, which led to a decrease in NO2 concentrations during this period. Air quality also improves with the reduction of production activities and human mobility. At the same time, the lockdowns during COVID-19 were stricter, and the compliance of residents better, than usual. Reduced traffic in cities is also a major benefit in terms of reducing NO2 and other local pollutants. Therefore, the air quality index decreased more after private vehicles were restricted during this period. Comparing the air quality in 2019 and 2020, Cadotte [59] found that governments can improve air quality through policy change. Hepburn et al. consider the possible positive and negative effects of COVID-19 on climate change and tend to be very optimistic [60].
Conclusions
Currently, transport is a basic and very important part of society. In general, an increasing flow of vehicles raises the load on the road network to unfavourable values, and road use approaches maximum capacity. An important task is therefore to design and build roads that will suit the current as well as the future flow of vehicles.
The article aimed to point out how the measures against the spread of COVID-19 affected the quality of the traffic flow on a selected section of road. Specifically, it was the first-class road I/11, which connects the cities of Žilina and Čadca. It is one of the most congested road sections in Slovakia. The flow and speed data from the National Traffic Information System were used as a basis for analysis and evaluation. The flow and speed of the traffic flow were monitored at two points in both directions during two surveys (before and after the introduction of measures). In addition to the traffic surveys, a comparison was made between 2019 and 2020 for the same day, 2 April. In this case, there was a significant decrease in the number of vehicles. Based on our results, it can be argued that the lockdown and social distancing had a great impact on the decline in traffic not only in our country but also around the world, as described in other studies.
Based on the findings, it can be argued that the measures related to COVID-19 had a positive effect on improving the quality of traffic flow in the monitored road section. In addition to the reduction in traffic flow on roads and in cities around the world, there has been a reduction in air pollutants in the Slovak Republic and other countries. However, several scientific studies have shown that this state of emergency has affected not only road transport but all modes of transport worldwide. In this way, the negative effects of traffic and industry, such as particulate matter, emissions, noise, etc., have been reduced. Although the partial lockdown has contributed to a positive impact on air quality by reducing the number of vehicles, and our planet has become "greener and healthier" for a while, it is important to consider the negative impacts on social aspects, considering the deaths caused by COVID-19 and also the economic effects.
Funding: This research was funded by the Slovak scientific grant agency VEGA of the Ministry of Education, no. 1/0436/18, Externalities in road transport, an origin, causes and economic impacts of transport measures.
Conflicts of Interest:
The authors declare no conflict of interest.
Environment Niche Perspective on Brown Canyon Post-Mining Area in Semarang City
INTRODUCTION
Indonesia is an archipelago country with countless natural biological and non-biological resources. Minerals are part of its non-biological resources and can be found in massive quantities in Indonesia. Sediments or minerals are generally distributed unevenly within the earth's crust. These mineral resources include sand, stone, petroleum, coal, gold, silver, tin, and others, and they are commonly extracted and used to support development, especially building construction. According to Masrukhi (in Brata, 2008), the development ideology changes people's lives drastically. Apart from being urged to explore nature for the riches contained therein, development processes are also directed at improving people's lives. In this case, natural resources are the key and basic capital in national development. Therefore, their management must benefit the people as much as possible while paying attention to the preservation of surrounding life. One of the activities in using natural resources is mining minerals (Prodjosoemanto, 2006).
Various mining phenomena commonly draw the attention of Indonesian people. This is because the earth's belly contains a massive wealth of natural resources that often intersects with their lives (Brata, 2018: 9). However, mining has both negative and positive impacts. It is positive in that it plays a role in development by producing raw materials for industry, absorbing labor, providing foreign exchange for the country and the region, developing technology, training skilled workers, and introducing modern management patterns. It is negative in that it causes various harmful impacts, such as deforestation during exploration and exploitation activities, stripping and digging, noise from mining machines, and air pollution.
Two of the most common mining types in Indonesia are rock mining and sand mining. Both produce primary raw materials for civil construction, such as houses, buildings, roads, bridges, ports, and dams. Among other places, rock and sand mines are found in Tembalang, Semarang City. Geographically, Semarang City is divided into two parts, namely lowlands and hills. Entrepreneurs and investors use the empty hilly areas as mining sites. Rowosari Village, an area rich in class C minerals, is one of them.
Rock and sand mining in Rowosari Village has been running since the nineties. The mines operate almost daily, and dozens of trucks take turns transporting the extracted minerals, in the form of mountain sand, solid soil, and gravel. Each Colt Diesel truck can carry 2-3.5 tons of sand. These continuous activities have an impact on the surrounding settlements. Class C mining, currently better known as rock and sand mining, is regarded as a severe threat to the city of Semarang (Nutiara, 2016).
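To give a rough sense of the extraction scale these figures imply, consider the back-of-the-envelope sketch below; the truck count and trips per truck are assumed values for illustration only, while the 2-3.5 t payload range is taken from the text above.

# Rough daily extraction estimate from the reported 2-3.5 t payload
# of a Colt Diesel truck. Truck and trip counts are assumptions.
trucks_per_day = 24                  # "dozens of trucks" - assumed value
trips_per_truck = 3                  # assumed round trips per day
load_low_t, load_high_t = 2.0, 3.5   # reported payload range (tons)

daily_low = trucks_per_day * trips_per_truck * load_low_t
daily_high = trucks_per_day * trips_per_truck * load_high_t
print(f"Estimated extraction: {daily_low:.0f}-{daily_high:.0f} t/day")
# -> roughly 144-252 t/day under these assumptions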
Multiple government efforts have been made to close the area, but unfortunately, they only resulted in conflicts. Miners thought that the closure policy would eliminate their source of economic income. In the end, an alternative emerged: residents took the initiative to use the landscape changed by the mining, intending to turn it into a tourism destination offering beautiful panoramas. If appropriately managed, this might provide economic value while also supporting reclamation and environmental recovery efforts.
However, some dilemmas remain, for instance, the continuing illegal mining operations, even though they have been repeatedly warned against and banned. Competition for substitute jobs and social conditions are regarded as the reasons for this disobedience. In other words, people are forced to return to their old work as miners to stay alive, even though they sometimes violate social norms (Brata, 2020: 111).
These mining issues encourage residents to slowly establish alternative post-mining tourism management. The idea was proposed in response to the socio-ecological crisis around the abandoned area. This background motivates the researchers to analyze the socio-ecological crises in the mining area and the strategies for transforming it into a tourism destination.
RESEARCH METHOD
This study was qualitative. The qualitative method produces descriptive data and presents research results as written or spoken words from people and observed behavior. The sampling technique was snowball sampling. Sugiyono (2008: 219) explains that snowball sampling starts from a small number of data sources that grows larger and stops once the information is sufficient. The study was conducted in Rowosari Village, Tembalang Sub-district, Semarang City. It emphasized the socio-ecological crises caused by the sand and rock mines and the strategies for transforming the area into a tourism destination. There are primary and secondary data: primary data were obtained directly through interviews and observations, while secondary data were collected from documents or photos related to events in the area, namely pictures of mining activities and post-mining land use.
FINDINGS AND DISCUSSION
Brief Description of Research Location
Rowosari Village is geographically located in the southern part of Semarang City, at coordinates -7°06'71''S and 110°48'24''E. Topologically, Rowosari is situated in a lowland area, with the village office as its administrative center (approximately 40 meters above sea level). Several areas consist of hills, which hold potential for agriculture, plantations, and excavation of class C minerals. The mining and plantation activities in Rowosari can be seen from its concrete roads, traversed by mining vehicles and by transport carrying residents' plantation products. Trees, plantations, and fields dominate the access routes.
In this area, mining is vital: it is not only a tourism icon that attracts many people but also a source of income. Undeniably, excavation C in Rowosari opens up employment opportunities for hundreds or thousands of people. The data confirm that most Rowosari people work as laborers or private-sector workers. Some may not work, instead taking care of the household or farming, but this does not change the fact that mining plays a remarkable role as the most dominant occupation.
Rock and sand mining in Rowosari has been around since the 90s, meaning it has run for decades. The minerals sought in this area are classified as type C excavation. Agustin and Brata (2019) explain that class C minerals include asbestos, nickel, sand, and rocks, which can be quite easily found in several regions of Indonesia. Rock and sand are two class C minerals widely used in industry and construction. Generally, these materials are extracted and processed by two types of mining: large-scale and small-scale. Large-scale mining is typically managed by State-owned Enterprises (BUMN), while people's or community mining tends to be small-scale.
Mining areas are privately owned land managed by a Limited Company (PT) to extract the natural resources. As nothing can be put back, the mine owner buys or rents land from residents adjacent to the mining site to continue mineral exploration. Mining in Rowosari was started by Mr. Mudiono and his brother after they recognized the area's mining potential. Originally, it was done with manual tools and primarily served to level the land. As time passed, however, the mine site expanded closer to residential areas and into Demak Regency.
Mudiono's family has maximized the mining potential in Rowosari, as can be seen from the family's involvement in and influence on the massive mining there. Family-based mining companies are indeed growing massively in Rowosari; five such companies manage the natural resources (excavation C). There are various daily activities, with heavy equipment and production machines processing the mining products. The increasing need for development drives mining in this area to adopt modern equipment and raise production. The output positively supports the needs of regional development and the surrounding community's economy. Brata (2014) states that people carry out mining activities to fulfill their basic needs. Mining activities also support the surrounding community's economy and create jobs that increase welfare.
Mining Continuity
Rowosari mining was once seen as illegal and thus became controversial, since it gradually changed the landscape and affected the environmental sustainability of its surroundings. The government has closed the mining area at times (especially in 2015). The decision followed changes in the authority for mining business licensing in the context of transferring government affairs, providing a legal basis for the governor's authority to issue mining business permits.
There are differences of interest between the government and the mining parties: the government represents environmental interests, while the mining parties represent economic ones. Various efforts by policy-makers, ranging from supervision to mining-related policies, have been ignored by the mining parties and those involved. Mining continues even though a ban has been issued. This happens because the mining business is very promising and because some residents, already comfortable working there, have no alternative jobs or other skills.
Data from the Energy and Mineral Resources (ESDM) Office of Central Java in 2018 noted that a Mining Business Permit (IUP) in the sales-permit category had previously been submitted by Mr. Mudiono on behalf of PT. Berkah Rowosari Indah in 2017. However, no licenses are recorded for the other companies operating in the area. This is ironic, since Berkah Rowosari Indah is not the only company actively running in Rowosari. The released data recap also does not mention the validity period of the submitted mining permit.
Socio-ecological Condition of Rowosari Community
The emergence of rock and sand mining has begun to displace agricultural land in Rowosari. Employment data for Rowosari Village show that people increasingly work as laborers and private employees. This relates to the emergence of mining, which opens up many job opportunities matched to the education level of the Rowosari people. The mining sector has indeed had a positive impact on employment. Mine owners run mining businesses, and the building-contractor business allows people who own heavy vehicles such as trucks to take advantage of this opportunity. However, this is not always positive, given that work in the mining sector requires no specific skills, which makes the community dependent on the sector. This dependence results in more massive, even uncontrolled, activities that are hard to stop. Sometimes there are even "games" in buying and selling community land to expand the mining areas.
Meanwhile, the relationship between the mine owners and the community is relatively uneasy. The community has various concerns, such as the environmental and social impacts. People understand that the problems they currently face are driven by mine owners who continue to exploit and expand their land. However, these concerns rarely surface because of fear of, and reluctance toward, the mine owners. People choose to make peace because they feel they cannot fight the power of the mine owners and because the mines have fulfilled their needs. They eventually keep silent, as long as the mining activities benefit their economic situation.
Ecological Issues around Mining Areas
The need for excavation C products, such as soil, rock, and sand, for development results in a significant expansion of the mining area. Moreover, many national strategic projects in Central Java are in progress, including the construction of the sea-embankment toll road and the reclamation of the northern region of Semarang. Efforts designed to save the ecological situation of other areas seem to sacrifice the spaces that produce construction materials. Thus, such programs merely transfer ecological damage to new places where natural resources are exploited.
The bustle of mining vehicles is simply typical for Rowosari people. Besides providing public access, the concrete road also acts as a distribution route for trucks carrying soil, stone, and sand. Although people enjoy the results of the road construction, they are also disturbed by the heavy traffic of mining vehicles. The large trucks often leave massive amounts of dust along the road and on the terraces of residents' houses. The evening air is sweltering and arid because mining activities have purged the trees away over the years, and the dry air exacerbates pollution in Rowosari. Affected residents cannot do anything but accept all this, because the road was built with donations from the mining companies.
Excavation areas leave not only high cliffs but also fairly deep basins, especially where the soil surface is dredged down to the base layer of rock and sand. Sukamti and Brata (2020) state that mining activities negatively impact the environment. Mining in the fields, for instance, leaves pits that cannot be covered with soil. After prolonged rain, the excavated areas become ponds because they cannot absorb rainwater infiltration, which also decreases the water flow. Excavations around the hills can also cause landslides.
Land Condition of Rowosari Rock and Sand Minings
The community's dependence on work in the mining sector (especially excavation C) has resulted in broader mining expansion. Decades of mining activities have changed Rowosari's topography. Many lands that were originally hilly and agricultural are now low and relatively barren. This difference can be seen from the high cliffs that separate the well-maintained areas from the damaged ones.
Mining land has irregular contours; some parts are holes and cliffs with landslide potential. Among them, two tall pillars have become icons of Rowosari today. The two pillars are referred to as watu lumbung and are categorized as Watu Lanang (Male Stone) and Watu Wedhok (Women's Stone). Previously, the two stones were a medium for public prayer: the Rowosari community, once dominated by farmers, often performed rituals there to ask for rain, as the area is usually dry. However, mining activities have turned the sacred site into a "mere" tourist icon for Rowosari. The post-mining lands and the barren, dusty surrounding areas make agricultural land less productive. Some residents who own land adjacent to the mine prefer to sell it to avoid losses. Several development activities show that investors have also seized the opportunity to use the Rowosari area for housing clusters; at least three clusters have been newly built there.
Tourism Potential of Mining Area
Behind the massive exploitation of the rock and sand resources in Rowosari, the area has its own attraction for residents and visitors. The expansive mining area presents a beautiful panorama at sunrise and sunset, when the exotic sky combines with the high cliffs here and there. In mining tourism, the main attractions are generally divided into four categories: natural; natural but adapted for excursions; man-made; and man-made built specifically for excursions and events. Armis (2019) explains that a destination's distinctive experience and satisfaction can contribute to tourists' main motivation and can be an essential attribute for a location to excel in its competition with others.
Mining activities leave not only rows of cliffs and rising pillars but also deep mining holes. Holes that open underground water flows and lack good water absorption form puddles resembling lakes. These lakes often draw the attention of visitors and the public, as they offer quite a beautiful view; they also serve as fishing spots, and according to several residents, the mine owner once stocked them with fish. The typology of the mining area, with its expanses of rocky soil, puddles, steep mounds, and ex-mining roads, has become a main attraction for extreme-sports lovers, such as downhill cyclists and motocross riders, who often use the ex-mining areas as playgrounds. The vast expanse of the mine seems to be a free arena for them, even though the activity is dangerous: the area is not intended for it, and there are no safety standards. According to residents, the mining area has also been visited by various artists and has served as a shooting location for tourism and adventure TV programs.
Another option for post-mining land is transforming it into an educational tourism site. Such post-mining practices are established in Europe. Lamparska (2019) explains that a mining area can be used as a training ground for polytechnic students in mining and environmental protection departments. Site selection can be based on age, suitability for tourism and education, origin, authenticity, and uniqueness.
Access for Post-Mining Tourism
Access to post-mining land use in the Rowosari area is entirely in the hands of the company owners. The management of mining activities begins with buying land from the local community, which transfers full rights and power to the mining company. The district has no access to the post-mining land because of this private ownership; land use cannot occur without the owners' permission.
Post-mining activities in Rowosari Village focus not only on improving the environment but also on developing a tourism attraction. When mining was suspended for lack of a permit from the local government, the company took the opportunity to open access to post-mining land use to help the people's economy. The locals welcomed this access and used it to develop the area's tourism potential; the natural panorama around the mine is a unique attraction that can be highlighted. One of the parties interested in developing tourism here is Pertamina, which is noted to have carried out a CSR program to build a tourism area on one of the ex-mining plots, in cooperation with the mine owner.
Development and Strategy of Post-Mining Tourism in Rowosari
The mining landscape resembles the Grand Canyon in Arizona, United States, which consists of a range of canyons formed by erosion around the Colorado River. This resemblance attracts tourists to visit Rowosari, and it even inspired the naming: the ex-mining site in Rowosari is popularly called "Brown Canyon". The most similar features are the two high pillars in the middle of a barren mining area surrounded by cliffs.
Brown Canyon went viral and became known to many people; it is considered to have an extraordinary, wonderful landscape. According to Pitana (2005), there are always push and pull factors for someone to travel, and the driving factors are generally socio-psychological or person-specific motivations. This uniqueness is enough to make Brown Canyon widely known to the public, aided by online media that spread information rapidly.
Many stakeholders are involved in tourism and its needs, one of which is the destination (DTW) to be visited. Most tourists are people tired of life in the middle of the city; they get tired of hearing traffic noise, for example, so they choose to travel to quieter, more distinctive villages. Indonesia has many unique villages, all of which can be developed into tourism villages through collaboration among the community, managers, and government; such development requires clear guidelines to succeed (Antara, 2015). In this study, Brown Canyon has not been properly managed, because visitors can only fully enjoy it after the workers have finished their mining activities. In addition, its fame, which keeps increasing the number of visitors, is not matched by strict and transparent regulations, for example regarding parking; unclear parking rules have even caused disturbances for some residents around the area.
Apart from the problems caused by the tourist attraction in Rowosari, the uncertain post-mining direction presents the community with choices: surrender the former mines as they are, or develop them into tourism destinations. This situation can be viewed from the perspective of Rational Choice Theory. Two crucial elements in rational choice are actors (individuals taking rational action) and resources (the various things actors control to fulfill their needs). A person acts toward a particular goal and sacrifices the resources he has to accomplish it; these resources are material (money, land, physical equipment) and non-material (trust, social relations, labor/business). Ritzer (2016) explains that people with adequate resources, such as the mine owners, may achieve their goals quickly, whereas people with fewer resources are likely to have difficulty realizing their goals, so their rational actions are easily affected. Even when a goal is hard to achieve, however, there is always an opportunity: only people willing to think hard, work diligently, and stay sensitive to the environment can find or seize an environment niche. The discovery of an environmental gap, as something new that gives people the opportunity to work and earn economically, will be followed massively by others (Brata, 2020: 28). In this case, the community around the mining area continues to pursue post-mining business in the form of tourism. Some people see the environmental gap as good tourism potential that can be developed and can sustain them after mining; this is undoubtedly profitable from an economic point of view. A community leader of Rowosari has designed a long-term plan for post-mining activities there. The strategy begins with improving the road infrastructure that supports the Brown Canyon area; once completed, the site is targeted to become a culinary tourism center that will support the community's MSME activities and improve the quality of the local economy.
Although many obstacles and challenges arise in practice, the Rowosari community believes that what they are doing now is learning to face life after the mining activities are over. Empowerment efforts and ideas for post-mining activities keep coming. Those who depend on mining are now slowly considering post-mining actions by looking at the environment niches around them. In this case, the community and related parties fully understand their goals and the resources they must sacrifice in determining their rational actions.
CONCLUSION
Rock and sand mining in Rowosari Village has both positive and negative impacts. It is positive because it increases job opportunities and sources of economic income for the community. It is negative because it damages social and ecological life: it changes landscapes, increases the risk of landslides, and causes air and noise pollution. These socio-ecological crises led to the closure of access to mining in the Rowosari area and made several people lose their jobs. These problems present the community with two main rational choices: continue mining, or open alternative post-mining tourism activities as a new source of economic income.
In this context, the mining companies, the government, and the community are the three actors who exchange roles and resources. The post-mining area once served as a tourist attraction while the mining company had not yet received a business permit, but the owner stopped the activity because of illegal levies. The situation is exacerbated by the issuance of mining business permits, which leaves the direction of the post-mining area as a tourism destination uncertain. Some post-mining businesses carried out by the owners also do not lead to tourism activities, and several mining activities continue even though the related mining business licenses have expired. Currently, the community can only pursue strategies outside the mining area, preparing supporting infrastructure and activities for when the mining comes to an end.
SODIUM LAURYL SULFATE EFFECTS ON ELECTROCHEMICAL BEHAVIOR OF POSITIVE ACTIVE MATERIAL AND COMMERCIAL POSITIVE PLATES IN LEAD-ACID BATTERY
Sodium lauryl sulfate (SLS) is an anionic surfactant used in many applications, such as cleaning and hygiene products, electroplating, etc. For the first time, the effects of SLS as an electrolyte additive on the electrochemical behavior of positive active material and commercial positive plates have been studied by cyclic voltammetry (CV) and electrochemical impedance measurements. The electrode surface morphology after 20 cycles of CV was studied using scanning electron microscopy (SEM). The results show that the SLS additive significantly improves the conversion reactions of the positive active material and therefore enhances charge/discharge capacity. With increasing SLS concentration, the crystalline structure of the positive active material changed. The effects of SLS on the kinetic parameters of the positive electrode reactions are also discussed in this paper. The results showed that SLS is promising for use as an electrolyte additive for lead-acid batteries.
INTRODUCTION
The lead-acid battery is a long-standing traditional power source with many outstanding features, such as high cell voltage, stable operation, small internal resistance, simple structure, and low cost, and the annual turnover of the lead-acid battery industry reaches tens of billions of US dollars on a worldwide basis [1]. Nevertheless, the positive active material of the lead-acid battery has a comparatively low coefficient of utilization, only about 45 to 50 % for discharge at small current density. Therefore, raising the coefficient of utilization of the positive active material has become one of the problems of continuous concern among scientists and engineers working in the field of chemical power sources [2].
Many approaches have been tried to overcome this disadvantage, such as adding organic materials to the positive paste [2 - 4] and using additives in the electrolyte solution. The latter approach is very effective and does not change the production process. Voss [5] and Meissner [6] have published comprehensive assessments of the effects of H3PO4 and phosphate salts on the activity of lead-acid batteries. Accordingly, the addition of H3PO4 to the electrolyte extends the cycling life and decreases the irreversible sulfation of the positive active material. The addition of H3BO3 at concentrations up to 0.4 % inhibits the formation of hard PbSO4 and reduces the self-discharge of the PbO2 electrode. Naima Boudieb et al. [7 - 9] have studied the effects of two phosphate surfactants on the electrochemical behavior of lead-acid batteries; those additives showed some beneficial effects on battery performance.
Sodium lauryl sulfate (SLS) is an anionic surfactant used in many applications such as cleaning and hygiene products, electroplating, etc. In this paper, for the first time, the effects of SLS as an electrolyte additive on the electrochemical behavior of positive active material and commercial positive plates are studied.
Preparation of working electrode
The working electrode is a flat plate made of pure lead metal or a commercial positive plate taken from a 5 Ah-type battery. Except for the exposed working surface area, the sides and other parts of the electrodes were covered with epoxy to avoid any contact with the electrolyte solution. The electrodes were then polished with fine abrasive paper.
Materials and electrolytes
The 98 % concentrated sulfuric acid and the sodium lauryl sulfate were pure chemicals from China. The H2SO4 electrolyte (d = 1.27 g.cm-3) was prepared from concentrated H2SO4 and double-distilled water. Electrolyte solutions containing 10, 50, 100, 150, 200, 250, and 300 mg.L-1 of the SLS additive were prepared by adding the appropriate calculated amount of SLS to the electrolyte.
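The additive solutions above follow simple mass-per-volume arithmetic; a minimal sketch, with the 1 L batch volume as an assumed example:

# Mass of SLS (mg) needed to reach a target concentration (mg/L)
# in a given volume (L) of electrolyte.
def sls_mass_mg(target_mg_per_l, volume_l):
    return target_mg_per_l * volume_l

for c in (10, 50, 100, 150, 200, 250, 300):
    print(f"{c:3d} mg/L in 1.0 L electrolyte -> add {sls_mass_mg(c, 1.0):5.1f} mg SLS")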
Electrochemical measurements
Electrochemical measurements were carried out with a potentiostat/galvanostat (AUTOLAB PGSTAT 302N, Netherlands) using a three-electrode system. The working electrodes were pure lead and a commercial positive electrode. The counter electrode and reference electrode were a pure lead sheet and an Ag/AgCl electrode, respectively. Before every measurement, the working electrode was mechanically polished with emery paper and cleaned with acetone and double-distilled water.
Cyclic voltammograms were obtained at a 50 mV.s-1 scan rate, between 1.14 and 2.5 V (Ag/AgCl) for the pure lead electrode and between 1.14 and 2.1 V (Ag/AgCl) for the commercial positive electrode. The working electrodes had surface areas of 0.57 cm2 (pure lead) and 0.13 cm2 (commercial positive electrode).
Electrochemical impedance spectroscopy measurements were carried out after 20 CV cycles of the lead electrode in the solution, to reach a steady-state condition. The frequency range was set from 10 kHz to 10 mHz with a potential amplitude of 5 mV at the open-circuit potential.
SEM imaging
Micrographs of the pure lead electrode were obtained with a JSM 6610-LA scanning electron microscope (Jeol, Japan). To determine the microstructure of the PbSO4 and PbO2 formed on the electrode surface, the electrodes were polarized by 20 cycles of CV before SEM imaging.
RESULTS AND DISCUSSION
3.1. The conversion of species in the positive active material

Figure 1 shows cyclic voltammograms recorded at a 50 mV.s-1 scan rate on a pure lead electrode in H2SO4 (d = 1.27 g.cm-3) with and without various concentrations of the SLS additive, over the potential region from 1.14 to 2.5 V (Ag/AgCl). Two peaks are clearly visible in the CVs. The anodic peak relates to the oxidation of lead sulfate to lead dioxide, and the other peak relates to the reduction of lead dioxide back to lead sulfate, corresponding to the reaction equation:

PbSO4 + 2H2O <=> PbO2 + SO4^2- + 4H+ + 2e-    (1)

The data obtained from Fig. 1 are gathered in Table 1. Epa and Epc (mV) are the anodic and cathodic peak potentials, respectively. dEp (mV) is the difference between the anodic and cathodic peak potentials, which characterizes the degree of reversibility of the electrode reaction.
The conversion coefficients are used to evaluate the degree of conversion of lead sulfate to lead dioxide and vice versa, respectively. They are defined as quotients of the charge amounts (the areas under the peaks) used for lead sulfate oxidation (Q+) and lead dioxide reduction (Q-) in the 20th cycle.
From Table 1, the dEp value of the pure lead electrode in the absence of the SLS additive is smaller than in its presence. This indicates that the addition of the SLS additive makes the conversion reactions on the electrode more irreversible. As the SLS concentration increases from 10 to 200 mg.L-1, the reversibility of the electrode reactions decreases; with a further increase beyond 200 mg.L-1, the reversibility increases again. As Table 1 also indicates, the values of the conversion coefficients in the presence of SLS in the electrolyte solution are greater than one. This shows that the SLS additive significantly improves the conversion reactions of the positive active material, especially in the SLS concentration range of 150 to 200 mg.L-1.
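A sketch of how dEp and the peak charges Q+ and Q- could be extracted from digitized CV data is given below; the synthetic arrays are placeholders standing in for the measured voltammogram, and integrating I over time (I dE divided by the scan rate) is one standard way to estimate peak charge:

import numpy as np

def cv_metrics(E, I, scan_rate_v_per_s):
    # E in volts, I in amps, sampled over one full CV cycle.
    E_pa = E[np.argmax(I)]                            # anodic peak potential
    E_pc = E[np.argmin(I)]                            # cathodic peak potential
    dt = np.abs(np.gradient(E)) / scan_rate_v_per_s   # time per sample
    q = I * dt                                        # charge per sample
    Q_plus = q[q > 0].sum()                           # anodic (oxidation) charge
    Q_minus = -q[q < 0].sum()                         # cathodic (reduction) charge
    return 1000 * abs(E_pa - E_pc), Q_plus, Q_minus   # dEp in mV

# Synthetic placeholder data in lieu of the measured voltammogram:
E = np.concatenate([np.linspace(1.14, 2.5, 200), np.linspace(2.5, 1.14, 200)])
I = 1e-3 * np.exp(-((E - 2.1) / 0.08) ** 2)               # fake anodic peak
I[200:] = -1e-3 * np.exp(-((E[200:] - 1.9) / 0.08) ** 2)  # fake cathodic peak
print(cv_metrics(E, I, 0.05))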
A similar behavior is also recognized in the cyclic voltammetry measurements on the commercial positive electrode in H2SO4 solution with and without the SLS additive (Fig. 2).
Reduction peaks are clearly observed, while oxidation peaks are overlapped by oxygen evolution.
It is well known that a cycle of a cyclic voltammetry measurement can be considered a charge and discharge cycle for the surface material layer of the electrode. This reveals that the SLS additive, by significantly improving the conversion of the positive active material, can increase the capacity of the positive electrode. Thus, SLS is suitable for use as an electrolyte additive in lead-acid batteries. Figure 3 shows electrochemical impedance spectra, as Nyquist plots, of the pure lead electrode in H2SO4 solution without and with various concentrations of SLS, together with the equivalent circuit used to fit the experimental data.
The kinetic parameters of the conversion reaction in the positive active material were obtained by fitting the measured impedance data to the equivalent circuit. In this circuit, Rs is the solution resistance, CPE represents the constant phase element, which substitutes for the double-layer capacitance, Rct is the charge transfer resistance, and W stands for the diffusion impedance in the double layer. The results of the fitting with the equivalent circuit are listed in Table 4.
From Table 4, it can be seen that the value of the charge transfer resistance in the presence of the SLS additive is smaller than in its absence. However, as the SLS concentration increases from 10 to 250 mg.L-1, the value of Rct increases. The beneficial effect of the SLS additive on the charge transfer process appears to result from the crystallographic orientation induced by the additive.
It is known that the Warburg factor σ characterizes the hindrance to the diffusion of reactants and reaction products. This is important because delayed diffusion increases the concentration polarization and makes the reaction process more difficult. Notice that the Warburg factor of the pure lead electrode in the electrolyte without the SLS additive is smaller than with it. This can be explained by the fact that, when added to the electrolyte solution, the SLS additive adsorbs onto the electrode surface and contributes to the formation of a semipermeable membrane. The presence of SLS in the formed semipermeable membrane reduces the porosity of the membrane. As a consequence, the transport of species through the membrane and the double layer is impeded.
From Table 4, the values of Rs and CPE decrease in the presence of the SLS additive. This indicates that the addition of SLS to the H2SO4 solution decreases the resistance of the electrolyte and changes the structure of the double layer.
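For concreteness, a hedged sketch of the equivalent-circuit model described here (Rs in series with a CPE in parallel with Rct plus a semi-infinite Warburg element); the parameter values below are placeholders, not the fitted values of Table 4:

import numpy as np

def z_model(freq_hz, Rs, Q, alpha, Rct, sigma):
    # Complex impedance of Rs + CPE || (Rct + W).
    # CPE: Z = 1 / (Q (jw)^alpha); Warburg: Z = sigma * w^-0.5 * (1 - j).
    w = 2 * np.pi * freq_hz
    z_cpe = 1.0 / (Q * (1j * w) ** alpha)
    z_branch = Rct + sigma * w ** -0.5 * (1 - 1j)
    return Rs + (z_cpe * z_branch) / (z_cpe + z_branch)

# Placeholder parameters over the paper's 10 kHz - 10 mHz range:
f = np.logspace(4, -2, 60)
Z = z_model(f, Rs=0.5, Q=1e-3, alpha=0.9, Rct=10.0, sigma=5.0)
# Nyquist coordinates would be (Z.real, -Z.imag), as in Fig. 3.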
The reduction of the charge transfer resistance and the increase of the diffusion impedance explain the effect of the SLS additive on the conversion reactions, as discussed above. Figure 4 shows SEM images of the pure lead electrode surface after 20 CV cycles in H2SO4 solution with and without various concentrations of SLS. During the CVs, the conversion between PbO2 and PbSO4 occurs on the surface of the positive electrode according to reaction (1). As Fig. 4 indicates, the PbSO4 and PbO2 crystals formed on the pure lead electrode surface in the presence of the SLS additive are smaller in size and more spongy. It seems that the SLS additive adsorbed on the electrode surface and thereby changed the structure of the crystals formed there.
CONCLUSIONS
The effects of SLS on the electrochemical behavior of the positive active material and the commercial positive electrode were investigated. The following conclusions were drawn: i) The presence of SLS in the electrolyte solution significantly improves the conversion reactions of both the positive active material and the commercial positive electrode and thus increases their discharge/charge capacity. However, the SLS additive makes the electrode reactions less reversible.
ii) The effects of the SLS additive on the conversion reactions of the positive active material result from the reduction of the charge transfer resistance, the change of the double layer, and the increase of the diffusion impedance.
iii) The addition of SLS to the electrolyte solution changed the surface morphology of the positive electrode; the PbSO4 and PbO2 crystals formed are smaller in size and more spongy.
iv) The electrochemical behavior of the positive active material and the commercial positive electrode depends on the SLS concentration in the electrolyte. The SLS additive is promising for use as a suitable electrolyte additive for lead-acid batteries.
Spectrophotometric quantification of dolutegravir based on redox reaction with Fe3+/1,10-phenanthroline
A simple and sensitive spectrophotometric method was developed for the quantitative measurement of dolutegravir in pure form and pharmaceutical formulation. The present method was based on a redox reaction between dolutegravir and ferric chloride, which upon complexation with 1,10-phenanthroline formed an orange-colored complex with an absorption maximum at 520.0 nm. The developed method obeyed linearity in the concentration range of 40.00-140.00 μg/mL. The method was also validated as per International Council for Harmonization guidelines, and the results were within acceptance values. The validated method was employed for the determination of dolutegravir in a pharmaceutical dosage form, and the percentage assay value was found to be 102.5, which is in agreement with its label claim. The developed redox-based colorimetric method could be used in the routine quality control analysis of dolutegravir present in various pharmaceutical dosage forms.
Literature review on dolutegravir revealed several analytical methods for its quantification either alone or in combination with other drugs. Ultraviolet-visible spectrophotometric technique for the analysis of dolutegravir sodium in tablet formulation in methanol [3], ultraviolet spectroscopic method using hydrotropic solubilizing agents [4], high performance liquid chromatographic method for its stereoisomers [5,6], high performance liquid chromatographic and high performance thin-layer chromatographic methods for its salt analysis [7,8], ultra-performance liquid chromatographic method [9], and bioanalytical methods using high performance liquid chromatography [10] and high performance liquid chromatography-mass spectroscopy [11] were reported in literature. The methods reported for dolutegravir in combination with other antiviral drugs involved reverse phase-high performance liquid chromatographic [12][13][14][15][16][17][18][19], ultra-performance liquid chromatographic [20], and normal phase high performance liquid chromatographic methods using rat plasma [21].
Iron(III) salts play an important role in the spectrophotometric quantification of many pharmaceuticals. The ferric form of iron (Fe3+) acts as an oxidizing agent, oxidizing the analyte under study while itself being reduced to the ferrous (Fe2+) form. The latter ions complex with the reagent and produce a chromophoric complex with λmax in the visible region. 1,10-Phenanthroline is a heterocyclic compound known as a redox indicator because of its ability to form complexes with various metal ions. Determinations based on Fe(II)- and/or ruthenium(II)-1,10-phenanthroline complexes are well documented in the literature [22][23][24][25].
Although many instrumental techniques are available to date, spectrophotometry still plays a significant role in micro/nanogram-level analysis of pharmaceuticals. It is simple, requires little time and labor, and is easy to perform using an ultraviolet-visible spectrophotometer. Chromatographic methods, such as high-performance liquid chromatography, high-performance thin-layer chromatography, ultra-performance liquid chromatography, and high-performance liquid chromatography-mass spectroscopy, require elaborate instrument set-ups, skilled operators, expensive solvents, and tedious extraction procedures, unlike colorimetric methods [26,27]. To the best of our knowledge, no simple colorimetric method using Fe3+/1,10-phenanthroline has been developed for dolutegravir to date. In view of the above facts, a simple, sensitive, and extraction-free colorimetric method was attempted for dolutegravir using Fe3+/1,10-phenanthroline as the chromogenic reagent, and it was successfully adopted for the determination of dolutegravir in a pharmaceutical formulation.
Instrument
The method was established using analytical-grade chemicals and reagents. A dolutegravir standard gift sample was provided by Hetero Drugs Pvt. Ltd., and the marketed solid dosage form (Tivicay) was procured from a local pharmacy. The absorbance of the analytical solutions was measured using a double-beam Shimadzu UV-1800 ultraviolet-visible spectrophotometer, with a spectral bandwidth of 0.1 nm, a wavelength accuracy of ± 0.1 nm, and a pair of 1 cm path-length matched quartz cells.
Ferric chloride reagent (0.3% w/v)
Ferric chloride (0.3 g) was weighed accurately and dissolved in sufficient distilled water (in a volumetric flask) to produce 100 mL.
Dolutegravir standard stock solution
The stock solution of dolutegravir (1000.00 μg/mL) was made by dissolving 10 mg in 10 mL of acetonitrile and water (1:1). The solution was further diluted with distilled water to obtain the required concentrations of dolutegravir for the λmax determination and for further analysis.
Analysis of dolutegravir using Fe3+/1,10-phenanthroline
Aliquots of 0.4, 0.6, 0.8, 1.0, 1.2, and 1.4 mL of dolutegravir standard solution (1000.00 μg/mL) were progressively transferred into 10 mL volumetric flasks. To these, ferric chloride solution (2 mL, 0.3% w/v) and 1,10-phenanthroline solution (1 mL, 0.5% w/v) were added; the flasks were shaken vigorously and set aside for 15 min to ensure color development through the redox-coupling reaction. The volume of each flask was made up to the mark with double-distilled water to give final concentrations of 40.00-140.00 μg/mL of dolutegravir.
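The final concentrations quoted above follow from simple dilution arithmetic (C1·V1 = C2·V2); a minimal sketch:

# Final concentration after diluting an aliquot of the 1000.00 ug/mL
# stock to a 10 mL flask, reproducing the 40.00-140.00 ug/mL series.
stock_ug_per_ml = 1000.0
final_volume_ml = 10.0
for aliquot_ml in (0.4, 0.6, 0.8, 1.0, 1.2, 1.4):
    c_final = stock_ug_per_ml * aliquot_ml / final_volume_ml
    print(f"{aliquot_ml:.1f} mL aliquot -> {c_final:6.2f} ug/mL")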
Blank solutions were made by adopting the identical methodology mentioned above, omitting the analyte. The absorbance of the colored compound was then recorded at 520.0 nm against the corresponding blank. All measurements were repeated six times for every concentration.
Method optimization
The analytical method was optimized with respect to the reagent concentrations (ferric chloride and 1,10-phenanthroline), the color-development time, and the mole ratio of the reaction; the details are provided in the following sections.
Method validation
The method was validated for linearity, accuracy, precision, sensitivity, and robustness according to International Council for Harmonization guidelines [28].
Linearity
The linearity was examined in pure solutions (n = 6) over the concentration span of 40.00-140.00 μg/mL for dolutegravir. A calibration curve was plotted, and from it the slope, intercept, and correlation coefficient were computed.
Accuracy
The accuracy of the method was determined by recording the recoveries of the analyte using the method of standard additions. Standard solutions at distinct levels (80, 100, and 120%) of dolutegravir were spiked into pre-quantified samples and analyzed by the proposed method. Each sample was prepared in triplicate at each level. The mean percentage recoveries and percentage relative standard deviations were computed statistically.
Precision
Precision is the repeatability of results between samples analyzed on the same day (intra-day) and samples run on three completely distinct days (inter-day), examined here to assess the intra- and inter-day variation in the method. Solutions containing 40.00, 80.00, and 140.00 μg/mL of dolutegravir were subjected to the present spectrophotometric method. The intra- and inter-day variations in the absorbance of the analyte solutions were expressed as percentage relative standard deviation.
Sensitivity and robustness
Sensitivity of the method was denoted by the limit of detection and limit of quantification values, determined from the standard calibration curve. They were calculated using the formulae 3.3 σ/s and 10 σ/s, respectively, where "σ" is the standard deviation of the y-intercept of the regression equation and "s" is the slope of the calibration curve. Sandell's sensitivity was calculated from the ratio of the molecular weight to the molar absorptivity of dolutegravir.
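A minimal sketch of this arithmetic follows; the σ value below is back-calculated so that the output matches the figures reported later in this paper, and is not itself a reported quantity:

# LOD = 3.3*sigma/s and LOQ = 10*sigma/s, with sigma the standard
# deviation of the calibration y-intercept and s the slope.
def lod_loq(sigma, slope):
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Slope 0.0055 AU per ug/mL as reported; sigma inferred for illustration.
lod, loq = lod_loq(sigma=0.00253, slope=0.0055)
print(f"LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")  # ~1.52 and ~4.60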
Assay of dolutegravir
Twenty tablets of dolutegravir (Tivicay) were weighed accurately and ground to a fine powder. A quantity of powder equivalent to 200 mg of dolutegravir was dissolved in acetonitrile and water (1:1), and the contents were shaken thoroughly for 5 min. The volume was then made up to 10 mL with acetonitrile and water and filtered through Whatman filter paper (No. 42). To 1 mL of the filtrate, 2 mL of ferric chloride solution (0.3% w/v) and 1 mL of 1,10-phenanthroline reagent (0.5% w/v) were added and shaken vigorously. The resulting solution was diluted to 10 mL with double-distilled water, and the colored chromogen was measured spectrophotometrically at 520.0 nm against the corresponding blank.
Results
The reaction of dolutegravir with ferric chloride in the presence of 1,10-phenanthroline resulted in the formation of an orange-colored product, which showed λmax at 520.0 nm (Fig. 2). The probable reaction mechanism is shown in Fig. 3. The method was optimized by varying one factor at a time over different concentrations of ferric chloride and 1,10-phenanthroline, and the effect of each reagent's concentration on the formation of the colored complex was studied. The optimum reaction time was determined by monitoring the color development at different time intervals (5, 10, 15, 20, 25, and 30 min). Maximum absorbance values were obtained at 15 min for dolutegravir (Fig. 6). Thereafter, the developed color was stable and the absorbance remained constant for up to 5 h under the optimized conditions. The stoichiometry of the reaction was studied by the continuous variation method. Equimolar solutions of dolutegravir (9.54 × 10-5 M) and 1,10-phenanthroline were prepared, keeping the other reaction conditions the same as in the analytical method discussed earlier. The drug and reagent (1,10-phenanthroline) were mixed in various proportions to produce different mole fraction values (0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1.0). A mole fraction of 0.5 gave the highest absorbance value, indicating a 1:1 drug-to-reagent stoichiometry (Fig. 7).
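A sketch of how the continuous-variation data reduce to a stoichiometric ratio; the absorbance values below are placeholders, not the measured data:

# Job's method: the mole fraction at maximum absorbance, x_max, gives
# the stoichiometry n(reagent) : n(drug) = x_max : (1 - x_max).
mole_fractions = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
absorbances = [0.00, 0.11, 0.21, 0.30, 0.38, 0.42, 0.37, 0.29, 0.20, 0.10, 0.00]  # placeholder
x_max = mole_fractions[absorbances.index(max(absorbances))]
print(f"ratio = {x_max / (1 - x_max):.1f} : 1")  # 1.0 : 1 for x_max = 0.5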
The method was further validated as per International Council for Harmonization guidelines to prove its usefulness for the quality control analysis of dolutegravir. The developed method obeyed Beer's law within the concentration range of 40.00-140.00 μg/mL. Linear regression analysis of the data gave the equation y = 0.0055x - 0.004 with a correlation coefficient of 0.999, confirming the relationship between drug concentration and absorbance (r2 = 0.999, Fig. 8).
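In routine use, the reported regression line can be inverted to back-calculate an unknown's concentration from its absorbance; a minimal sketch:

# Invert the reported calibration y = 0.0055*x - 0.004
# (y = absorbance at 520.0 nm, x = concentration in ug/mL).
def concentration_ug_per_ml(absorbance, slope=0.0055, intercept=-0.004):
    return (absorbance - intercept) / slope

print(concentration_ug_per_ml(0.546))  # -> 100.0 ug/mL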
The recovery of the analyte in the standard addition method was used to establish the accuracy of the method. The percentage recoveries and percentage relative standard deviations were computed and reported (Table 1). The percentage recoveries varied between 99.7 and 101.8% for dolutegravir, and the percentage relative standard deviation values were less than 2.0.
The repeatability and intermediate precision of the method were evaluated using three different levels of dolutegravir (40.00, 80.00, and 140.00 μg/mL). The results are summarized in Table 2, and the percentage relative standard deviation values were found to be satisfactory (< 2.0).
The sensitivity of the method was determined with reference to the limit of detection and limit of quantification, which were established as 1.52 and 4.60 μg/mL, respectively; Sandell's sensitivity was found to be 0.182 μg/cm2 for dolutegravir (Table 3).
The robustness of the proposed method was established by evaluating the influence of small variations in the concentrations of the ferric chloride and 1,10-phenanthroline solutions (0.3 ± 0.1 and 0.5 ± 0.1% w/v, respectively). The results indicated that these changes did not greatly affect the absorbance of the formed colored complex.
The contemplated method was adopted to estimate the dolutegravir content in the marketed formulation (Tivicay). The % assay value for dolutegravir was found to be 102.5 (Table 4), and the percentage relative standard deviation value was 0.5 (< 2.0).
Discussion
The present colorimetric technique was rooted in the redox reaction between dolutegravir and ferric chloride and the subsequent complexation with 1,10-phenanthroline [29]. The method was optimized one factor at a time for the levels of ferric chloride and 1,10-phenanthroline, and 0.3% w/v and 0.5% w/v, respectively, were found optimal for the analysis. The analytical method was further corroborated for linearity, accuracy, precision, sensitivity, and robustness in line with International Council for Harmonization guidelines. A good linear response between the dolutegravir concentration and its absorbance was noticed over the concentration range of 40.00-140.00 μg/mL; the correlation coefficient approaching unity indicated the same. Percentage relative standard deviation values of less than 2.0 in the recovery and precision studies indicated the accuracy and reproducibility of the method. The developed method was found to be sensitive based on its limit of detection (1.52 μg/mL) and limit of quantification (4.60 μg/mL) values. The validated methodology was employed for the quantification of dolutegravir in the marketed formulation; the % assay and percentage relative standard deviation values were within acceptable limits. Thus, the quantification of dolutegravir in the marketed formulation proved successful with the proposed analytical method.
Conclusion
The proposed redox-based colorimetric method for the determination of dolutegravir using Fe3+/1,10-phenanthroline as the chromogenic reagent was found to be simple and rapid and does not involve any extraction step. The method was validated for linearity, accuracy, precision, sensitivity, and robustness in line with International Council for Harmonization regulations. The validated method was adopted for the assay of dolutegravir in formulation, and the results accorded with the label claim. The results additionally suggested that formulation excipients do not interfere with the estimation. With these advantages, the proposed methodology can be adopted in the routine quality control testing of dolutegravir in its pharmaceutical dosage forms.
Proof Reduction of Fair Stuttering Refinement of Asynchronous Systems and Applications
We present a series of definitions and theorems demonstrating how to reduce the requirements for proving system refinements ensuring containment of fair stuttering runs. A primary result of the work is the ability to reduce the requisite proofs on runs of a system of interacting state machines to a set of definitions and checks on single steps of a small number of state machines corresponding to the intuitive notions of freedom from starvation and deadlock. We further refine the definitions to afford an efficient explicit-state checking procedure in certain finite state cases. We demonstrate the proof reduction on versions of the Bakery Algorithm.
Introduction
Much of hardware and software system design focuses on how to optimize the execution of tasks by dividing the tasks into smaller computations and then scheduling and distributing these computations on the available resources. The natural specification for these systems is an assurance that the systems eventually complete the supplied tasks with results consistent with an atomic (or as atomic as feasible) execution of the task. We refresh the notion of fair stuttering refinements [10] as a means of codifying these specifications -a fair stuttering refinement between two systems ensures that every infinite run of a lower-level system with fair selection and finite stuttering maps to a similarly restricted infinite run of a higher-level system. This notion of refinement can allow sequences of smaller steps in the implementation to be mapped to single steps in the specification while additionally requiring that every task makes progress to completion.
Many previous efforts [10] have attempted to improve the capability of theorem provers in reasoning about refinements for distributed and concurrent systems. Previous efforts in regard to the ACL2 theorem prover [4] focused on trying to reduce the proofs of stuttering refinements with additional structures added to define fair selection and ensure progress. These efforts generally boiled down to showing that either the specification could match a step of the implementation, or the implementation stuttered and some rank function decreased. The primary difficulty in these proofs was defining and proving an inductive invariant (either through ACL2 or by trying to prove the invariant through some form of state exploration). In addition, the inclusion of additional structures to track fairness and progress, as well as the resulting definition of rank functions, proved complex. Further, the additional structures at times obfuscated whether the specification was complete and accurate.
In this paper, we take a different tack. We assume certain characteristics of the system we are trying to verify and leverage these characteristics to reduce the proof obligations. In particular, we first assume that the systems we are trying to verify are asynchronous in terms of how tasks make progress to completion. Further, we require the system definition to split the normal next-state transition relation into a next-state relation which only takes forward steps and a blocking relation which defines precisely when a task is blocked from making progress. From these assumed characteristics, we define proof reductions which reduce the goal of proving fair stuttering refinement to proving properties of a few task steps in relation to each other. These proof reductions have been formally defined and mechanically proven in ACL2 and are included in the supporting materials for this paper. In the remainder of this paper, we cover two stages of proof reductions and review the application of the reductions to a version of the Bakery Algorithm. We conclude the paper with further reductions targeting efficient automatic checks in the finite-state case.
Preliminaries
Commonly, systems are defined by an initial state predicate: (init x) and a next-state relation: (next x y). A run of the system is then simply a sequence of states where the first state satisfies (init x) and each pair of states in the sequence satisfies (next x y). We extend this basic construction in a couple of ways.
First, our goal is to reason about fair executions of a system (either as an assumption of fair selection for which task will update next or as a guarantee that every task makes progress). Thus, we assume that there is some set of task identifiers recognized by a predicate (id-p k) and add a task id parameter to the next-state relation: (next x y k) where this now relates state x to state y for an update to the task with id k. We also assume only one task updates at each step of the system without any prescribed order of task updates -essentially, the system is asynchronous at the level of task updates.
Second, we will find it useful to require the definition of an additional relation (blok x k) which returns true when the task identified by k is currently blocked from making progress in state x. Further, with this required definition of (blok x k), we will also require the theorem: (not (next x x k)) be proven and use inequality of next-states as a marker that a task is making progress to completion.
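As a toy illustration of these three defining functions (a sketch in Python, not taken from the paper's supporting materials): a system of turn-taking counter tasks, where blok captures exactly when a task cannot advance and the forward-step function always changes the state, mirroring the required theorem (not (next x x k)).

# Toy asynchronous system: round-robin turn-taking counters.
# State x = (turn, counts); task k may advance only when turn == k.
def init(x):
    turn, counts = x
    return turn == 0 and all(c == 0 for c in counts)

def blok(x, k):
    turn, counts = x
    return turn != k          # precisely when k cannot make progress

def next_state(x, k):
    # The forward step for task k; it always changes the state, the
    # analogue of the required theorem (not (next x x k)).
    turn, counts = x
    counts = list(counts)
    counts[k] += 1
    return ((turn + 1) % len(counts), tuple(counts))

x0 = (0, (0, 0, 0))
assert init(x0) and not blok(x0, 0) and blok(x0, 1)
assert next_state(x0, 0) != x0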
A system is then defined by three functions: (init x), (next x y k), and (blok x k). Our final goal is to prove that the fair runs of an implementation system map to fair runs of a specification system with an allotment for finite stuttering and some guarantee of progress. A run of a system is a function (run i) which takes a natural i and returns a state of the system. Runs will naturally need to satisfy some constraints as detailed in Figure 1. For a given system named sys, the macro (def-inf-run sys) assumes the definition of (sys-init x), (sys-next x y k), (sys-blok x k), (sys-pick i), (sys-run i) and generates the definitions and theorems defining the properties for the run as in Figure 1.
Of particular note, the function (step x y k) relates states x and y via (next x y k) only if k is not blocked in x and we are not stuttering (stuttering is denoted by the input k being nil); as a note, the only requirement we place on id-p is that (not (id-p nil)). So, an infinite run is defined by two functions: (run i), which defines the sequence of states, and (pick i), which defines the sequence of task identifiers selected. We constrain (pick i) to only return an id-p or nil. We can now naturally define fair selection of (pick i) by positing the existence of a function (fair k i) which returns natural numbers and, for each task id k, strictly decreases when k is not selected - see Figure 2. The macro (def-fair-pick sys id-p) assumes the definitions of (sys-pick i), (sys-fair k i), and (id-p k) and produces the theorems in Figure 2. We use the term fair run for an infinite run with a fair picker.
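As a concrete instance of these constraints, consider a strict round-robin picker over a fixed number of ids; this is our own hedged toy example and not part of the supporting books. Here (rr-fair k i) counts the steps remaining after time i until k is next selected, which strictly decreases whenever k is not picked and resets only when k is picked:

(defconst *n* 3)  ;; number of task ids, assumed for this example
(defun rr-id-p (k) (and (natp k) (< k *n*)))  ;; so (rr-id-p nil) fails
(defun rr-pick (i) (mod (nfix i) *n*))        ;; round-robin selection
(defun rr-fair (k i)                          ;; steps after i until k is picked
  (mod (- k (1+ (nfix i))) *n*))
;; When (rr-pick i) /= k, (rr-fair k i) = (rr-fair k (1- i)) - 1, giving the
;; strict decrease required by pick-fair-thm; when k is picked, rr-fair may
;; reset upward to *n* - 1.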
Fair selection of task identifiers ensures that each run only has finite stuttering and that each task gets a chance to make progress, but it does not guarantee that tasks actually make progress. We introduce the term valid run for a run which is not only fair but ensures progress for each task. In order to ensure progress, we define a function (prog k i) similar to (fair k i), but in addition to ensuring that pick eventually equals k, we also need to ensure that a state change actually occurs. The properties in Figure 3 ensure a valid run, and the macro (def-valid-run sys id-p) produces these theorems for id-p, sys-run, sys-pick, and sys-prog. We note that a valid run is also a fair run and thus our notion of refinement is compositional - but it is better to prove that all fair runs of the implementation are valid runs and then restrict the refinement to valid runs mapping to valid runs, reducing the proof requirements accordingly at each step. This is straightforward from what we present here, but it is not a focus of this paper.

(encapsulate ((run (i) t) (pick (i) t))
  (local (defun run (i) ....))
  (local (defun pick (i) ....))
  (defun step (x y k)
    (if (or (null k)       ;; finite stutter
            (blok x k))    ;; or k is blocked in x
        (equal x y)
      (next x y k)))
  (defthm run-init-thm
    (implies (zp i) (init (run i))))
  (defthm run-step-thm
    (implies (posp i)
             (step (run (1- i)) (run i) (pick i)))))

Figure 1: Definition of an infinite run in ACL2

(defthm fair-nat-thm
  (natp (fair k i)))
(defthm pick-fair-thm
  (implies (and (posp i) (id-p k)
                (not (equal (pick i) k)))
           (< (fair k i) (fair k (1- i)))))

Figure 2: Fair Runs: fair task selection during a run

(defthm prog-is-nat
  (natp (prog k i)))
(defthm run-prog-thm
  (implies (and (posp i) (id-p k)
                (or (not (equal (pick i) k))
                    (equal (run i) (run (1- i)))))
           (< (prog k i) (prog k (1- i)))))

Figure 3: Valid Runs: ensuring task progress during a run
Proof Reduction to Single System Steps
The principal objective of fair stuttering refinement is to prove that the fair runs of an implementation map to valid runs of a specification. The first set of proof reductions we present refreshes similar attempts in past work [10,8], transferring these proof requirements on infinite runs into properties about single steps of two systems impl and spec. The difference between these past efforts and the work presented here is that we directly specify properties related to guaranteeing progress for each task in the system, and we leverage the definition of the blocking relation. In addition, while the proof reduction to single steps presented in this section could be used as is, the design of the reduction is influenced by the needs of the subsequent proof reductions over tasks presented in Section 4. The book "general-theory.lisp" in the supporting materials covers the work in this section.
The goal is to show that if one were to prove certain properties about steps of an implementation system impl and a specification system spec, then one could infer a fair stuttering refinement -every fair run of impl maps to a valid run of spec. We wish to prove this for any specification and implementation system, so specifically, for any impl and spec and any fair run impl-run of the implementation, if we have proven the required properties then we can map impl-run to a valid run spec-run of spec. An overview of the structure of the book "general-theory.lisp" is provided in Figure 4 and attempts to codify this goal. The definitions of the impl and spec systems and the fair run impl-run of impl are constrained within an encapsulate to only have the properties: (def-inf-run impl), (def-fair-pick impl id-p), (def-system-props impl id-p), (def-valid-system impl id-p), and (def-match-systems impl spec id-p). From this fair run impl-run and the properties proven on spec and impl, we can build a valid run spec-run. While it is not possible to make this a closed-form statement of correctness in ACL2, we believe the structure of the book is sufficient to establish the claim.
The function (spec-run i) in Figure 4 defines the spec state at each time to simply be (impl-map (impl-run i)) and the function (spec-pick i) is simply (impl-pick i) except that we introduce finite stutter (i.e. return nil) if the mapped state doesn't change. It is customary to define some notion of observation or labeling of states that must be preserved to ensure correlation of behavior between spec and impl -we assume human review has ensured that the mapping from impl states to spec states preserves any observations relevant to the specification. In this regard, it is relevant that the mapped run on the spec is relatively simple in definition as it avoids errors or oversights in specification due to an obfuscation of how the implementation and specification are correlated.
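In code, the mapped run is essentially the following (a sketch of the construction just described; the precise definitions live in "general-theory.lisp" and may differ in detail):

(defun spec-run (i)
  (impl-map (impl-run i)))
(defun spec-pick (i)
  ;; stutter (return nil) when the mapped state does not change
  (if (equal (impl-map (impl-run i))
             (impl-map (impl-run (1- i))))
      nil
    (impl-pick i)))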
The properties we need to prove for impl and spec are defined by the macros def-system-props, def-valid-system, and def-match-systems. Along with the functions defining the impl and spec systems, additional definitions are required for each of these macros. We will shortly go into greater detail on the properties we will assume as constraints for these functions, but first, we refer to the listing provided in Figure 5.

;; ASSUMPTIONS:
;; assume relevant properties of given systems impl and spec:
(def-system-props impl id-p)
(def-valid-system impl id-p)
(def-match-systems impl spec id-p)
;; assume an infinite run of the impl system:
....
;; def.s and theorems to establish results.
....
;; CONCLUSIONS:
;; and prove that the corresponding spec-run is indeed a valid run of spec:
(def-inf-run spec)
(def-valid-run spec id-p)

Figure 4: Structure of the book "general-theory.lisp"

• IMPL system definition:
  - (impl-init x) - initial predicate on states x for impl system
  - (impl-next x y k) - state x transitions to state y on selector k
  - (impl-blok x k) - state x blocked for transitions for selector k
• SPEC system definition:
  - (spec-init x) - initial predicate on states x for spec system
  - (spec-next x y k) - state x transitions to state y on selector k
  - (spec-blok x k) - state x blocked for transitions for selector k
• Definitions needed for (def-system-props impl id-p) macro:
  - (impl-iinv x) - inductive invariant on impl states
• Definitions needed for (def-match-systems impl spec id-p) macro:
  - (impl-map x) - maps impl states to corresponding spec states
  - (impl-rank k x) - ordinal decreases until spec matches transition for k
• Definitions needed for (def-valid-system impl id-p) macro:
  - (impl-noblk k x) - is task k invariantly no longer blockable in state x
  - (impl-nstrv k x) - ordinal strictly decreases until (impl-noblk k x)
  - (impl-starver k x) - unblocked id whose progress leads to (impl-noblk k x)

Figure 5: Function definitions required for the single-step system-level properties

The macro (def-system-props impl id-p) expands into simple theorems ensuring (not (id-p nil)), ensuring (impl-next x x k) is never true, and ensuring the state predicate (impl-iinv x) is an inductive invariant for impl - namely that (impl-iinv x) holds in the initial state and persists across (impl-next x y k) transitions.
The (def-match-systems impl spec id-p) macro requires defining (impl-map x), a mapping from impl states to spec states, and a ranking function (impl-rank k x) which returns an ordinal for each task id k. The main properties generated by def-match-systems are as follows. The theorem map-matches-next ensures that on any step (impl-next x y k) for a task k which is not blocked in x and where the mapped specification state changes (i.e., (!= (impl-map x) (impl-map y))), the spec must be able to match the transition and the spec state cannot be blocked in the spec for task k. The theorem map-finite-stutter ensures that when the mapped implementation state does not change on an update for task k in impl, the ordinal returned by impl-rank must strictly decrease, and the theorem map-rank-stable ensures that this ordinal does not increase when task k is not selected. The clear intent of these properties is to ensure that as long as a task k is not indefinitely blocked when it is selected for update in impl, a matching spec transition must eventually be generated. The question is then naturally how to ensure that a task is not indefinitely blocked. This concept of being indefinitely blocked is commonly called "starvation" in the literature, and the def-valid-system macro will generate properties intended to ensure that no task is starved.
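A plausible rendering of these three properties as ACL2 theorems (our reconstruction from the prose above; the forms generated by the macro in the supporting books may differ in detail) is the following:

(defthm map-matches-next
  (implies (and (impl-iinv x) (id-p k)
                (not (impl-blok x k))
                (impl-next x y k)
                (not (equal (impl-map x) (impl-map y))))
           (and (not (spec-blok (impl-map x) k))
                (spec-next (impl-map x) (impl-map y) k))))

(defthm map-finite-stutter
  (implies (and (impl-iinv x) (id-p k)
                (not (impl-blok x k))
                (impl-next x y k)
                (equal (impl-map x) (impl-map y)))
           (o< (impl-rank k y) (impl-rank k x))))

(defthm map-rank-stable
  (implies (and (impl-iinv x) (id-p k) (id-p l)
                (!= k l)
                (impl-next x y l))
           (o<= (impl-rank k y) (impl-rank k x))))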
The (def-valid-system impl id-p) macro requires the definition of a predicate (impl-noblk k x) which is true when the task k can no longer be blocked in state x, and a function (impl-nstrv k x) which nominally returns an ordinal that decreases until (impl-noblk k x) is true. Once a task k reaches an impl-noblk state, it can no longer be blocked until it transitions, and thus the fair selection of k will ensure a transition of k occurs. Unfortunately, a task's progress to an impl-noblk state may depend on any number of other tasks or components in the impl state. At this general level of system definition, we only have system states x and task ids k, so we imagine that for any k and x, we could define a set of task ids called the starve-set which need to make progress before k can reach a noblk state. Updates to ids which are not in this starve-set should simply have no effect on this progress, and so we will assume that (impl-nstrv k x) will strictly decrease on transitions for ids in the starve-set and remain unchanged otherwise. Unfortunately, it might be possible that all of the tasks in the starve-set are blocked, and so we need the additional definition of an (impl-starver k x) which returns an id in this starve-set which is currently not blocked in state x. Additionally, we need to ensure that when an element outside of the starve-set is chosen, the (impl-starver k x) remains unchanged. The encoding of these properties as ACL2 theorems is generated by the def-valid-system macro and is listed here:

(defthm noblk-blk-thm
  (implies (and (iinv x) (id-p k) (noblk k x))
           (not (blok x k))))

(defthm noblk-inv-thm
  (implies (and (iinv x) (id-p k) (id-p l) (!= k l)
                (next x y l) (noblk k x))
           (noblk k y)))

(defthm starver-thm
  (implies (and (iinv x) (id-p k) (not (noblk k x)))
           (not (blok x (starver k x)))))

(defthm nstrv-decreases
  (implies (and (iinv x) (id-p k)
                (!= k (starver k x))
                (next x y (starver k x))
                (not (noblk k x)))
           (o< (nstrv k y) (nstrv k x))))

(defthm nstrv-holds
  (implies (and (iinv x) (id-p k) (id-p l) (!= k l)
                (next x y l) (not (noblk k x)))
           (o<= (nstrv k y) (nstrv k x))))

(defthm starver-persists
  (implies (and (iinv x) (id-p k) (id-p l) (!= k l)
                (!= l (starver k x)) (next x y l)
                (not (noblk k x))
                (= (nstrv k y) (nstrv k x)))
           (= (starver k y) (starver k x))))

With these properties assumed as constraints, we return to the goal of proving that the infinite run defined by (spec-run i) and (spec-pick i) from Figure 4 is indeed a valid run of spec. In order to do that, we need to define a function spec-prog which satisfies the requirements set out in Figure 3. First, it is useful to define an (impl-prog k i) and show that the impl-run is indeed a valid run.
The definition of (impl-prog k i) is in Figure 6 and essentially looks forward into impl-run until we reach an i where k is picked and the state changes. The key point is obviously the question of what measure demonstrates that this function terminates, and this follows from our earlier discussion of the (impl-noblk k x), (impl-nstrv k x), and (impl-starver k x) functions. If we have (impl-noblk k ..) at the current state, then the task with id k cannot be blocked and we can simply count down the (impl-fair k i) measure until task k is selected - the state will change at that time since k will still be unblocked and impl-next must change the state. If (impl-noblk k ..) does not currently hold, then we know there is a task id (impl-starver k ..) which cannot be blocked in the current state, and on each transition either (impl-nstrv k ..) strictly decreases or (impl-starver k ..) does not change. Thus, at each step, either the impl-nstrv measure strictly decreases, or the fair measure for impl-starver counts down and will eventually expire, at which point impl-nstrv will strictly decrease.
Proof Reduction to a Small Bounded Number of Tasks
In the previous section, we presented a proof reduction of the requirements for fair stuttering refinement from reasoning about infinite runs of systems to reasoning about single steps of systems. We did not make any assumption about the state structure of the systems other than that updates occurred asynchronously at some prescribed task level. In this section, we will assume a structure on the states of a system and show how to reduce the requisite properties from across the large state structure to the properties on components of the state. Throughout this section and the next, we will use the set (s k v r) and get (g k r) operations from the records book [5]. In particular, (g k r) takes a record r and returns either the value previously set for key k in record r or nil as default.
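For readers unfamiliar with the records book, its get-over-set behavior on the empty record (nil) looks like this:

;; get after set returns the stored value; unset keys default to nil
(g :a (s :a 1 nil))  ;; => 1
(g :b (s :a 1 nil))  ;; => nil (default)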
The book "trans-theory.lisp" in the supporting materials for this paper includes the definitions and proofs relating to this section. The structure of this book is similar to that shown for "general-theory.lisp" in Figure 4 in that there is an encapsulation which entails the system definitions and properties we want to assume and then outside of the encapsulation, we prove the derived results. For the previous section, in "general-theory.lisp", we proved the property in Figure 8 (in an abuse of notation pretending ACL2 were higher-order for a moment), For this section, our goal is to define systems at a task level and derive the system-level results. In the same higher-level-abuse format as above, we have the property from "trans-theory.lisp" also in Figure 8.
We take the state of the system to be a record associating keys to task states, what we call t-states. The task id selected on input is now simply one of these keys, and the update of the state will only update the corresponding entry of the record. We presume and constrain a fixed finite set of keys, (keys), of arbitrary size and composition, and membership in this set will define the id-p test for task id selection. The state of the system is then a record mapping members of this finite set (keys) to t-states, and the system will be defined at the task level. We define task-based systems by assuming the pertinent definitions on task states in the system and deriving the system-level definitions across the state. We name these systems derived from the task-level definitions tr-impl and tr-spec. In Figure 5 from the previous section, we listed the function definitions required for the single-step system-level properties; we do the same for the single-step task-level properties in Figure 7.

• TR-IMPL system definition:
  - (tr-impl-t-init a k) - initial state predicate for t-state a and key k
  - (tr-impl-t-next a b x) - t-state a transitions to t-state b in state x
  - (tr-impl-t-blok a b) - t-state a is blocked from stepping by t-state b
• TR-SPEC system definition:
  - (tr-spec-t-init a k) - initial state predicate for t-state a and key k
  - (tr-spec-t-next a b x) - t-state a transitions to t-state b in state x
  - (tr-spec-t-blok a b) - t-state a is blocked from stepping by t-state b
• Definitions needed for (def-tr-system-props tr-impl) macro:
  - (tr-impl-iinv x) - inductive invariant as previously.. no change at task-level
• Definitions needed for (def-match-tr-systems tr-impl tr-spec) macro:
  - (tr-impl-t-map a) - maps tr-impl t-states to corresponding tr-spec t-states
  - (tr-impl-t-rank a) - ordinal decreases until mapped t-state must change
• Definitions needed for (def-valid-tr-system tr-impl) macro:
  - (tr-impl-t-noblk a b) - is t-state a invariantly not-blocked by t-state b
  - (tr-impl-t-nstrv a b) - positive natural which strictly decreases until (t-noblk a b)
  - (tr-impl-t-nlock k x) - ordinal strictly decreases from k to the blocker of k in x

Figure 7: Function definitions required for the single-step task-level properties

"general-theory.lisp":

(implies (and (def-system-props impl id-p)
              (def-valid-system impl id-p)
              (def-match-systems impl spec id-p))
         (implies (and (def-inf-run impl)
                       (def-fair-pick impl id-p))
                  (and (def-inf-run spec)
                       (def-valid-run spec id-p))))

"trans-theory.lisp":

(implies (and (def-tr-system-props tr-impl)
              (def-valid-tr-system tr-impl)
              (def-match-tr-systems tr-impl tr-spec))
         (and (def-system-props tr-impl key-p)
              (def-valid-system tr-impl key-p)
              (def-match-systems tr-impl tr-spec key-p)))

Figure 8: High-level properties for the theory files

Many of the system-level derived functions follow simply from the task level. The system-level (tr-impl-init x) predicate checks that (tr-impl-t-init (g k x) k) holds for all keys k. The system-level (tr-impl-next x y k) only updates (g k x) as (tr-impl-t-next (g k x) (g k y) x) and leaves all other keys untouched in x. The system-level block function (tr-impl-blok x k) checks if there is any key l such that (tr-impl-t-blok (g k x) (g l x)). The system-level mapping function simply goes through all keys and calls tr-impl-t-map for the corresponding t-state, and the system-level rank just calls (tr-impl-t-rank (g k x)) directly. The inductive invariant does not change; there is just one inductive invariant defined on the entire record defining the system state.
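To make the lifting concrete, here is a hedged sketch (ours; the definition in "trans-theory.lisp" may differ) of the system-level block function, recurring over the key set with the (scar s), (scdr s), and (card s) set operations also used later in this section:

(defun any-blok (k s x)
  (declare (xargs :measure (card s)))
  (if (zp (card s))
      nil
    (or (tr-impl-t-blok (g k x) (g (scar s) x))
        (any-blok k (scdr s) x))))

(defun tr-impl-blok (x k)
  (any-blok k (keys) x))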
Additionally, the system-level proofs for (def-system-props tr-impl key-p) and (def-match-systems tr-impl tr-spec key-p) are straightforward and follow from these system-level definitions and the properties of the task-level definitions.
The functions and properties for proving progress and valid impl runs are more involved. For the sake of brevity and readability, we will drop the tr-impl- prefix from the system-level and task-level definitions for the remainder of this section. In addition to ensuring that t-nlock returns an ordinal and t-nstrv returns a positive natural number, the macro (def-valid-tr-system tr-impl) introduces the following properties:

(defthm t-noblk-blk-thm
  (implies (and (iinv x) (key-p k) (key-p l)
                (t-noblk (g k x) (g l x)))
           (not (t-blok (g k x) (g l x)))))

(defthm t-noblk-inv-thm
  (implies (and (iinv x) (key-p k) (key-p l)
                (t-noblk (g k x) (g l x))
                (t-next (g l x) c x))
           (t-noblk (g k x) c)))

(defthm t-nlock-decreases
  (implies (and (iinv x) (key-p k) (key-p l)
                (t-blok (g k x) (g l x)))
           (o< (t-nlock l x) (t-nlock k x))))

(defthm t-nstrv-decreases
  (implies (and (iinv x) (key-p k) (key-p l)
                (not (t-noblk (g k x) (g l x)))
                (not (t-noblk (g k x) c))
                (t-next (g l x) c x))
           (< (t-nstrv (g k x) c)
              (t-nstrv (g k x) (g l x)))))

The system-level (noblk k x) definition simply checks that (t-noblk (g k x) (g l x)) holds for every key l, and as such, the task-level t-noblk-blk-thm and t-noblk-inv-thm are task-level projections of their system-level counterparts and the system-level properties follow fairly easily. The more interesting case comes up in defining the system-level (nstrv k x) and (starver k x). For the task level, the property t-nlock-decreases ensures that we don't have any "deadlocks", or simply that for any set of keys, there is always some key in that set which is not blocked in x by another key in the set. The combination of t-nstrv-decreases and the properties of t-noblk ensures that no task can be starved by another task.
The intuition behind defining the system-level (nstrv k x) begins by recognizing that if (not (noblk k x)), then there is some set of keys l such that (not (t-noblk (g k x) (g l x))). We will call this set of keys the may-block set. But since t-noblk persists once we reach it, we could sum up the (t-nstrv (g k x) (g l x)) for this may-block set, and the resulting ordinal would decrease until we reached a state where k was t-noblk for all l and thus noblk. Assume for the moment that k were not blocked (i.e., we could set (starver k x) to be k), and then consider an update for some key l. If that key were in the may-block set of k, then the ordinal would decrease. If l is not in the may-block set of k, then (t-noblk (g k x) (g l x)) and the transition of l cannot change the blocked status of k, and it cannot change the may-block set for k, and so progress is made. Unfortunately, there is no guarantee that k is not blocked, and thus we cannot simply pick k as a starver which ensures progress when selected.
But from the property t-nlock-decreases, starting with k in x, we can find a key which is not blocked by checking if the key is blocked and recurring on the first blocking key we find. This is the definition of the function (starver k x), included here:

(defun starver (k x)
  (declare (xargs :measure (t-nlock k x)))
  (if (and (iinv x) (key-p k) (blok x k))
      (starver (pikblk k x) x)
    k))

The function (pikblk k x) returns the first key we find such that (t-blok (g k x) (g (pikblk k x) x)). So, from k, we can find a key which is unblocked, but the question is then how to build a measure from the starve-set including k and (starver k x). The answer is to build a list of naturals where each element is the sum of t-nstrv for the may-block set (as we described before) at each step along the path from k to (starver k x), and to define our ordinal as the lexicographic product of the naturals in this list. The first observation is that at the end of this list we will have the summation of t-nstrvs for the may-block set of (starver k x), and since (starver k x) is not blocked, it will make progress as we discussed before. The other key observation is that at each step, the (pikblk k x) key will be in the may-block set of k, and thus even though a transition of (pikblk k x) may modify its may-block set and potentially increase the measure from that point, the measure for the may-block set of k will decrease and the ordinal overall will decrease. This list of naturals is defined by the function (nstrvs* k x), where the functions (scar s) and (scdr s) return the first element and remainder of a set respectively, and (card s) returns the cardinality of the set.
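A hedged sketch consistent with this description (our reconstruction; the actual definition in "trans-theory.lisp" may differ) follows, where sum-nstrv totals t-nstrv over the may-block set of k:

(defun sum-nstrv (k s x)
  (declare (xargs :measure (card s)))
  (cond ((zp (card s)) 0)
        ;; keys outside the may-block set contribute nothing
        ((t-noblk (g k x) (g (scar s) x))
         (sum-nstrv k (scdr s) x))
        (t (+ (t-nstrv (g k x) (g (scar s) x))
              (sum-nstrv k (scdr s) x)))))

(defun nstrvs* (k x)
  ;; termination is justified by t-nlock-decreases
  (declare (xargs :measure (t-nlock k x)))
  (if (and (iinv x) (key-p k) (blok x k))
      (cons (sum-nstrv k (keys) x)
            (nstrvs* (pikblk k x) x))
    (list (sum-nstrv k (keys) x))))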
This construction also shows one of the reasons we assume an arbitrary fixed finite set of (keys): in order to put a bound on (len (nstrvs* k x)). This restriction makes sense for other reasons as well. If the set of keys were not finite, then we would need some additional requirement to ensure that a task is not persistently blocked by an infinite sequence of newly instantiated tasks. Other options exist to avoid this (such as requiring that all new tasks cannot block existing tasks), but these alternatives end up imposing constraints we believe are too restrictive.
Example -A Bakery Algorithm
We use the Bakery algorithm as an example application of the proof reductions we present in this paper. The Bakery algorithm was developed by Lamport [7] as a solution to mutual exclusion with the additional assurance that every task would eventually gain access to its exclusive section. The Bakery algorithm has also been a focus of previous ACL2 proof efforts [9].
The essential idea of the algorithm is that each task first goes through a phase where it chooses a number (much like choosing a number in a bakery) and then later compares the number against the numbers chosen by the other tasks to determine who should have access to the exclusive section. The version of the Bakery algorithm we will use is defined in Figure 9 (the (upd r .. updates ..) simply expands into a nest of record sets).
In order to prove (def-valid-tr-system bake-impl), we need to define the t-nlock, t-noblk, and t-nstrv functions. The definition of (t-nlock k x) needs to return an ordinal that is strictly decreasing from the blocked task to the blocking task. From the bake-impl-t-blok relation, we note that :choosing states cannot be blocked and that lex< is already well-founded, so we can devise a suitable bake-impl-t-nlock:

(defun bake-impl-t-nlock (k x)
  (let ((a (g k x)))
    (make-ord 2 (if (g :choosing a) 1 2)
              (make-ord 1 (1+ (nfix (g :pos a)))
                        (ndx (g :key a))))))

For the t-noblk and t-nstrv definitions, we need to analyze when one task can no longer block another task. The simple answer is that (t-noblk a b) is reached once task b has chosen a :pos greater than the one in a, but we also have to make sure that task b is not choosing either. In addition, we note that if a cannot currently be blocked by any task, then we can set t-noblk; task a cannot be blocked if it is not in program locations 5 or 6. With that, we define bake-impl-t-noblk:

(defun bake-impl-t-noblk (a b)
  (or (and (!= (g :loc a) 5)
           (!= (g :loc a) 6))
      (and (not (g :choosing b))
           (> (g :pos b) (g :pos a)))))

Finally, we need to define t-nstrv, which counts down until we reach the t-noblk state. The simple answer would be to count from the exit of the :choosing phase until the next exit from the :choosing phase. Thus, we would return 8 if (g :loc b) was 5 and then proceed down to 6 for 7, then 5 for 0 (wrapping back), then down to 1 for 4 (the end of the next :choosing). This almost works, except that it is possible for b to be in :loc 2, 3, or 4 with a :pos lower than a's while a has proceeded further. Thus, we need to add a few steps for the case of being in 2, 3, or 4 with a potentially lower :pos; when we come back around for the next :choosing, we will reach noblk:

(defun bake-impl-t-nstrv (a b)
  (pos-fix
   (cond ((or (and (= (g :loc b) 2)
                   (< (g :temp b) (g :pos a)))
              (and (> (g :loc b) 2)
                   (<= (g :pos b) (g :pos a))))
          (+ 8 (- 8 (g :loc b))))
         ((>= (g :loc b) 5)
          (+ 5 (- 8 (g :loc b))))
         (t (+ 0 (- 5 (g :loc b)))))))

With these definitions and a suitable invariant bake-impl-iinv, we can prove the theorems for (def-valid-tr-system bake-impl) - each of which just blasts into a big case split which pushes through. For the specification of the bakery algorithm, we have a simple system bake-spec defined in Figure 10. Each task in this system goes through the following steps: first, load up a new provisional :pos in the :load variable, then proceed to set the :pos variable and begin to arbitrate in the 'interested state. Tasks are blocked if some other task is in the 'go state or is in the 'interested state and has a lower :pos. The definitions and proof of (def-match-tr-systems bake-impl bake-spec) are fairly straightforward and included in Figure 10. We note that it is feasible (although not required) to define the supporting functions and prove (def-valid-tr-system bake-spec) - this proves that all fair runs of bake-spec are valid, while the earlier proofs only ensured that the runs mapped from bake-impl runs were valid.
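For illustration, here is a hedged sketch of the spec-level blocking relation just described (the field name :st for a task's control state is our assumption; the actual definitions are those in Figure 10, and tie-breaking between equal :pos values, as in the real Bakery, is elided):

(defun bake-spec-t-blok (a b)
  ;; a (arbitrating in 'interested) is blocked by b when b holds the
  ;; exclusive section ('go) or is arbitrating with a lower :pos
  (and (equal (g :st a) 'interested)
       (or (equal (g :st b) 'go)
           (and (equal (g :st b) 'interested)
                (< (g :pos b) (g :pos a))))))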
In previous work [10], a similar proof effort was conducted in proving a fair stuttering refinement for a definition of the Bakery Algorithm. In that effort, the proof was complicated by the need to add additional structures to track fair scheduling and to ensure correlation to a specification which had additional structures to ensure progress for each task. These complications were avoided in the proof here, and as such, far fewer definitions and details were required. The reduced proof we present here is primarily the definition and proof of a sufficient inductive invariant, whereas much additional definition and proof was required in the earlier work [10].
(defun bake-impl-t-rank (a)
  (case (g :loc a)
    (0 1) (1 0) (2 1) (3 0) (4 2) (5 1) (6 0)
    (t 0)))

Figure 10: Bakery Specification System and Definitions for Proving Matching from Impl

Conclusion

This paper focused on mechanized proof reductions for general system definitions, but the work also supports improvements in more efficient automatic verification (in particular when the underlying task state space is finite). For example, take the somewhat draconian restriction that (t-next a b x) can be defined as (t-next a b) and, similarly, that the initial state predicate ignores the input k - a few things develop in this case. First, we note (somewhat trivially) that for every reachable system state composed of (say) n task states, every "substate" of n - 1 task states can also be reached. Additionally, if the task state space were finite, then we could compute all of the potential cycles in the blocking relation and, for each cycle of size n, determine whether it was reachable by searching through the system states with only n keys. A similar check could be implemented for the other properties with no more than 2 keys needed.
Of additional interest in this case is that reachable states of these systems have a particular characterization. Consider any run of a system: any steps in the run can be permuted as long as the permutation does not change the blocking relationship between the tasks involved. This means that for every reachable state, one can define a set of canonical runs which involve only stepping tasks until the blocking relationship is changed with respect to another task, and then switching to the blocking task, or stepping back and switching to the blockee task. This property limits the structure of potential invariants and suggests procedures for proving invariants over pairs of states. The inductive invariant iinv over the system state can be defined by invariant definitions on single task states, pairs of states, triples, etc., and in most cases (potentially with additional auxiliary variables) can be sufficiently defined on single t-states and pairs of t-states. In this case, the requisite properties of the defined t-nlock, t-nstrv, t-noblk, t-map, and t-rank definitions could be proven via GL on the specified finite t-state domain using a SAT solver, with sufficient conditions on the t-states assumed. An inductive invariant (defined on single t-states and pairs of t-states) could then be defined that proves each of these sufficient-condition assumptions as an invariant of the system. A model checker could be used to reduce the definitional requirements further by checking invariants (not requiring inductive invariants) and by checking for bad cycles to show that one could infer the existence of suitable t-nlock, t-nstrv, and t-rank. The model checking problems could be limited to a small number of tasks and possibly only single-task stepping, depending on the conditions of the definition. The work presented in this paper is a step into many potential future directions.
Improving the efficiency of cascade detection by the Baikal-GVD neutrino telescope
The deployment of the Baikal-GVD deep underwater neutrino telescope is in progress now. About 3500 deep underwater photodetectors (optical modules) arranged into 12 clusters are operating in Lake Baikal. To increase the efficiency of cascade-like neutrino event detection, the telescope deployment scheme was slightly changed. Namely, the inter-cluster distance was reduced for the newly deployed clusters and additional strings of optical modules are added between the clusters. The first inter-cluster string was installed in 2022 and two such strings were installed in 2023. This paper presents a Monte Carlo estimate of the impact of these configuration changes on the cascade detection efficiency, as well as the technical implementation and results of in-situ tests of the inter-cluster strings.
Introduction
Since 2016, the construction of the Baikal-GVD neutrino telescope has been continuing in Lake Baikal [1]. In the 2023 configuration, the telescope consists of 3456 optical modules combined into 12 GVD clusters and two experimental strings. One of the priority tasks of the Baikal project is to study the possibilities of increasing the efficiency of the detector based on the experience of its operation and the results obtained by other neutrino telescopes in recent years. The solution of this task, in particular, will create the necessary background for the development of a next-generation neutrino telescope project with an effective volume on the scale of 10 cubic kilometers. As experiments on neutrino telescopes have shown, a 10 km^3-scale detector will allow us to move from observing the diffuse flux of astrophysical neutrinos to studying individual neutrino sources. Research is being conducted in the areas of developing a new deep-sea photodetector (optical module) with increased sensitivity, exploring the possibility of upgrading the data acquisition system based on fiber-optic technology, and optimizing the configuration of the telescope's detection system. In this paper, an option for optimizing the telescope configuration is considered, based on the installation of additional, inter-cluster strings in the geometric centers of each triplet of clusters of the detector. The first experimental version of the inter-cluster string (ICS) was installed in Lake Baikal in April 2022. In 2023, two more ICSs were commissioned. The paper presents the results of calculations of the telescope efficiency for the new configuration, the technical implementation, and the first results of in-situ tests of the ICSs.
Optimization of Baikal-GVD configuration
The Baikal-GVD neutrino telescope is located in the southern part of Lake Baikal. The depth of the lake at the telescope location is 1366 m. Registration of Cherenkov radiation from neutrino interaction products in Baikal-GVD is carried out by optical modules (OMs) [2]. A Hamamatsu R7081-100 photomultiplier tube (PMT) is used as the photosensitive element of the OM. Optical modules are placed on vertical strings anchored at the bottom of the lake and grouped into clusters. The cluster includes a central string and seven strings evenly spaced around a circle with a radius of 60 meters. Each string holds 36 optical modules placed with a step of 15 meters at depths from 750 to 1275 meters.
Optimization of the Baikal-GVD configuration (the distances between the OMs, strings, and clusters) in order to achieve maximum sensitivity to the astrophysical neutrino flux was carried out for an E^-2 neutrino energy spectrum. Under this condition, the optimal distance between clusters was found to be 300 m. The energy spectrum of astrophysical neutrinos, later measured by IceCube, showed a higher value of the spectral index [3]. Taking into account the steeper neutrino spectrum, the distance between clusters was reduced in 2022 from 300 m to 250 m, and actions were taken to increase the sensitivity of the inter-cluster area of the telescope. From the point of view of technical implementation, the most effective way to increase the sensitivity of the telescope is to install additional inter-cluster strings in the geometric centers of each triplet of the Baikal-GVD clusters (see Fig. 1).
To estimate the effect associated with the installation of ICSs, the response of the detector was simulated in muon and cascade detection modes. The configuration of the ICS was completely the same as the configuration of conventional telescope strings: 36 optical modules located at distances of 15 meters vertically. Monte Carlo simulation shows that for muon events with a track length exceeding the geometrical dimensions of the telescope, the effective area increased in proportion to the increase in the number of optical modules. That means that there is no significant effect from the ICS installation. One can only note the more uniform sensitivity of the telescope with ICS to the flux of near-horizontal muons. The situation is significantly different for cascade events, which are relatively compact light sources. To estimate the effect of the ICS installation, 10^4 electron neutrino interaction vertices were simulated for the configuration shown in Fig. 1. The azimuthal angle and the cosine of the zenith angle of the cascade were simulated uniformly at each vertex. The primary neutrino energy was simulated over an E^-2.46 spectrum from 1 TeV to 10^5 TeV for each direction. The event selection criterion was a requirement of more than 30 triggered channels. This requirement ensures reliable suppression of background from atmospheric muon bundles. The distribution of events over the distance to the inter-cluster string for the two telescope configurations (distances between clusters of 250 m and 300 m, respectively) is shown in Fig. 2. Monte Carlo simulation shows that for a distance between clusters of 250 m, for a configuration consisting of 25 strings (24 strings grouped into 3 clusters plus the ICS), the number of selected events increased by 10% compared to the conventional configuration without ICS. The increase in the number of events was 5% for a distance between clusters of 300 m. For cascades with an energy of more than 100 TeV, where the background from atmospheric muons and neutrinos becomes smaller than the signal from astrophysical neutrinos, the increase in the number of events is 24% for the distance between clusters of 250 m. These results show a significant increase in the efficiency of the telescope in a configuration with the ICS.
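For reference, a standard inverse-CDF recipe (a generic method, not a detail taken from this analysis) for drawing energies from a power-law spectrum $E^{-\gamma}$ with $\gamma = 2.46$ between $E_{\min} = 1$ TeV and $E_{\max} = 10^5$ TeV from a uniform deviate $u \in [0,1)$ is

$E = \left[ E_{\min}^{1-\gamma} + u \left( E_{\max}^{1-\gamma} - E_{\min}^{1-\gamma} \right) \right]^{1/(1-\gamma)}.$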
Technical implementation of the inter-cluster string
The configuration of the ICS (see Fig. 3) is basically identical to the conventional Baikal-GVD string configuration [4]. The ICS consists of three sections of OMs. Each section includes 12 OMs and a section control module (CM). The OMs are connected to the CM by 92 m long deep-sea cables. The CM controls the OM operation, converts the analog signals of the PMTs into digital form, and forms local triggers of the section and time frames of events containing the pulse waveform [5]. The conversion of analog signals is carried out by a 12-channel ADC with a sampling frequency of 200 MHz. The control of the sections' operation, formation of the string trigger, and the exchange of data are provided by a separate deep-sea electronic unit, the string control module (SM). The ICS is connected to the cluster control center, just like the conventional cluster strings. Data from the ICS are transmitted to the cluster center via SHDSL Ethernet extenders and then transmitted to the shore station via a fiber-optic communication line.
The ICS is attached to the bottom of the lake by means of an anchor. Its vertical orientation is provided by buoys mounted at the top of the string. Due to the lake currents, the string may deviate from the vertical position. The shift of the upper OMs can reach tens of meters. To measure the position of the OMs in real time, a positioning system based on acoustic modems (AMs) is used [6]. The positioning system of the ICS consists of 4 AMs that provide a positioning accuracy of about 0.3 m. AM1 and AM2 are connected to CM1, AM3 and AM4 to CM3.
At a distance of 270 m from the ICS anchor, a laser calibration light source (laser beacon) connected to CM2 is installed. The power supply (24 V) and control systems (COM server with RS-485 interface) for the laser and for the AMs are the same, which ensures the unification of all CMs in the string. The laser source emits light at a wavelength of 532 nm; the pulse energy is 0.37 mJ (~10^15 photons per pulse) with a flash duration of about 1 ns. The laser source includes a light emission system, a radiation stability control system, a controlled attenuator, and a diffuser that ensures the formation of the radiation flux. The attenuator has 6 levels of reduction, with the highest attenuation corresponding to a factor of ~10^3. The laser beacon provides the inter-cluster time calibration as well as amplitude calibration of the channels, and also allows monitoring the characteristics of the lake water.
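As a consistency check of the quoted numbers: a 532 nm photon carries an energy $E_\gamma = hc/\lambda \approx 3.7 \times 10^{-19}$ J, so a 0.37 mJ pulse contains about $0.37 \times 10^{-3} / (3.7 \times 10^{-19}) \approx 10^{15}$ photons, in agreement with the value above.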
Time calibration of the ICS channels is carried out using LED sources mounted inside each OM. The LED wavelength is 470 nm, and the pulse duration is about 5 ns. A light pulse is formed in a 15° cone. Each OM is equipped with two LED sources oriented upwards. Due to the narrow radiation cone, the emitted light is not detected by the OMs of neighboring strings. Corrections to the ICS time calibration relative to the surrounding clusters are determined using a system of 10 horizontally oriented LEDs mounted on a dedicated support. Typically, a few OMs on the string carry such instrumentation, referred to as LED beacons.
The first ICS was installed and put into operation as part of the Baikal-GVD neutrino telescope in April 2022. Its successful operation throughout the year allowed the installation of the inter-cluster strings to continue: in 2023, two more similar strings were installed. The GVD-2023 configuration is shown in Fig. 4. The dots show the strings grouped into clusters. The cluster numbers correspond to the sequence of their commissioning, the asterisks indicate the technological strings with laser calibration sources, and circled asterisks show the locations of the ICSs that are connected to clusters 9, 11, and 12. Analysis of the data sample for 2022-2023 is in progress now. It is planned to estimate the increase in the number of events detected in the cascade channel associated with the ICS installation. In addition to an increase in the number of astrophysical events, an improvement in the background suppression capabilities is also expected. Atmospheric muon bundles are the main background source in the cascade detection mode. Fig. 4 shows an example of such a background event. The detection of the muon tracks with the inter-cluster strings provides additional suppression of muon bundles in the telescope.
In-situ studies of the inter-cluster strings
The accuracy of the event reconstruction in a neutrino telescope depends on the accuracy of the PMT pulse time measurement. Time uncertainties are determined by two factors: the accuracy of the time offsets of the channels (time calibration) and the uncertainty of signal registration times (time synchronization). The full-scale ICS tests conducted in 2022 included studies of the accuracy of its time calibration and synchronization.
The equipment of the time calibration system was developed for the basic version of the Baikal-GVD clusters with a distance between the strings of 60 meters. Under this condition, the accuracy of channel calibration is about 2 ns. With the increased distance between the conventional Baikal-GVD strings and the ICS (about 80 m), the magnitude of the signal from the calibration LED source decreases, which should affect the accuracy of the time calibration of the ICS. To study this accuracy, several calibration series of measurements were carried out. An example of a calibration event initiated by the LED beacon of the ICS and registered on three surrounding clusters is shown in Fig. 5. The triggered channels are highlighted with circles, and the color shows the time of the signal registration. The zero-time count is in the middle of the ADC time frame and corresponds to the moment when the cluster trigger is registered by the section module. The adjustment of the time scale is carried out at the stage of setting up the installation using programmable delays of event frames.
The channels within each string were calibrated using the LEDs embedded in OMs. Inter-string time calibration of the channels was performed using LED beacons. To calculate the relative time offsets between the channels of the ICS and the strings of the clusters, the difference ΔT between the time delay dTg expected from the geometry and the measured time delay dTm for a pair of triggered channels was determined. The flash of the ICS's LED beacon triggers several channels in each of the nearby clusters (see Fig. 5). This allows estimation of the calibration accuracy σ, which is determined by the spread of the measured values of ΔT for different pairs of channels. An example of the time calibration of the ICS relative to the surrounding clusters is shown in Table 1. The table shows the distance R from the ICS to the nearest strings of the surrounding clusters, the average charge Q on the channels in photoelectrons, and the standard deviation σ of ΔT calculated from different pairs of channels. The charges of the signals on the calibration channels are about 4 p.e. on average. With such charge values, the uncertainty of the time calibration of the ICS is about 2.5 ns, which is close to the calibration accuracy of the conventional Baikal-GVD strings (2 ns). Such an accuracy is acceptable from the point of view of physics event reconstruction [7]. It should be noted that the measured charge of the triggered channels does not correlate with the distance from the ICS to the clusters. This is due to the fact that the amplitudes of the light pulses for different instances of LEDs can vary significantly, which violates the isotropy of the light flux from the LED beacon as a whole.
The Baikal-GVD time synchronization system ensures the operation of all telescope channels on a single time scale. It includes two subsystems that ensure synchronization of channels within one cluster and synchronization of clusters with each other [7]. The operation of these subsystems is based on different principles. Synchronization of channels within a cluster is carried out using a common trigger generated in the cluster control center and broadcast to all its sections. For inter-cluster synchronization, the time of the common trigger is measured in each of them. To study the accuracy of ICS synchronization with the clusters, calibration series in the regime of simultaneous illumination of the ICS and the clusters by a laser were analyzed. As a parameter characterizing the accuracy of synchronization, the standard deviation (RMS) of the difference dt of the response times of pairs of synchronized channels was used. Channels with charges exceeding 10 photoelectrons were selected for the analysis. Fig. 6 illustrates the accuracy of the synchronization of the ICS with the surrounding clusters 5, 8, and 9. It should be emphasized that ICS synchronization with cluster 9, of which it is part as the 9th string, was carried out using the common trigger of the cluster, while for clusters 5 and 8, the times between cluster triggers were measured using the "White Rabbit" synchronization system. The accuracy of the synchronization of the ICS with the surrounding clusters was 2.1-2.2 ns, which is in good agreement with the expected value of 2.0 ns determined by the time clock discretization of 5 ns.
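One way to see the expected 2.0 ns: each of the two independent clocks contributes a quantization error uniform over the $T = 5$ ns clock period with variance $T^2/12$, so the difference dt of the two timestamps has $\sigma = T\sqrt{2/12} = 5/\sqrt{6} \approx 2.0$ ns.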
Conclusion
Monte Carlo calculations show a significant increase in the efficiency of cascade detection by the Baikal-GVD telescope when inter-cluster strings are added between the clusters. For a configuration of three clusters, the installation of an inter-cluster string provides an increase in the number of events in cascade mode of 10% and 24% for cascade energies above 1 TeV and 100 TeV, respectively. The technical implementation of the inter-cluster string installation is straightforward. The data acquisition system, deep-sea cable infrastructure, and power supply system of the Baikal-GVD cluster can be easily adapted to serve 9 strings (including the ICS) instead of 8. In-situ tests of the ICS showed the correctness and reliability of the equipment operation and a sufficiently high accuracy of its time calibration (~2.5 ns). The accuracy of time synchronization of the ICS was ~2 ns, coinciding with the accuracy of synchronization of the conventional Baikal-GVD strings. Based on the positive experience of operating the first ICS in 2022, two more such strings were installed in 2023. In the future, it is planned to equip all Baikal-GVD clusters with inter-cluster strings.
Figure 1: Inter-cluster string located in the geometric center of three Baikal-GVD clusters.
Figure 2: Distribution of events over the distance to the geometric center of three clusters for two configurations: the distance between the cluster centers of 250 m (on the left) and 300 m (on the right). Solid lines are the configuration with the ICS, dashed lines are without ICS.
Figure 3: Scheme of the mounting and basic elements of the ICS: optical module (OM), acoustic modem (AM) and laser calibration source.
Figure 4: Left: Configuration of GVD-2023: asterisks indicate strings equipped with lasers, circles show ICSs. Right: An example of a muon bundle detected jointly by a GVD cluster and an ICS; the color shows the delays of the signals relative to the first triggered channel.
Figure 5: View of the calibration event initiated by the ICS LED beacon on the surrounding GVD clusters, in projections to the radiation source. The time scale is presented under each figure in nanoseconds.
Figure 6: Distribution of events on the time difference dt, measured between ICS and clusters 5, 8, and 9.
Table 1: Summary from an LED beacon calibration study of an inter-cluster string relative to nearby clusters (see text).
Surgical Repair of Bulbar Urethral Strictures: Advantages of Ventral, Dorsal, and Lateral Approaches and When to Choose Them
Objectives. To review the available literature describing the three most common approaches for buccal mucosal graft (BMG) augmentation during reconstruction of bulbar urethral strictures. Due to its excellent histological properties, buccal mucosa graft is now routinely used in urethral reconstruction. The best approach for the placement of such a graft remains controversial. Methods. PubMed search was conducted for available English literature describing outcomes of bulbar urethroplasty augmentation techniques using dorsal, ventral, and lateral approaches. Prospective and retrospective studies as well as meta-analyses and latest systematic reviews were included. Results. Most of the studies reviewed are of retrospective nature and majority described dorsal or ventral approaches. Medium- and long-term outcomes of all three approaches were comparable ranging between 80 and 88%. Conclusion. Various techniques of BMG augmentation urethroplasty have been described for repairs of bulbar urethral strictures. In this review, we describe and compare the three most common “competing” approaches for bulbar urethroplasty with utilization of BMG.
Introduction
Buccal mucosa graft (BMG) is now routinely used in urethral reconstruction since its popularization by Burger et al. in 1992 in pediatric reconstruction [1] and subsequently by El-Kasaby et al. in 1993 for adult urethroplasty [2]. Its use in urethroplasty is arguably the gold standard for treatment of medium-and long-length strictures [3]. The first use of buccal mucosa in urethral reconstruction is attributed to Professor Sapezhko who by 1894 had performed 4 operations on humans [4,5]. In 1941, Humby, a British surgeon, described using buccal mucosa in hypospadias repair [6]. The excellent histological properties of buccal mucosa were subsequently described by Duckett et al. [7]. In comparison to skin, buccal mucosa holds the distinct advantage of being hairless and accustomed to a moist environment. Moreover, it has a thicker epithelial layer, thinner lamina propria, and a greater density of capillaries with an abundance of Type IV collagen.
All these qualities are thought to improve graft inosculation and survival after transplantation.
Various techniques of BMG augmentation urethroplasty have been described for repairs of bulbar urethral strictures. In this review, we describe and compare the three most common "competing" approaches for bulbar urethroplasty with utilization of BMG.
Indications.
Before describing the approach to bulbar stricture in detail, it is important to reiterate the indications for the use of oral mucosa in urethral reconstruction. The authors follow a traditional algorithm, where bulbar strictures <2 cm in length can mostly be treated with excision and primary anastomosis, whereby strictures longer than 2 cm may require adjunct maneuvers and the use of graft tissue to augment the caliber of the urethra. These maneuvers may include augmented anastomotic urethroplasty, typically used for strictures between 2 and 5 cm in length, or, for longer strictures, "pure" urethral augmentation in order to establish a larger gauge urethra. The choice of where and how to augment the urethra is discussed here in further detail.
Dorsal Onlay
This technique was first described by Barbagli et al. in 1998 and involved circumferential bulbar urethral dissection and dorsal stricturotomy, followed by augmentation of the stricturotomy by a penile skin graft (in the first 31 patients) or by BMG (in the last 6 patients) [8]. The key step of the procedure was "quilting" or spread-fixation of the graft on the tunica albuginea overlying the corpora cavernosa prior to suturing the edges of urethral mucosa to the edges of the graft. Even spread-fixation has a range of implementation techniques; some surgeons prefer a "traditional" manner of suturing through the graft to the underlying tunica albuginea, while others advocate for use of a biologic "glue." The advantage of suture quilting the graft includes microfenestration of the graft resulting from the surgical needle, which may aid in allowing any trapped blood to escape, preventing hematoma under the graft, and increasing the likelihood of proper buccal mucosa engraftment. This maneuver is critical in fostering sufficient graft apposition to the well-vascularized tissue of the corpora cavernosa and minimizing the risks of graft contracture and pseudodiverticula formation.
One of the advantages of the dorsal approach is that it yields a relatively bloodless operation. This is because the bulbar urethra is eccentrically located in the corpus spongiosum, with only thin dorsal coverage by corpus spongiosum that requires incision. Another advantage of the dorsal approach is its versatility and applicability for strictures of any length and location. The dorsal stricturotomy in the bulbous urethra can be extended proximally towards the membranous urethra or distally into penile urethra if required by intraoperative findings without dramatically altering the plan for reconstruction. In the event that complete or near-complete obliteration is identified after committing to dorsal stricturotomy, several solutions are described. These include (a) excision of the obstructed segment and conversion to augmented anastomotic urethroplasty [9], (b) removal of ventral mucosal strip and addition of ventral BMG onlay [10], and (c) ventral stricturotomy and addition of elliptical ventral inlay [11].
One of the disadvantages of the original dorsal approach is the need to circumferentially mobilize the urethra. Kulkarni et al. addressed this with their modification where mobilization is undertaken unilaterally and carried just across the midline dorsally, preserving the lateral blood supply on the contralateral side [12].
There have been numerous studies examining the success of BMG bulbar urethroplasty over the last two decades, with a wide range of follow-up and varying definitions of success. The Société Internationale d'Urologie (SIU), with the International Consultation on Urological Disease (ICUD), published a systematic review of 66 studies describing outcomes of a total of 934 patients after dorsal onlay urethroplasty, with an average follow-up of 42 months and a mean success rate of 88.3% [13]. Soon after, Barbagli et al. published a long-term retrospective paper on the deterioration rate of augmentation urethroplasty [14]. In this study, only patients with follow-up of greater than 6 years were included, totaling 81 patients after dorsal onlay BMG urethroplasty. At a median follow-up of 111 months, the authors reported an 80.2% success rate, defined as requiring absolutely no further instrumentation, including dilation. This compared to 81.5% and 83.3% for ventral and lateral onlay techniques, respectively, with similar lengths of follow-up. The overall conclusion drawn from these reviews is that no significant difference exists in recurrence rates between dorsal, ventral, and lateral approaches to bulbar urethroplasty [13,14].
Ventral Onlay.
The ventral "patch" onlay urethroplasty came to the forefront of urethral reconstruction in 1996 when, encouraged by the use of BMG in complex pediatric hypospadias repair, Morey and McAninch applied the graft to repair strictures of the bulbar urethra [15]. The authors describe a direct sagittal ventral urethrotomy through the diseased bulbar urethra, followed by sewing of the graft to each edge of the native urethral mucosa. Subsequently, the corpus spongiosum is closed over the graft in a second layer, and the bulbospongiosus muscle over this. While there is no separate tissue to which the graft can be "quilted," the spongiosal closure typically incorporates a small "bite" of the graft to increase proper apposition to the spongiosum that will provide its blood supply. The technique was introduced contemporaneously with Barbagli's dorsal onlay technique, and the advantages and superiority of each have been the subject of intense debate ever since.
Proponents of the ventral onlay cite a straightforward approach, not requiring extensive circumferential mobilization or the technical demands of dorsal graft placement. This allows urologists who treat strictures only occasionally to still feel comfortable performing urethroplasty for strictures that may not be amenable to excision and primary anastomosis. Moreover, the argument may be made that the thicker, ventrally placed corpus spongiosum provides a more robust vascular bed for buccal mucosa engraftment. Another anatomic consideration is the specific location of the bulbar stricture. Patterson and Chapple, in a comparison of surgical techniques, note that, for very proximal bulbar strictures, ventral onlay poses a clear advantage in exposure and technique and is the appropriate choice [16]. Palminteri et al. also contend that ventral placement of BMG in bulbar urethroplasty has no significant impact on sexual quality of life and in fact improved most measures of sexual life, aside from postejaculatory dribbling [17]. An additional benefit is that the ventral approach is amenable to use in complex situations, including recurrent stricture [18], after radiation [19], and with adjunct maneuvers such as gracilis muscle flap coverage in particularly high-risk, long-segment strictures [20]. The ventral approach has also been used as a direct route to the dorsal aspect of the urethra, allowing preservation of the bilateral vascular supports of the urethra [21].
Opponents of the ventral technique point to the need to make an incision through the thicker ventral corpus spongiosum in order to reach the eccentrically located bulbar urethra, resulting in a bloodier operation. There is also concern about an increased risk of sacculation, diverticulum, or pouch formation, as well as more frequent irritative voiding symptoms and urinary infection [22]. In their review of 11 series, Patterson and Chapple note several groups with a higher incidence of sacculation or diverticulum formation, with resultant worse postvoid dribbling, in ventral onlays. They go on, however, to document that an equal number of series found no significant anatomic or clinical difference in these findings when comparing ventral and dorsal onlay [16]. What is ultimately evident is that, in experienced hands and with meticulous technique, these issues can be minimized; furthermore, the incidence of sacculation seems dramatically higher in older series based on the use of skin, versus the more modern use of BMG [3].
This being said, there are certain disadvantages to the ventral approach. Several authors [23,24] have noted a finite incidence of urethrocutaneous fistulae after ventral stricture repair with BMG, which is essentially unheard of in the dorsal approach. Reiterating an advantage of the dorsal approach mentioned earlier, the ventral approach is less versatile, as it does not lend itself to extension of the urethrotomy distally into the penis should intraoperative findings require it.
While the global definition of success varies, a common criterion in most if not all series is the patency rate. The International Consultation on Urological Disease (ICUD) reviewed techniques in the management of anterior strictures and found the success rate of ventral onlay to range from 43 to 100%. The authors summarize these series, generating a total of 563 patients treated, with a mean follow-up of 34.4 months and a mean success rate of 88.8%, comparable with dorsal onlay urethroplasty. A number of smaller series, including a recent prospective randomized study, have compared dorsal and ventral techniques and reached a similar conclusion to the ICUD group: that there is no significant difference in success rates based on graft placement [13,25,26].
Lateral Onlay.
Lateral onlay BMG augmentation urethroplasty is utilized infrequently and is not well established, as reflected by its limited description in the literature. The procedure resembles the ventral onlay technique described above; however, the urethrotomy is made laterally after unilateral urethral mobilization. The graft is similarly sutured in place and the spongiosum is closed over the graft.
As described above, the various locations of the urethrotomy in substitution urethroplasty afford different benefits and can also result in varying consequences. The lateral urethrotomy was described by Barbagli et al. in 2005 [27]. This actually preceded the description of the modified dorsal onlay technique where dissection remains unilateral. In a similar vein to the one-sided dissection technique described by Kulkarni et al. [12], it was felt that eliminating circumferential dissection would help preserve the contralateral urethral blood supply. Furthermore, avoiding urethrotomy through the robust ventral spongiosum may decrease intraoperative blood loss.
While the advantages of one-sided dissection are shared with the modern dorsal onlay technique, several advantages are lost with a lateral onlay procedure. There is a stronger potential for sacculation and diverticulum formation. Additionally, the corpora cavernosa, which serve as a structured vascular bed in dorsal onlay urethroplasty, are not utilized in the same manner in the lateral technique. And while it may seem easier to carry a lateral urethrotomy, as compared with a dorsal urethrotomy, proximally into the membranous urethra, there are no actual data to support the use of lateral onlay in this setting.
In both lateral and ventral onlay, the spongiosum is closed over the BMG. However, in the case of lateral closure, the spongiosum can be rotated dorsally to protect the suture line. Unfortunately, the lateral spongiosal tissue is not as thick and vascular and accordingly may serve as a lower-quality bed for buccal mucosa engraftment. Like ventral grafting, lateral onlay urethroplasty should not be utilized in the repair of pendulous urethral strictures. Aside from the similar concerns for sacculation, there is also a conceptual concern for lateral curvature. This is not specifically documented in the literature, likely because the technique is already not employed in this arena.
One study describes outcomes in 6 patients undergoing lateral onlay urethroplasty. The nonreintervention rate at a mean of 42 months was 83%. Keeping in mind the context of a small sample size and the retrospective nature of the analysis, the lateral technique was comparable to dorsal (85%) and ventral (83%) onlay techniques [27].
The lateral approach offers few advantages, and those too are largely outweighed by its own disadvantages and the advantages of the dorsal and ventral approaches. This technique should be used sparingly and reserved for special circumstances when intraoperative limitations compromise the ability to complete dorsal mobilization.
Complications
The complications of bulbar urethral augmentation relate ostensibly more to the surgery itself than to any specific technique, although, as discussed in each of the sections above, particular techniques may predispose patients to specific postoperative concerns. Complications can include wound and/or urinary infection, urethrocutaneous fistula, perineal hematoma, blood loss requiring transfusion, or nerve injuries related to positioning. The overall incidence is low, and, in their series comparing these 3 approaches, Barbagli et al. noted no such complications amongst 50 patients [27].
Conclusion
Because the ventral, dorsal, or lateral placement of BMG is typically determined by the location and length of the stricture and by surgeon preference, comparative studies are limited. This review outlines the best available evidence supporting each technique. Aside from one randomized trial and one systematic review, the remainder of the studies referenced in this paper are retrospective reviews. While the best data suggest that patency outcomes are similar for each technique, appropriate patient selection is paramount to utilize the strengths of a given technique and avoid its shortcomings.
Subchondral bone density distribution of the talus in clinically normal Labrador Retrievers
Background: Bones continually adapt their morphology to their load-bearing function. At the level of the subchondral bone, the density distribution is highly correlated with the loading distribution of the joint. Therefore, subchondral bone density distribution can be used to study joint biomechanics non-invasively. In addition, physiological and pathological joint loading is an important aspect of orthopaedic disease, and research focusing on joint biomechanics will benefit veterinary orthopaedics. This study was conducted to evaluate the density distribution in the subchondral bone of the canine talus, as a parameter reflecting the long-term joint loading in the tarsocrural joint. Results: Two main density maxima were found, one proximally on the medial trochlear ridge and one distally on the lateral trochlear ridge. All joints showed very similar density distribution patterns, and no significant differences were found in the localisation of the density maxima between left and right limbs or between dogs. Conclusions: Based on the density distribution, the lateral trochlear ridge is most likely subjected to the highest loads within the tarsocrural joint. The joint loading distribution is very similar between dogs of the same breed. In addition, the joint loading distribution supports previous suggestions of the important role of biomechanics in the development of osteochondrosis (OC) lesions in the tarsus. Important benefits of computed tomographic osteoabsorptiometry (CTOAM), i.e. the possibility of in vivo imaging and temporal evaluation, make this technique a valuable addition to the field of veterinary orthopaedic research.
Background
Joint loading, including tensile and compressive stresses, is an important factor in cartilage and subchondral bone physiology and pathology [1][2][3]. Changes in the loading and biomechanical properties of these important load-bearing structures play a key role in the development and progression of orthopaedic disease [4].
In recent years, different methods have been applied to study joint biomechanics both in vitro and in vivo. The initial studies presented detailed anatomical descriptions of joint structures [5,6], followed by studies describing pressure distributions and contact areas [7,8]. These studies were often performed on cadaveric specimens and required a certain degree of dissection, thus altering joint kinematics. In vivo biomechanics research is often limited to kinetic and kinematic studies using marker data and pressure plates [4]. Actual joint loading cannot easily be assessed non-invasively in vivo, since it requires intra-articular insertion of pressure films [9], making it difficult to apply in studies of larger populations and patient populations. Subchondral bone density is directly influenced by joint biomechanics and limb function and can be used to evaluate joint biomechanics.
The stresses acting on the joint surface induce modelling and remodelling of the bony tissue, depending on whether the local strains exceed the modelling threshold or stay below the remodelling threshold. Consequently, increased joint loading leads to increased local strains, and bone modelling ensures an increase in subchondral bone density to withstand the increased loading [1,3]. In addition, altered joint biomechanics lead to an altered joint loading distribution, leading in turn to alterations in the subchondral bone density distribution [10,11].
The subchondral bone density in joints is highly correlated with joint loading and reflects the loading history of the joint [3,[11][12][13]. Using computed tomographic osteoabsorptiometry (CTOAM), the density distribution of the subchondral bone can be visualised and evaluated [3,[11][12][13]. In order to evaluate subchondral bone density and the changes associated with orthopaedic conditions, the normal, physiological subchondral bone density distribution has to be described first.
In addition, this type of biomechanical research can help to elucidate the role of joint biomechanics in the development of osteochondrosis (OC) [14]. Osteochondrosis is an orthopaedic condition in dogs that is considered to be multifactorial, with hereditary, dietary and environmental factors playing a role [15]. An environmental factor likely to influence the occurrence of OC is joint biomechanics, since OC lesions are often found in specific locations within the joint [16][17][18][19]. In the tarsocrural joint, lesions can be found medially (medial trochlear ridge tarsocrural osteochondrosis (MTRT-OC)) and laterally (lateral trochlear ridge tarsocrural osteochondrosis (LTRT-OC)) [14,20,21]. The specific joint anatomy likely affects the location of the OC lesions, and this study can aid in the understanding of the pathophysiology of this condition.
This study was conducted to describe the subchondral bone density distribution of the talus of healthy Labrador Retrievers non-invasively, as a parameter reflecting long-term joint loading in the tarsocrural joint, using CTOAM. The authors hypothesise an inhomogeneous distribution of the density of the subchondral bone.
Study population
A total of 20 tarsal joints (ten left and ten right) from ten adult (age 24-28 months) Labrador Retrievers, submitted for computed tomographic (CT) examination of the elbow joint for the screening of elbow dysplasia, were included in this study. The study was approved by the ethical committee of the Faculty of Veterinary Medicine, Ghent University (approval nr. EC2011/193), and informed, written owner consent was obtained in each case. Inclusion criteria for this study were no abnormalities on orthopaedic examination and lameness evaluation and no abnormalities on radiographs of the hips, elbows, and tarsal joints. After CT examination of the elbow joints, the tarsal joints were scanned as well.
Image acquisition
Under general anaesthesia, with the dog positioned in ventral recumbency, CT images were acquired from the tarsal joints using a four-slice helical CT scanner (Lightspeed Qx/i, General Electric Medical Systems, Milwaukee, WI, USA). The CT parameters were 120 kVp and 300 mAs. Contiguous, 1.25 mm collimated, transverse images were obtained in a soft tissue reconstruction algorithm. Left and right tarsal joints were scanned simultaneously, with the tarsal joints in extension, according to the patient protocol [21]. Acquisition time was approximately five minutes, including repositioning after CT examination of the elbow joints.
Image analysis
The CT images were exported in DICOM format to commercially available software (Analyze 11.0, Biomedical Imaging Resource, Mayo Foundation, Rochester, MN, USA), used to complete the CTOAM workflow (Fig. 1). In the first step, the talus was segmented using the segmentation algorithm in 'Analyze'. Based on the segmented images, two different three-dimensional (3D) views of the trochlear ridges were reconstructed (Fig. 2). A proximal view was reconstructed first, and the distal view was obtained by tilting the proximal view backwards approximately 90 degrees. This allowed the evaluation of the entire proximal talar joint surface of the lateral and medial trochlear ridges (Fig. 3). Subsequently, the subchondral bone plate of the articulating surface was isolated and reconstructed in exactly the same orientations. The maximum bone density was projected onto the articular surface using a maximum intensity projection (MIP). With a MIP, the 3D data volume (in voxels) of the subchondral bone plate is converted to a 2D image (in pixels) in which each pixel represents the maximum value in Hounsfield units (HU). This maximum value is obtained from the voxels along the line perpendicular to the pixel in the 2D image. The length of this line, i.e. the depth of the MIP, is based on the thickness of the subchondral bone plate and was set at 1.5 mm. This MIP view was then converted to a false colour scale, where the range of 200-1200 HU was divided into value ranges of 100 HU, each representing a colour. In descending order these colours were black, dark red, light red, orange, yellow, dark green, light green, dark blue, light blue, and white. This resulted in a densitogram (Figs. 1 and 3), which displays lines of isodensities, i.e. lines connecting regions of equal density. This densitogram is a visual representation of the apparent density distribution and was evaluated further.
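For readers who want to reproduce the projection step, the sketch below shows the core of the MIP and false-colour binning in NumPy. The array shape, placeholder data, and variable names are our assumptions, not the 'Analyze' implementation:

```python
import numpy as np

# Placeholder volume standing in for the segmented subchondral bone
# plate (HU values), with axis 0 perpendicular to the joint surface
# and spanning the 1.5 mm MIP depth.
plate_hu = np.random.uniform(200, 1200, size=(6, 30, 30))

# Maximum intensity projection: each 2D pixel takes the maximum HU
# along the line perpendicular to it, collapsing voxels to pixels.
mip = plate_hu.max(axis=0)

# False-colour densitogram: the 200-1200 HU window is divided into
# ten 100-HU bands, each band mapped to one colour of the scale.
band = np.clip((mip - 200) // 100, 0, 9).astype(int)
colours = ["black", "dark red", "light red", "orange", "yellow",
           "dark green", "light green", "dark blue", "light blue", "white"]
densitogram = np.asarray(colours)[band]  # 30 x 30 array of colour names
```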
For quantification purposes, the density values (in HU) were converted to 8-bit values, i.e. 256 density values, which were split equally over eight bins, according to the literature [22]. Thus, each bin contains a range of 32 density values. A density maximum was defined as an area with density values in the two highest density bins of the densitogram. To allow comparison of the individual subchondral bone density distributions, a 30 × 30 unit grid was projected over the densitogram of the proximal and distal views of the trochlear ridges. The grid edges were positioned so that the entire joint surface fit within the grid. The number of units in each grid was kept the same to standardize the coordinates of the density maxima. The x- and y-coordinates (Fig. 4) were used to describe the location of the density maxima on the joint surface.
In addition, the size of the maximum was described as the ratio of the area of the density maximum to the joint surface area of the proximal and distal views, respectively, and was defined as the maximum area ratio (MAR).
MAR = number of pixels of the density maximum ÷ number of pixels of the total joint surface

The use of MAR allows a relative comparison between individuals and accounts for size differences.
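A minimal sketch of this quantification, under our own naming (the rescaling anchor points are an assumption; the paper does not state whether the full HU window or the observed range was mapped to 0-255):

```python
import numpy as np

def maximum_area_ratio(mip: np.ndarray, joint_mask: np.ndarray) -> float:
    """Compute MAR: pixels in the two highest of eight density bins
    divided by all pixels of the joint surface."""
    surface = mip[joint_mask]
    # Rescale HU values on the surface to 8-bit (0-255) ...
    scaled = 255.0 * (surface - surface.min()) / (surface.max() - surface.min())
    # ... and split equally into eight bins of 32 values each (0..7).
    bins = (scaled // 32).clip(max=7).astype(int)
    # Density maximum: pixels falling in the two highest bins (6 and 7).
    return np.count_nonzero(bins >= 6) / surface.size
```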
Statistics
Using commercially available software (SPSS Statistics 22), the locations of the density maxima and the MAR were compared between left and right limbs and between dogs. Data were evaluated using a Student's t-test and ANOVA (with Bonferroni post-hoc tests), and significance was set at p < 0.05.
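For illustration, these comparisons can be reproduced with SciPy along the following lines (toy coordinate data; the study itself used SPSS):

```python
from scipy import stats

# Toy x-coordinates of the density maximum on the 30 x 30 grid.
left_limbs = [12, 13, 12, 14, 13, 12, 13, 14, 12, 13]
right_limbs = [13, 12, 13, 13, 14, 12, 13, 13, 12, 14]

# Left vs right limbs: Student's t-test, significance at p < 0.05.
t_stat, p_side = stats.ttest_ind(left_limbs, right_limbs)

# Between dogs: one-way ANOVA; Bonferroni-corrected pairwise tests
# would follow, dividing alpha by the number of comparisons.
per_dog = [[12, 13], [13, 12], [12, 13], [14, 13], [13, 12],
           [12, 13], [13, 14], [14, 13], [12, 12], [13, 14]]
f_stat, p_dogs = stats.f_oneway(*per_dog)
```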
Regional variation of subchondral bone density
The proximal and distal reconstructions provided full visualization of the subchondral bone, with a small visual overlap in the transitional area (proximodorsal area) (Fig. 2). The subchondral bone density distribution showed considerable regional differences in both the proximal and the distal views.
Differences in subchondral bone density distribution between medial and lateral trochlear ridge
In general, the lateral trochlear ridge had a higher apparent density in comparison to the medial trochlear ridge, as illustrated by the colour map (Fig. 3). The medial and lateral trochlear ridges showed a distinctly different density pattern.
The medial trochlear ridge had a density maximum in its proximal part. More distally, density values were lower. In 80% of the joints (n = 16), an additional focal density maximum was present at the most distal part of the trochlear ridge.
On the lateral trochlear ridge, the density maximum was found at the distal part of the trochlear ridge at the level where the medial trochlear ridge shows an area of low density. This density maximum was larger (Table 1) and showed a larger variety in shape than the maximum on the medial trochlear ridge. The density maxima on medial and lateral trochlear ridges were located adjacent to the medial and lateral border of the ridge respectively.
Quantification of density maxima
The location of the density maxima on the standardized grid is displayed in a summary view for both views of the talus (Fig. 5). The density maxima clearly display a very similar distribution in all dogs. No significant differences in the coordinates were found between left and right joints (p-values .607 and .540) or between different dogs (p-values .755 and .367).
Comparison of MAR
There was no significant difference in the MAR between left and right (Table 1). Between dogs, there was no significant difference in the MAR in the proximal view, whereas there was a significant difference in the distal view.
Discussion
This study describes the subchondral bone density distribution of the talus in a group of healthy Labrador Retrievers, using conventional CT data and CTOAM. In addition to a description and visual representation of the subchondral bone density distribution, the density maxima were described using a standardised grid overlay, and the maximum area ratio (MAR) was calculated. Previous studies in humans have shown regional subchondral bone density variations in many different joints [11,12,23], but studies in dogs have been limited to the elbow and stifle joints [24][25][26]. In this study, considerable regional differences of subchondral bone density were found in the convex surface of the talus, articulating with the distal tibia.
The density distribution of the trochlear ridges of the proximal talus is characterized by two density maxima. One is located at the proximal part of the medial trochlear ridge and the other one is located more distally on the lateral trochlear ridge. In addition, the apparent density of the lateral trochlear ridge is higher than the apparent density of the medial trochlear ridge. A possible explanation for this is the fact that the lateral trochlear ridge in the dog is more pronounced and is more likely to endure increased loads during gait. Geometry plays a major role in the development of subchondral bone density patterns, as it determines the magnitude and direction of the dynamic loads, which in turn will guide the modelling process, leading to morphological adaptations [3,27], which is in this case an increase in apparent density.
Both the location of the density maxima and the MAR showed no significant differences between left and right limbs. A recent study described asymmetry in limb and joint mechanics in orthopaedically sound Labrador Retrievers [28]. Mechanical dominance has been described in various species, and in dogs right hind limb dominance appears to be most common [29]. These conclusions are based on the calculation of the total support moment of the limbs, and showed that the tarsal joint moment was significantly larger on the dominant side. Mechanical dominance was not evaluated using gait analysis in the dogs used in this study. Based on our findings, we assume that the dogs used in this study have a symmetrical gait, or that the differences in case of asymmetry due to hind limb dominance did not significantly affect the subchondral bone density distribution. Whether or not hind limb dominance in dogs has an influence on subchondral bone density is a very interesting topic, and is the subject of further research.
On the proximal view there was no significant difference found for the MAR between dogs, whereas on the distal view there was a significant difference for the MAR. As mentioned above, the maximum on the lateral trochlear ridge was located distally, so it was visualised best on the distal view, and showed more variety in shape compared to the maximum on the medial trochlear ridge. This explains the difference in MAR between dogs in the distal view. A possible explanation is that the proximal part of the medial trochlear ridge is subjected to more homogeneous loading. Another possibility, that is likely to play a simultaneous role, is that the force-transmitting area of the medial trochlear ridge is much more constant between dogs, whereas for the lateral trochlear ridge this can vary more between dogs.
A possible drawback of CTOAM for the evaluation of subchondral bone density is that the density distribution of a 3D volume (the voxels) is displayed in 2D (pixels). Because the density is evaluated over the thickness of the subchondral bone plate, perpendicular to the line of sight on the joint surface, this will cause no problems on flat articular surfaces. On more curved articular surfaces, the use of multiple views is necessary to evaluate the subchondral bone density distribution.
Differences in the size of the area of maximum density can be caused by absolute size differences (i.e. a larger or smaller talus), but in this study this effect will be very minimal since all dogs were Labrador Retrievers of approximately the same size, weight, and age. Another reason is differences in scanning parameters, specifically the size of the field of view (FOV). Pixel size depends on the size of the scanned object and the FOV used for the scan. Since we consistently used a FOV of 512 × 512, this effect will be minimal due to the standardised position of the joint and the minimal size differences between the dogs used in this study. The use of the MAR allows a relative comparison of the area of maximum density, accounting for the above confounders when using absolute size values.
When considering joint loading and joint congruency, another important factor is the joint cartilage. Joint cartilage has the important biomechanical role of providing an even distribution of the joint loading over the articular surface [30]. Thicker cartilage is found in places with higher biomechanical loads. A study by Brunnberg et al. supports our conclusion that the lateral trochlear ridge is most likely subjected to higher loads: the cartilage of the lateral trochlear ridge is significantly thicker than the cartilage at the medial trochlear ridge [31].
The location of the density maximum on the medial trochlear ridge is the same location where the majority of MTRT-OC lesions are found [14]. Repetitive loading above the bone modelling threshold can cause accumulation of microdamage in the bone [32]. Areas with increased subchondral bone density, and thus increased loading conditions, are more likely to be subjected to loading causing microdamage as well. On the talus, these areas subjected to high loading coincide with the location of MTRT-OC lesions. Thus, this study supports previous suggestions that repetitive microdamage [33] is an important factor in the development of OC, although more research is necessary to elucidate the exact pathophysiology.
Lesions on the lateral trochlear ridge (LTRT-OC lesions) are larger than MTRT-OC lesions and have a larger variation in size [14]. Interestingly, the subchondral bone density maximum on the lateral trochlear ridge is also larger and shows more variation compared to that on the medial trochlear ridge, with a distribution similar to that of the OC lesions.
However, changes in subchondral bone density can be either a cause or an effect of OC lesions. A local increase in subchondral bone density, as is the case at the level of a subchondral bone density maximum, may increase the discrepancy between the biomechanical properties of two articulating surfaces. In humans, this mismatch has been suggested to contribute to the development of OC lesions [33,34].
Conclusion
This study shows a distinct pattern of subchondral bone density in the talus of healthy, adult Labrador Retrievers. This pattern, or density distribution, provides more information on the biomechanical aspects of the tarsocrural joint and the morphological adaptations under normal joint loading conditions. The influence of altered joint kinematics, bone geometry, and leg conformation on the subchondral bone density distribution remains a subject for further research.
Although the evaluation of the subchondral bone density distribution pattern supports previous suggestions on the role of joint biomechanics in the development of tarsocrural OC, more research is needed to determine cause and effect. Therefore, research should focus on early stages of OC lesions and systematically review all factors contributing to the biomechanical joint loading.
In the field of veterinary biomechanics, CTOAM could provide new insights into physiological joint loading distribution and its alterations in pathological conditions. The technique can be used in vivo, in patient populations, and to evaluate temporal changes, for instance following orthopaedic surgery. This implies significant advantages compared to more traditional and invasive techniques used to evaluate joint loading.
Competing interests
The authors declare they have no financial or personal relationships with other people or organisations that could inappropriately influence their work.
Authors' contributions
WD carried out the data collection and data processing and drafted the manuscript. MMG participated in the data processing and study design. IJ and JVS participated in the study design and helped to draft the manuscript. HvB and IG participated in the conception and design of the study and the statistical analysis, as well as drafting the manuscript. All authors read and approved the final manuscript.
Delayed internal pancreatic fistula with pancreatic pleural effusion postsplenectomy
Abstract
The occurrence of pancreatic pleural effusion, secondary to an internal pancreatic fistula, is a rare clinical syndrome and diagnosis is often missed. The key to the diagnosis is a dramatically elevated pleural fluid amylase. This pancreatic pleural effusion is also called a pancreatic pleural fistula. It is characterized by profuse pleural fluid and has a tendency to recur. Here we report a case of delayed internal pancreatic fistula with pancreatic pleural effusion emerging after splenectomy. From the treatment of this case, we conclude that the symptoms and signs of a subphrenic effusion are often obscure; abdominal computed tomography may be required to look for occult, intra-abdominal infection; and active conservative treatment should be carried out in the early period of this complication to reduce the need for endoscopy or surgery.
INTRODUCTION
A pancreatic fistula is a common complication after pancreatic surgery, trauma, and inflammation. However, the emergence of a delayed postoperative internal pancreatic fistula with pancreatic pleural effusion is still relatively rare. Here we report such a case after splenectomy.
CASE REPORT
A 52-year-old man was admitted to our department for hepatic cirrhosis with splenomegaly and hypersplenism. Physical examination showed a smooth, hard spleen palpated under the left rib margin. Laboratory examinations showed no obvious abnormality except on routine blood examination (white blood cell count, hemoglobin, and platelet count were 1.90 × 10⁹/L, 65 g/L, and 24 × 10⁹/L, respectively). Abdominal computed tomography (CT) showed hepatic cirrhosis and splenomegaly. Endoscopic examination showed mild esophageal varicose veins without signs of bleeding. Thus, splenectomy was conducted, and the splenic bed was sewn up with a 4-0 Prolene suture to cover the rough surface of the splenic bed tissue and to prevent subphrenic infection. During the operation, we examined the diaphragm and the tail of the pancreas carefully and found no obvious injury. A drainage tube was placed in the left subphrenic fossa. The operation was successful.
During the first 4 d after surgery, the patient recovered smoothly. Amylase in the drainage fluid was normal on the 4th day postoperatively, and the drainage tube was removed the following day. Subsequently, however, a fever of unknown origin occurred and fluctuated between 37.5℃ and 40℃ over the following days, without other abnormal symptoms and signs. Ultrasound examination of the abdomen, including the portal venous system, a chest X-ray, and blood cultures were then performed to determine the cause, but no obvious positive results were found initially. We had initially considered the cause to be splenic fever.
The patient's condition gradually worsened. Dyspnea and acute heart failure occurred, but it was not until the 17th day postoperatively that a left pleural effusion was found on the chest X-ray film (Figure 1A), and an encapsulated left subphrenic effusion of about 16 cm × 9 cm was revealed by abdominal CT (Figure 1B). Immediately, abdominal paracentesis and thoracocentesis under ultrasound guidance were conducted, and slightly turbid alutaceous liquid was drained. Amylase values of the protein-rich fluid from the peritoneal cavity and the thoracic cavity were significantly elevated at 19,202 IU/L and 17,531 IU/L, respectively. Over the following 20 d, more than 2000 mL of sterile fluid was drained from the peritoneal cavity and the thoracic cavity (Figure 1C). The patient gradually recovered.
DISCUSSION
The occurrence of pancreatic pleural effusion, secondary to internal pancreatic fistula, is a rare clinical syndrome and diagnosis is, therefore, often missed. The fluid accumulation is attributed to disruption of the pancreatic duct or to rupture of a pseudocyst. The key to the diagnosis is a dramatically elevated pleural fluid amylase. Effusions in association with acute pancreatitis, esophageal perforation, and thoracic malignancy are important to consider in the differential diagnosis of an elevated pleural fluid amylase but are usually easy to exclude.
The pancreatic duct disruption can also develop posteriorly. Extravasated fluid travels in a cephalad direction through the retroperitoneum to reach the thoracic cavity, or flows via the lymphatic system and stomata [1][2][3] of the diaphragm into the pleural cavity. Stomata in the peritoneum covering the inferior surface of the diaphragm were first described by von Recklinghausen in 1863. These stomata communicate with lymphatic vessels within the diaphragm. This pancreatic pleural effusion is also called a pancreatic pleural fistula, according to Michael [4]. Pancreatic pleural effusions are typically large and have a tendency to recur. This is in contrast to sympathetic effusions without significantly elevated amylase, which occur in the setting of acute pancreatitis or secondary to subphrenic abscesses and tend to be small and self-limiting.
In this case, we supposed that the internal pancreatic fistula arose as a result of posterior pancreatic duct rupture. Because the minor leak was encapsulated by surrounding tissues, the pancreatic leakage was not obvious initially, but it became significant when the inflammation regressed, so a delayed internal pancreatic fistula presented. Because of the short duration of the complication and the rapid recovery, CT failed to show the pancreatic pleural fistula, and endoscopic retrograde cholangiopancreatography [5][6][7] examination was also not considered necessary. An internal pancreatic fistula with pleural effusion can usually be managed nonoperatively by percutaneous drainage, and reoperation is rarely required [8,9].
From the treatment of this case, we have come to some important conclusions: (1) a delayed internal pancreatic fistula can occur postsplenectomy; (2) patients who continue to have a fever and slow clinical progress may require CT of the abdomen to look for occult, intra-abdominal infection accounting for the fever [10] ; and (3) active conservative treatment should be carried out in the early period of this complication to reduce the need for endoscopy or surgery.
Clinical Outcomes of Pencil Beam Scanning Proton Therapy in Locally Advanced Non-Small Cell Lung Cancer: Propensity Score Analysis
Simple Summary: We analyzed the oncologic outcomes and toxicities after intensity-modulated radiation therapy (IMRT) or pencil beam scanning proton therapy (PBSPT) in patients with locally advanced non-small cell lung cancer (NSCLC) treated with concurrent chemoradiation therapy. Due to an imbalance in baseline characteristics between IMRT and PBSPT, we used propensity score-based statistical analysis. Regarding radiation therapy planning, PBSPT exhibited superior sparing of the lung, heart, and spinal cord compared to intensity-modulated (photon) radiotherapy in patients with advanced NSCLC. However, PBSPT resulted in a higher incidence of grade 3 or higher dermatitis and esophagitis compared to IMRT. Despite poorer baseline lung function, PBSPT demonstrated a rate of symptomatic radiation pneumonitis comparable to that of IMRT. PBSPT could be an effective and safe treatment technique with comparable locoregional control.

Abstract: This study compared the efficacy and safety of pencil beam scanning proton therapy (PBSPT) versus intensity-modulated (photon) radiotherapy (IMRT) in patients with stage III non-small cell lung cancer (NSCLC). We retrospectively reviewed 219 patients with stage III NSCLC who received definitive concurrent chemoradiotherapy between November 2016 and December 2018. Twenty-five patients (11.4%) underwent PBSPT (23 with single-field optimization) and 194 patients (88.6%) underwent IMRT. Rates of locoregional control (LRC), overall survival, and acute/late toxicities were compared between the groups using propensity score-adjusted analyses. Patients treated with PBSPT were older (median: 67 vs. 62 years) and had worse pulmonary function at baseline (both FEV1 and DLCO) compared to those treated with IMRT. With comparable target coverage, PBSPT exhibited superior sparing of the lung, heart, and spinal cord from radiation exposure compared to IMRT. At a median follow-up of 21.7 (interquartile range: 16.8-26.8) months, the 2-year LRC rates were 72.1% and 84.1% in the IMRT and PBSPT groups, respectively (p = 0.287). The rates of grade ≥ 3 esophagitis were 8.2% and 20.0% after IMRT and PBSPT (p = 0.073), respectively, while the corresponding rates of grade ≥ 2 radiation pneumonitis were 28.9% and 16.0%, respectively (p = 0.263). PBSPT appears to be an effective and safe treatment technique even for patients with poor lung function, and it does not jeopardize LRC.
Introduction
The mainstay treatment for locally advanced non-small cell lung cancer (NSCLC) is concurrent chemoradiation therapy (CCRT), with a median survival of 29 months [1]. Escalating the radiation therapy (RT) dose to improve locoregional control (LRC) [2] is often limited by the close proximity of critical normal organs to the target volume.
Beyond local control, radiation-related toxicities during RT can reduce patients' quality of life and treatment compliance, and some toxicities may be lethal [3]. Beginning typically in the second or third week of CCRT, acute RT-induced esophagitis occurs in 4-18% of patients, interfering with appropriate nutritional support and inducing long-lasting dysphagia [4,5]. After several months, some patients experience symptomatic radiation pneumonitis (RP), which may affect both quality of life and survival [6]. Owing to promising results from systemic treatments, concerns regarding late-onset toxicity (i.e., cardiotoxicity) have also recently increased [6,7]. Therefore, meticulous RT planning is needed to maximize the therapeutic ratio.
Given its physical advantages over photon RT, proton beam therapy (PBT) is expected to increase the therapeutic ratio by delivering a higher dose to the tumor while sparing normal organs. Along with early dosimetric studies [8,9], both retrospective and prospective clinical studies have reported promising results regarding efficacy and safety [10][11][12][13][14]. However, a recent randomized trial comparing PBT with passive scattering to intensity-modulated radiation therapy (IMRT) demonstrated that PBT did not confer a clinical benefit in RP or LRC over IMRT [15]. Nevertheless, technical advances in proton planning for intensity modulation, as well as a change in the delivery form from wobbling to scanning, are expected to provide further improvement in normal organ sparing and, ultimately, dose escalation to the tumor [16]. Although several early reports of pencil beam scanning proton therapy (PBSPT) showed promising results, these were mainly focused on planning results. There is no randomized trial comparing PBSPT and IMRT in locally advanced NSCLC [17][18][19][20][21].
Herein, we retrospectively reviewed patients with stage III NSCLC treated with CCRT using PBSPT and compared them with those treated with CCRT using IMRT in terms of planning and clinical outcomes.
Patient Population
After approval from the institutional review board of Samsung Medical Center (No. 2020-01-034), we identified 283 patients with locally advanced NSCLC treated with CCRT between November 2016 and December 2018. Patients were excluded if they underwent PBSPT in combination with IMRT (n = 12), did not complete RT (n = 13), or if follow-up details were missing (n = 13). Ultimately, we retrospectively reviewed the medical records of 219 patients: 194 patients were treated with IMRT (IMRT group) and 25 patients were treated with PBSPT (PBSPT group). Informed consent was waived due to the retrospective nature of this study, and the study was performed in accordance with the provisions of the Declaration of Helsinki and Good Clinical Practice guidelines.
Using the Vmax 22 system (SensorMedics, Yorba Linda, CA, USA), spirometric analysis and diffusing capacity of the lungs for carbon monoxide (DLCO) values were assessed according to the American Thoracic Society/European Respiratory Society criteria. After obtaining absolute values of forced expiratory volume in one second (FEV1) and DLCO, the percentage of the predicted values for FEV1 and DLCO was calculated based on a representative Korean population [22]. Moderately low FEV1 and DLCO were defined as 50% ≤ FEV1 < 70% predicted and 40% ≤ DLCO < 60% predicted; severely low FEV1 and DLCO were defined as FEV1 < 50% and DLCO < 40% predicted [23].
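The FEV1/DLCO categories above translate directly into a small classifier; the following sketch uses our own function and label names:

```python
def classify_pft(fev1_pct_pred: float, dlco_pct_pred: float) -> dict:
    """Grade lung function by the percent-predicted cutoffs given above."""
    def grade(value: float, severe_below: float, moderate_below: float) -> str:
        if value < severe_below:
            return "severely low"
        if value < moderate_below:
            return "moderately low"
        return "not low"
    return {
        "FEV1": grade(fev1_pct_pred, 50.0, 70.0),  # severe < 50, moderate 50-70
        "DLCO": grade(dlco_pct_pred, 40.0, 60.0),  # severe < 40, moderate 40-60
    }

print(classify_pft(48.0, 55.0))
# {'FEV1': 'severely low', 'DLCO': 'moderately low'}
```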
Radiation Therapy
Based on all available clinical information, the gross tumor volume (GTV) was delineated on the average-intensity projection images reconstructed from ten-breathing-phase, four-dimensional computed tomography (CT) scans. All patients underwent 18F-fluorodeoxyglucose positron emission tomography/computed tomography for determining the GTV and detecting distant metastasis at the time of diagnosis. The internal target volume was established by expanding the GTV to include the GTV for each phase of the breathing cycle. The clinical target volume (CTV) was generated by extending a 5 mm margin from the GTV. We did not routinely perform elective node irradiation of uninvolved lymph node regions. For the planning target volume (PTV), a uniform 5 mm margin was placed on the CTV to account for setup uncertainty. A median total dose of 66 Gy (range, 59.4-74.0) with a fractional dose of 2.2 Gy (range, 2-2.2) was prescribed to the PTV. Specifically, 66 Gy in 30 fractions was the most frequently adopted dose schedule, used in 152 patients (69.4%), followed by 66 Gy in 33 fractions (n = 34, 15.5%), 70 Gy in 35 fractions (n = 13, 5.9%), 60 Gy in 30 fractions (n = 9, 4.1%), 70.4 Gy in 32 fractions (n = 4, 1.8%), 70 Gy in 35 fractions (n = 4, 1.8%), and 74 Gy in 37 fractions (n = 3, 1.4%). For all patients, 97% of the prescribed dose was required to encompass at least 95% of the CTV. The planning requirements for organs at risk were as follows: both lungs V5GyE < 65% (where VXXGyE is defined as the percentage of the volume receiving more than XX GyE), V10GyE < 45%, V20GyE < 35%, mean lung dose < 20 GyE, heart V40GyE < 50%, esophagus maximum dose (Dmax) < 66 GyE and V45GyE < 50%, and spinal cord Dmax < 45 GyE.
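A hedged sketch of auditing a plan's dose-volume metrics against these organ-at-risk limits follows; the metric keys and the example values are hypothetical, not data from the study:

```python
# Planning limits quoted above; V_XX is the percentage of the organ
# receiving more than XX GyE, Dmax and mean doses are in GyE.
LIMITS = {
    ("lungs", "V5"): 65.0, ("lungs", "V10"): 45.0, ("lungs", "V20"): 35.0,
    ("lungs", "mean"): 20.0,
    ("heart", "V40"): 50.0,
    ("esophagus", "Dmax"): 66.0, ("esophagus", "V45"): 50.0,
    ("spinal_cord", "Dmax"): 45.0,
}

def violated(achieved: dict) -> list:
    """List the constraints a plan's DVH metrics fail to satisfy."""
    return [key for key, limit in LIMITS.items()
            if key in achieved and achieved[key] >= limit]

example_plan = {("lungs", "V5"): 58.2, ("lungs", "V20"): 36.1,
                ("spinal_cord", "Dmax"): 31.0}
print(violated(example_plan))  # [('lungs', 'V20')]
```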
The relative biological effectiveness (RBE) for PBSPT was set at a fixed value of 1.1. Most PBSPT plans were calculated with the pencil beam algorithm (n = 21, 84.0%), with the remainder using a Monte Carlo algorithm (n = 4, 16.0%). The pencil beam algorithm, which treats the patient as a stack of semi-infinite layers, models the treatment beam as a summation of narrow pencil beams that interact with the medium to deliver energy [16]. In addition, single-field optimization (n = 23, 92.0%) with 2 fields (n = 18, 72.0%) rather than 3 fields was most often utilized. All fields were delivered on the same day. For all patients in the PBSPT group, the continuous line-scanning method was used; detailed information on beam delivery and the treatment procedure has been described previously [24]. Briefly, all PBSPT plans were robustness-optimized plans using minimax optimization [25]. Setup and range uncertainties were set at 5 mm and ±3.5%, respectively.
Daily image guidance was performed with kilovoltage or megavoltage cone beam CT for IMRT, and with orthogonal kilovoltage X-ray images and/or cone beam CT provided by VeriSuite (MedCom, Darmstadt, Germany) before each treatment session.
For additional dosimetric comparisons, matched IMRT plans were generated for the corresponding 25 patients in the PBSPT group. Matched IMRT plans were calculated with volumetric modulated arc therapy and generated under the condition of achieving acceptable target coverage.
Chemotherapy
Overall, 206 patients (94.1%) were treated with the paclitaxel/cisplatin regimen, 8 with paclitaxel/carboplatin, 4 with etoposide/cisplatin, and 1 with cisplatin alone. The paclitaxel/cisplatin or carboplatin regimen consisted of 6 cycles of weekly intravenous paclitaxel (50 mg/m²) with cisplatin (25 mg/m²) or carboplatin (area under the curve of 1.5). The first dose of chemotherapy was delivered on the first day of RT, and additional consolidation chemotherapy was performed following CCRT. Twenty patients without epidermal growth factor receptor mutation received consolidative durvalumab (a monoclonal PD-L1 antibody) after CCRT: 1 and 19 patients in the PBSPT and IMRT groups, respectively.
Surveillance
Once the planned treatment was completed, patients underwent chest CT, pulmonary function testing (PFT), and/or positron emission tomography/CT at 1 month after the planned CCRT, as well as every 3 months for the first 3 years and every 6 months thereafter. Local failure was defined as recurrence within the PTV; recurrent regional nodes outside the PTV were considered regional failures. Recurrences beyond the primary and regional sites were denoted as distant failures. The acute and late toxicity events noted during and after RT were assessed by the treating physicians based on the Common Terminology Criteria for Adverse Events (CTCAE, version 5.0). Absolute changes in PFT were calculated based on pre-treatment PFT values for available patients. Major cardiac adverse events were defined based on AHA/ACC guidelines: cardiac death, acute myocardial infarction, hospitalization for unstable angina, and heart failure [26].
Statistical Analysis
Differences in continuous variables between the two groups were analyzed with Student's t-test (normally distributed data) and the Mann-Whitney U test (non-normally distributed data). The Chi-square test or Fisher's exact test was used to evaluate differences in categorical variables between the two groups. The Wilcoxon signed rank test for nonparametric paired data was used to compare the PBSPT and paired IMRT plans. All events (including locoregional failure and death) were measured from the first day of CCRT to the time of the event. Kaplan-Meier analysis was used to estimate LRC and OS. Multivariable analyses of LRC and OS were performed using Cox regression analysis; logistic regression analysis was used to identify the prognostic factors for grade ≥ 3 esophagitis and grade ≥ 2 RP. Factors with p < 0.10 in univariable analysis were further assessed in multivariable analysis. Propensity scores were calculated using a multivariate logistic regression model including sex (female vs. male), age (continuous), pathology (adenocarcinoma vs. non-adenocarcinoma), T stage (T1-2 vs. T3-4), N stage (N2 vs. N3), predicted value of FEV1 (continuous), predicted value of DLCO (continuous), and PTV (continuous). Each patient was then assigned an estimated propensity score based on the patient's baseline characteristics. First, patients were matched using 1:2 optimal matching with a caliper distance set at 0.05 standard deviations of the logit of the propensity scores. Second, stabilized inverse probability of treatment weighting (IPTW) was used to adjust for any covariable imbalance. The standardized mean difference was used to evaluate the balance of covariate distribution between the 2 groups. A two-tailed p < 0.05 was considered statistically significant. All statistical analyses were performed using IBM SPSS Statistics version 25 (IBM Corp., Armonk, NY, USA) and R (version 3.6.3; R Foundation for Statistical Computing, Vienna, Austria).
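As an illustration of the propensity-score and stabilized-IPTW machinery described above (toy data and scikit-learn rather than the SPSS/R stack actually used; variable names are ours):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 219
X = rng.normal(size=(n, 4))     # stand-ins for age, FEV1, DLCO, PTV, ...
pbspt = rng.random(n) < 0.114   # ~11.4% treated with PBSPT, as in the cohort

# Propensity score: modeled probability of receiving PBSPT given covariates.
ps = LogisticRegression().fit(X, pbspt).predict_proba(X)[:, 1]

# Stabilized inverse probability of treatment weights: the marginal
# treatment prevalence in the numerator damps extreme weights.
p_marginal = pbspt.mean()
iptw = np.where(pbspt, p_marginal / ps, (1 - p_marginal) / (1 - ps))
```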
Baseline Characteristics
In the studied patient population, the median age was 62 (interquartile range, 57-68) years, and most patients (97.7%) had a good performance status of ECOG PS 0-1 (Table 1). Patients in the PBSPT group were older (median, 67 vs. 62 years, p = 0.003) and had less frequent contralateral mediastinal lymph node involvement (20.0% vs. 43.3%, p = 0.044) than those in the IMRT group. The median FEV1 (percentage predicted) and DLCO (percentage predicted) values in the PBSPT group were significantly lower than those in the IMRT group (both p < 0.05, Figure S1). In addition, the prevalence of severely low FEV1 and of moderately to severely low DLCO in the PBSPT group (20.0% and 40.0%, respectively) was higher than in the IMRT group (4.6% and 15.5%, respectively).
Radiation Therapy Characteristics
There was no significant difference in total prescription dose or target volumes between the two groups (Figure 1, Table S1). Although PBSPT plans covered 95% of the PTV with a lower dose than IMRT plans (94.8% vs. 97.1%, p = 0.013), both plans encompassed 100% of the CTV within the acceptable institutional criteria, with 96.2% and 96.7% of the prescribed dose, respectively (p = 0.314). Regarding both lungs, PBSPT significantly reduced not only the average dose but also V5GyE, V10GyE, and V20GyE (all p < 0.001). Although the Dmax of the esophagus in the IMRT group was higher than that in the PBSPT group (71.2 vs. 69.7 GyE, p = 0.042), V45GyE, V55GyE, and V66GyE were comparable between the two groups (all p > 0.05). Plans in the PBSPT group also showed a lower mean heart dose (7.7 vs. 12.8 GyE, p = 0.006) and a lower Dmax of the spinal cord (31.0 vs. 42.6 GyE, p < 0.001) than those in the IMRT group.
Figure 1. Dose-volume parameters for target volume and normal organs in patients treated with IMRT and PBSPT. Data are presented as the median and interquartile range.
Oncologic Outcomes
With a median follow-up of 21.7 (interquartile range, 16.8-26.8) months for the entire cohort, the rates of 2-year LRC and OS were 72.8% and 82.9%, respectively. By the last follow-up, 50 patients (22.8%) had experienced locoregional failure and 117 (53.4%) had developed distant metastases. The rates of 2-year LRC were 72.1% and 84.1% in the IMRT and PBSPT groups, respectively (p = 0.287, Figure 2A). Patients in the PBSPT group showed lower OS rates than those in the IMRT group (2-year OS: 74.9% vs. 84.4%, p = 0.061, Figure 2B). Multivariable analysis revealed that treatment modality had little impact on both LRC and OS; only GTV ≥ 100 cc showed borderline significance for LRC (HR 1.74, p = 0.069, Table 2).
Toxicity
The toxicities reported in this study are summarized in Table 3. Twenty-four (11.0%) grade 3 or higher acute toxic events were observed in the entire cohort. All 21 patients with grade 3 or higher esophagitis were hospitalized with temporary total parenteral nutrition, and five required tube feeding. There was a trend toward more frequent grade ≥ 3 esophagitis with PBSPT (20.0% vs. 8.2%, p = 0.073, Figure 3); grade ≥ 3 radiation dermatitis was more frequently observed in the PBSPT group than in the IMRT group (8.0% vs. 0.5%, p = 0.035). PBSPT was associated with frequent grade ≥ 3 esophagitis in multivariable analysis (odds ratio (OR) 3.68, Table 4). Additionally, esophagus V45GyE ≥ 35% was also related to the incidence of grade ≥ 3 esophagitis in multivariable analysis. Two patients in the IMRT group experienced a tracheoesophageal fistula requiring surgical intervention. Sixty patients (27.4%) experienced symptomatic RP, with a comparable incidence between the IMRT and PBSPT groups (28.9% vs. 16.0%, p = 0.263). Multivariable analysis showed that both-lung V10GyE ≥ 45% significantly increased grade ≥ 2 RP (OR 4.37, Table 4). Differences in the decline of pulmonary function between the IMRT and PBSPT groups were not statistically significant throughout the follow-up period (Figure S2). Regarding cardiac adverse events, there was no significant difference between the IMRT and PBSPT groups (9.3% vs. 8.0%, Table 4).
The foreparts of the parentheses were set as the reference group. Abbreviations: OR, odds ratio; CI, confidence interval; RT, radiation therapy; IMRT, intensity-modulated radiation therapy; PBSPT, pencil beam scanning proton therapy; SCF, supraclavicular fossa; FEV1, forced expiratory volume in 1 s; DLCO, diffusing capacity of the lung for carbon monoxide; PTV, planning target volume; GyE, gray relative biologic effectiveness; BED10, biological effective dose with α/β of 10; VXXGyE, volume receiving more than XX GyE.
Dosimetric Comparison for Matched IMRT and PBSPT Plans
After propensity score matching, 50 patients from the IMRT group and 25 patients from the PBSPT group with well-balanced baseline characteristics were included in the following analysis. In addition, baseline characteristics, except for T stage, were adequately balanced after IPTW (Table 5).
Values are presented as number of patients (%) or mean (standard deviation). Abbreviations: IPTW, inverse probability of treatment weighting; IMRT, intensity-modulated radiation therapy; PBSPT, pencil beam scanning proton therapy; SMD, standardized mean difference; ADC, adenocarcinoma; EGFR, epidermal growth factor receptor; FEV1, forced expiratory volume in 1 s; DLCO, diffusing capacity of the lung for carbon monoxide.
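For readers unfamiliar with the weighting step, IPTW can be sketched as follows. This is a generic illustration, not the authors' analysis code: the treatment column and covariate names are hypothetical, and balance would be judged by recomputing SMDs, mirroring the SMD columns reported in Table 5.

```python
# Minimal IPTW sketch (illustrative only; column names are hypothetical).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def iptw_weights(df: pd.DataFrame, treatment: str, covariates: list) -> pd.Series:
    """Estimate propensity scores and return inverse-probability weights."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[treatment])
    ps = model.predict_proba(df[covariates])[:, 1]  # P(treated | covariates)
    t = df[treatment].to_numpy()
    # Treated patients are weighted by 1/ps, controls by 1/(1 - ps).
    return pd.Series(np.where(t == 1, 1.0 / ps, 1.0 / (1.0 - ps)), index=df.index)

def standardized_mean_difference(df, treatment, col, weights=None):
    """SMD between groups; values < 0.1 are usually taken as well balanced."""
    w = weights if weights is not None else pd.Series(1.0, index=df.index)
    t, c = df[treatment] == 1, df[treatment] == 0
    m1 = np.average(df.loc[t, col], weights=w[t])
    m0 = np.average(df.loc[c, col], weights=w[c])
    v1 = np.average((df.loc[t, col] - m1) ** 2, weights=w[t])
    v0 = np.average((df.loc[c, col] - m0) ** 2, weights=w[c])
    return (m1 - m0) / np.sqrt((v1 + v0) / 2.0)
```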
A dose-volume histogram for the average of matched IMRT and PBSPT plans is shown in Figure 4. Although both plans achieved similar CTV/PTV coverage under the institutional dose constraints, PBSPT significantly reduced the volume of the lungs, heart, and spinal cord exposed to low-to-high doses of radiation (Table S2).
Oncologic and Toxicity Outcomes for Propensity Score-Adjusted Patients
After PSM and IPTW, PBSPT showed comparable LRC and OS outcomes (Figure 5, Table 6). Regarding toxicity, PBSPT was associated with frequent grade ≥ 3 esophagitis after IPTW (OR 5.33, Table 6). In addition, PBSPT showed a borderline benefit over IMRT for grade 2 or more RP in propensity score-adjusted analyses (Table 6).
Discussion
Given the recent advances in scanning-beam delivery for PBT, the current results support the early clinical feasibility of PBSPT in definitive CCRT for NSCLC. PBSPT plans significantly reduced the radiation dose to the lung and spinal cord with comparable target coverage. PBSPT achieved survival outcomes and rates of symptomatic RP similar to those of IMRT, even though its recipients were relatively elderly and had poorer pulmonary function. However, PBSPT was associated with frequent severe acute esophagitis, even with comparable dosimetric results.
Several plan comparison studies demonstrated that PBT with passive scattering could reduce the irradiated volume of the lung, esophagus, and spinal cord by up to 30% compared to three-dimensional conformal RT or even IMRT [8,9]. However, a recent randomized trial of PBT with passive scattering showed no benefit over IMRT in the dose to normal lung (mean, 16.1 vs. 16.6 Gy), resulting in no significant difference in grade ≥ 3 RP (10.5% vs. 6.5%) [15]. These conflicting results might stem from the technical limitations of three-dimensional PBT with passive scattering. Recent planning studies of PBSPT showed significant improvements in sparing normal organs [17-20]. In the current study, both the dosimetric results from the entire cohort and the head-to-head plan comparison consistently showed reduced doses to the normal lung, spinal cord, and heart. Further technical advancements in scanning performance and calculation algorithms could increase both the robustness and the advantage in normal tissue sparing [16].
The incidence of grade ≥ 2 (24.0%) or grade ≥ 3 (8.0%) acute skin toxicity after PBSPT in the current study was higher than historical rates of <5% for severe dermatitis (wet desquamation). Relatively high rates of grade ≥ 3 dermatitis after PBT for NSCLC have been reported, ranging from 6% to 24% [10-12,14]. Concerns remain that PBT increases dermatitis owing to the higher entry dose of the spread-out Bragg peak of protons or the limited number of beams (2-3 per patient) used to minimize the radiation dose to the normal lung [14,27,28]. Regarding skin dose constraints, an additional cost function in PBSPT planning could reduce potentially severe dermatitis [29]. The 20% incidence of grade ≥ 3 esophagitis seems comparable to the 18% obtained in a meta-analysis of historical randomized trials of photon RT [4], while it appears more frequent than the 13.2% reported for the IMRT group in the secondary analysis of the RTOG 0617 trial [30]. A recent systematic review identified a dose-volume relationship for esophagitis with V60GyE, and the current study demonstrated that V45GyE ≥ 35% is associated with esophagitis [5]. However, analyses adjusted for dose-volume parameters and IPTW demonstrated that PBSPT could increase severe esophagitis despite the comparable dose distribution. We can speculate on several possible reasons for frequent severe esophagitis, including robust optimization methods and RBE. Robust optimization with additional margins to compensate for dose uncertainty could broaden the distal falloff and stiffen the target coverage, resulting in unexpected esophageal exposure depending on the beam angle. Although a fixed value of 1.1 is commonly used as the RBE for PBT, the RBE itself varies with depth, with the highest values observed near the distal edge of the beam [31].
However, further investigation regarding these technical and biological issues related to toxicities is required.
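For orientation, the dose conventions referred to above can be written out explicitly. These are standard radiobiology formulas rather than expressions taken from the study, and the worked example uses an illustrative fractionation, not the trial's schedule:

```latex
% Fixed-RBE proton dose convention (RBE = 1.1):
D_{\mathrm{GyE}} = 1.1 \times D_{\mathrm{physical}}

% Biologically effective dose for n fractions of size d, with \alpha/\beta = 10~Gy:
\mathrm{BED}_{10} = n \, d \left( 1 + \frac{d}{10~\mathrm{Gy}} \right)

% Illustrative example: 66 Gy delivered in 33 fractions of 2 Gy gives
\mathrm{BED}_{10} = 66 \left( 1 + \tfrac{2}{10} \right) = 79.2~\mathrm{Gy}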
Since the incidence of grade ≥ 4 RP after photon RT was relatively high, ranging from 18.2% to 35.7% in patients with poor lung function [32,33], physicians were forced to compromise CTV/PTV coverage or reduce the total dose to prevent severe lung toxicity in such patients [34]. In the current study, although the PBSPT group had poorer baseline pulmonary function, there was no grade ≥ 3 RP in the PBSPT group, and the pattern of changes in PFT was similar to that in the IMRT group. Since both the mean lung dose and lung V5GyE-20GyE have been reported to be associated with RP [6], the dose reductions achieved with PBSPT might help limit RP development. Despite the absence of long-term follow-up for cardiotoxicity in the current study, PBSPT could potentially reduce radiation-induced cardiac toxic events through the reduced mean dose and V30GyE-50GyE of the heart [6]. Atkins et al. suggested stringent avoidance of cardiac radiation dose based on an increased risk of cardiotoxicity and mortality with increasing cardiac dose in patients with locally advanced NSCLC [7]. A recent post hoc modeling study of the RTOG 0617 trial also showed a relationship between higher doses to cardiopulmonary substructures and unexpected mortality [35]. The reduced dose to these structures might translate into the improved survival of patients undergoing PBT compared to IMRT observed in the National Cancer Database; this potential benefit could be maximized when adopting PBSPT [36]. Although there was no difference in major cardiac events between the PBSPT and IMRT groups in the current study, longer follow-up of the current cohort and a more recent randomized trial would further validate the reduced incidence of cardiotoxicity after PBT and demonstrate a survival benefit.
There are some potential drawbacks to utilizing PBSPT. First, the overall costs of PBT easily exceed those of photon RT, even after toxicity rate-adjusted analysis [37]. However, PBT increased quality-adjusted life-years by 0.549 and 0.452 compared to 3D CRT and IMRT, respectively [38]. The ongoing RTOG 1308 trial (ClinicalTrials.gov: NCT01993810) will address cost-effectiveness. A further cost-effectiveness analysis in patients with poor lung function should be considered. Second, although 31 centers offer PBT in the United States [39], the availability of PBT in other regions is limited by its higher infrastructure cost relative to photon RT. Therefore, clear evidence demonstrating an obvious clinical benefit is needed to justify the implementation of PBSPT in CCRT for NSCLC.
This study has several limitations. First, although propensity score-adjusted analyses were undertaken, major confounders could not be fully adjusted for because of the small sample size of the PBSPT group. Second, as a retrospective study, the physician-assessed toxicities should be interpreted with caution. However, our analysis was strengthened by the use of PBSPT and by the inclusion of patients with poor pulmonary function. A recent randomized trial only included patients with FEV1 > 1.0 L, whereas 40% of patients in our PBSPT group had moderately to severely impaired pulmonary function. In addition, a thorough individualized plan analysis, not only for the entire patient cohort but also for the matched patients in the PBSPT group, could provide more detailed information. The relatively lower OS of the PBSPT group compared with the IMRT group might stem from the difference in age distribution; there was no difference in OS outcomes after PSM and IPTW analyses.
In conclusion, the present real-world clinical data suggest a possible benefit of PBSPT, with tolerable toxicities and comparable survival outcomes. Further randomized trials might be warranted to endorse PBSPT as an alternative treatment option for locally advanced NSCLC.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/cancers13143497/s1. Figure S1: Baseline pulmonary function tests according to treatment modality: forced expiratory volume in 1 s (A); diffusing capacity of the lung for carbon monoxide (B). Figure S2: Changes in pulmonary function test after treatment: forced expiratory volume in 1 s (A); diffusing capacity of the lung for carbon monoxide (B). Table S1: Dose-volume parameters according to the treatment modality. Table S2: Comparison of target coverage and normal tissue sparing with matched intensity-modulated radiation therapy (IMRT) and pencil beam scanning proton therapy (PBSPT) plans.
Informed Consent Statement: Patient consent was waived due to the retrospective nature of the current study.
Data Availability Statement: The datasets generated and analyzed during the current study are not publicly available due to institutional data protection law and the confidentiality of patient data but are available from the corresponding author upon reasonable request.
Conflicts of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Differences in Thematic Map Reading by Students and Their Geography Teacher
A school world atlas is likely the first systematic cartographic product which students encounter in their lives. However, only a few empirical studies have analysed school atlases in the context of map reading and learning geographical curricula. The present paper describes an eye-tracking study conducted on 30 grammar school students and their geography teacher. The study explored ten tasks using thematic world maps contained in the Czech school world atlas. Three research questions were posed: (i) Are students able to learn using these particular types of maps? (ii) Have the cartographic visualization methods in the school atlas been adequately selected? (iii) Does the teacher read the maps in the same manner as students? The results proved that the students were sufficiently able to learn using thematic maps. The average correctness of their answers exceeded 70%. However, the results highlighted several types of cartographic visualization methods which students found difficult to read. Most of the difficulties arose from map symbols being poorly legible. The most problematic task was estimating the value of the phenomenon from the symbol size legend. Finally, the difference between the students’ and teacher’s manner of reading maps in each task was analysed qualitatively and then quantitatively by applying two different scanpath comparison methods. The study revealed that the geography teacher applied a different method than her students. She avoided looking at the map legend and solved the task using her knowledge.
Map Reading
According to Pravda [1] and Pravda and Kusendová [2], reading a map (perceiving and understanding map content) is an essential indicator of intelligence in modern humans. Reading a map consists of perceiving the map, using the map's legend, and understanding the map's content. Reading a map is therefore a process of understanding its content through knowledge of the map's language and methods of its use. Reading a map would be meaningless if it were not followed by the use of information acquired from the map, such as standard navigation of the terrain, simple map measurements, or generating information which enhances human knowledge. The image-map reading study by Vondráková and Voženílek [3] yielded specific findings on map readers' preferences: the initially preferred image map was very often of little use, and users ultimately chose a map that they had subjectively evaluated as one of the worst but that was much more suitable for solving the task. The literature review on map reading found that the vast majority of sources focused on reading topographic maps and navigating the terrain. In the present paper, the authors understand "map reading" differently: it is not wayfinding, but rather how maps are used to obtain desired information.

School world atlases are teaching materials used by students from the sixth grade of elementary school (11 years) onwards to high school students (19 years). School world atlases have many complementary atlases which focus mainly on the Czech Republic, individual continents, and topics such as world energy or finance. All three of the above-mentioned publishers provide their school world atlases in digital versions.
Evaluation of Thematic Maps and School World Atlases
Only a few studies have focused on the educational aspects of map reading. Brychtova, et al. [17] indicate that a visually appealing map usually achieves higher preferences and popularity with the public, especially students. Carswell [18] analysed children's abilities in topographic map reading, summarizing the findings that teachers overestimate success in teaching map reading skills while also underestimating the map-reading abilities of children. van Dijk, et al. [19] and Schee and Dijk [20] tested the ability of students to use different types of map skills. Their studies revealed that giving students an opportunity to determine their own sequence of performing map assignments is a recommended strategy. Hanus and Marada [21] compared the curricular documents of different countries with special emphasis on map skills. The results showed that the potential of geography in map skills in the Czech curricula is not sufficiently fulfilled.
Havelková and Hanus [9] conducted research examining the effect of different (thematic) mapping methods. The results indicated that students experienced problems with maps using quantitative mapping methods. Students were more successful in tasks where qualitative or both qualitative and quantitative mapping methods were used. Working with thematic maps was also evaluated by Reyes Nuñez, et al. [22] on a sample of students from Argentina and Hungary. The evaluation was supplemented by a questionnaire for teachers. Thematic (political) atlases are used by teachers in Argentina, but mainly physical atlases are used in Hungary. Reyes Nuñez and Juhász [23] also analysed the effectiveness of cartograms. The results showed that geometric area cartograms were more suitable than geographic area cartograms for use in school cartography. The effectiveness of an area cartogram in visualizing spatial data was also evaluated by Sun and Li [24]. Their analysis showed that a pseudo-cartogram is the most preferred technique, and a Dorling cartogram is the least preferred.
Kubíček, et al. [25] measured response times and error rates in map-reading tasks relative to different variations of linear feature visualizations. The results confirmed that colour hue and size were more efficient than shape and colour value.
Gołębiowska [26] aimed to understand how the types of legend layout in thematic maps functioned during map reading. Study participants were asked to perform two sets of tasks using two thematic maps with different legend layouts. Three types of legend layouts were used in the study: list legend, grouped legend, and natural legend. The use of a natural legend required the most time, as this type of legend is not very common, and participants had to concentrate on understanding the legend principle. The arrangement of symbols in a grouped legend reduced the load on working memory. Pétera, et al. [27] conducted an empirical study exploring map drawing skills. Their empirical research showed less developed competence models of map drawing as opposed to map reading.
However, none of the above-mentioned studies tested maps from atlases. Słomska [28] created an overview of different types of maps used as stimuli in cartographical empirical research. The study summarized 103 empirical studies from four cartographic journals. The study substantiated that only one study [26] used maps from atlases. The atlas used in the study [29] was an interactive digital atlas which displayed a broad range of thematic data for the USA. This type of information is completely different material than a school world atlas.
One of the most comprehensive studies of school atlases was conducted by Bugdayci and Bildirici [16], who evaluated 22 atlases used in geography education and social studies. The authors examined generalization, symbology, fonts, colours, and common map elements. The final chapter of the study described the map legend, geographic location, and map scale. It also contained some examples and suggestions for improving the cartographic design of maps contained in the atlases.
Voženílek, et al. [30] investigated the awareness of Czech students of the symbol sets used in 11 different world school atlases. The research applied methods for literature search, comparison of atlases, online surveys, and statistical processing. The results confirmed that Czech students were able to understand the map symbols and cartographic methods used in European school atlases. These results were consistent with Michaelidou, et al. [31], who analysed the ability of elementary school children to analyse the map content of different thematic maps.
Blaha [32] highlighted the importance of aesthetics in the user-friendliness of cartographic products and proposed evaluation methods for map aesthetics, such as scoring, classification, expert estimations, and surveys. The scoring system was used in another study [33] on two Czech school world atlases and explored aesthetics and user-friendliness in maps.
Peresadko and Baltabaeva [34] evaluated the school atlases currently used in Turkmenistan. They indicated that the atlases were outdated and contained a large number of cartographic inaccuracies. The authors justified the need to create a new school atlas for Turkmenistan. Gómez Solórzano, et al. [35] conducted a survey of 50 respondents to compare printed and digital atlases. Using five tasks, the authors measured correctness, reaction time, satisfaction, perception, and emotions. The research showed that printed and digital atlases complement each other. Usability metrics varied slightly; those related to correctness and reaction time were higher for the digital atlas, while those related to satisfaction and perception were higher for the printed atlas.
Song, et al. [36] analysed the main factors affecting the design of symbols in the National Economic Atlas of China. Zhang and Chen [37] undertook an evaluation of the structure, content and design of the Shanxi Province tourist atlas.
The Use of Eye-Tracking
The first decade of the twenty-first century opened a new stage in perceptual research. This stage could be described as cognitive-digital since this type of research is based on computer software and deals with the cognitive aspects of map perception [38]. According to Rohrer [39], one of the most objective methods in evaluating (cartographic) stimuli is eye-tracking since it shows "what people do" instead of "what people say". Popelka and Vozenilek [40] described the common aspects of eye-tracking and space-time-cube and have encouraged joint studies in cartographic research.
Dong, et al. [41] applied eye-tracking in geographic education to evaluate the impact of geography courses in students' abilities to work with maps. However, the map used in the experiment was not from a school atlas but a terrain visualization. Biland and Çöltekin [42] used a similar type of stimuli. Havelková and Gołębiowska [43] evaluated thematic maps using eye-tracking. In their study, the stimuli were created by the authors but selected according to a content analysis of school geography atlases and textbooks.
Kiik, et al. [44] compared four different designs of area symbols in thematic maps in a study to determine whether area symbols are suitable in identifying the extent of polygons while not distracting the map reader. The best results were achieved with hatches. Popelka and Dolezalova [45] used three-dimensional thematic maps as stimuli in eye-tracking experiments. Brychtova and Vondrákova [46] evaluated sequential colour schemes used in thematic maps. Göbel, et al. [45] used eye-tracking to study the adaptation of legend content using gaze-based methods. The study showed that legend content changed according to gaze. The symbol types which had been fixated on previously were drawn with full opacity in the map's legend, while all others were reduced.
The present study compares the map reading strategy of students and their teacher, which focuses on the comparison between experts and novices. This issue was previously analysed in a topic related to cartography, for example, in a study by Burian, et al. [46]. The study evaluated the interpretation of four different urban plans and compared students and experts in urban planning. The results showed that the experts made a relatively large number of mistakes since they were too self-confident and did not look into the map legend. This might signify a parallel with the teacher's strategy in this study, who also avoided using a map legend and answered directly. The difference between the students' and teacher's map-reading strategy might be also studied as the singular value decomposition similarity between scanpath sets [47].
Anderson and Leinhardt [48] asked participants to draw the shortest distance between two locations as they would appear on the earth's surface (using a map with Mercator projection). The results showed that geography experts performed significantly better than novices and pre-service teachers. Their results contrast with the results of the present study, but it is necessary to acknowledge that the tasks were completely different. The participants in the study of Anderson and Leinhardt [48] were expected to use the rules according to their knowledge. In the present study, the participants were instructed to use the map to solve the task.
The difference between experts and novices in reading planimetric and contour maps was analysed by Thorndyke and Stasz [49] and Gilhooly, et al. [50]. More recently, the perception of interactive and static 3D maps was investigated by Herman, et al. [51]. In contrast to our findings, the authors uncovered a statistically significant difference from an accuracy point of view when experts were more correct.
Other cartographic studies have been conducted by Ooms, et al. [52] and Ooms, et al. [53]. The participants in these studies worked with different types of maps. Their results indicated that an expert's process of interpretation was much quicker than a novice's. The present research found that the teacher's trial duration was shorter in some tasks but longer in others.
To the best of our knowledge, no previous eye-tracking study has evaluated students working with school atlases.
Motivation and Research Questions
According to cartographic communication models [6,[54][55][56], maps are products which aim to assist people in understanding the world. Generally, the first systematic cartographic product young people encounter in their lives is a school world atlas.
The most commonly used school world atlas in the Czech Republic is published by Kartografie PRAHA. The authors of the present study surveyed 600 Czech geography teachers with an online questionnaire, discovering that most of these teachers (94%) used this atlas in their geography lessons. One of the survey questions asked about the role of the school world atlas in teaching. On a 10-point Likert scale, 10 indicated the most important role; the median value of responses to this question was 9. Most of the teachers worked with the atlas every lesson (57%), while 29% of them worked with the atlas every second lesson. Only 3% of the teachers used the atlas less than every third lesson. These findings verify that the school world atlas is crucial material in geography teaching.
The atlas from Kartografie PRAHA contains 162 maps. Of these, 127 are thematic and 35 are generally geographic. From the 50 world maps contained in the atlas, 9 thematic maps were selected for the experiment. The selection criteria and detailed characteristics of the experiment's maps are described in Section 2.2. (Stimuli and Tasks).
These atlases should help students understand the natural and socio-economic environment of the Earth. School atlases should therefore be comprehensible, well-arranged, and clear and easy to use by students and their teachers.
Studies which examine school atlas map reading can reveal whether students are able to retrieve the information presented on these maps and can also potentially detect problems in map design. However, the process of understanding maps in school atlases has not yet been fully explored. As described above, no study from 103 cartographic user studies has focused on school world atlases, or atlases in general [28]. The objective of the present study is to begin to fill this gap.
The vast majority of cartographic communication models describe the process between the cartographer and map reader. However, these models do not describe whether readers interpret maps in the same manner. A comparison of map reading strategies between students and teachers might unveil a source of problems some students have with map reading. If the teacher and students read maps differently, educational processes might also be affected and disrupted.
The present paper describes an eye-tracking study using thematic maps from a school world atlas as stimuli. Participants solved several tasks using these maps. The tasks applied in the experiment fell into the categories of map reading (symbol detection, legend comprehension) and map analysis (extraction of phenomenon location and distribution, comparison of spatial phenomena distribution).
The main aim of the experiment in the present study was to analyse how students and their teacher read maps in a school world atlas. The task in the experiment was to locate a particular object on a thematic map. The present paper addresses three research questions:
Q1: Are students able to learn with thematic maps and legends from a school world atlas by finding information and searching for specific objects on a map?
Q2: Are the cartographic methods used in the school world atlas comprehensible to students?
Q3: Do students read the thematic maps from the school world atlas in the same manner as their teacher?
Experiment Design
At the beginning of the testing session, the purpose of the experiment was explained to participants, and basic information about the principle of eye-tracking technology was provided. The experiment was designed in the GazePoint Analysis software. A scheme of the study and experiment is given in Figure 1.
The experiment was calibrated before testing commenced, and the results were then inspected by the technician responsible for testing. After successful calibration, a task with no stimulus was given to each respondent. The respondents received an indefinite time to read and remember the task. A fixation cross was displayed for 600 ms between the task and the map stimuli to calibrate the origin of the eye-movement trajectory to the centre of the screen. The stimulus was displayed for a maximum of 60 s, and respondents were required to find particular objects on the map. Stimuli were presented in a fixed order, from simplest to more complex (according to the authors' opinions). In most of the tasks, the participants responded by clicking the mouse directly on the map. Only Task10 required the participants to find specific information on the map and say it aloud. The technician registered these answers.
Stimuli and Tasks
All stimuli used in the study were obtained from the electronic version of the School Atlas of the World published by Kartografie PRAHA (4th edition) [57]. All the maps are identical to the print version of the atlas. Nine thematic world maps with different topics were selected for the experiment.
Because of the monitor's aspect ratio and resolution (4:3; 1280 × 1024), some maps required cropping for better legibility. No relevant parts or information that could affect the results of the experiment were removed. Each map always contained at least a map field and legend to preserve as much of the map as possible concerning legibility. All maps used as stimuli differ in visualization methods, data type (qualitative/quantitative) and style of legend. All maps are shown in Figure 2. Full-resolution previews are included in the Supplementary Materials.

The three research questions determined strategic selection of map-stimuli and compilation of tasks. Q1 asks whether students are able to learn with thematic maps in a school world atlas. The atlas was thoroughly inspected for its coverage of a wide range of geography curriculum topics. Accordingly, various world maps focusing on different geographical themes (vegetation zones, urbanisation, geology, economy, etc.) were selected for the eye-tracking experiment. Q2 probes the comprehension of cartographic methods. Maps which applied different cartographic methods (graduated symbols, choropleth maps, area symbols, etc.) were therefore selected. Q3 investigates the similarities and differences in the map-reading strategies of the students and their teacher and builds on concepts of Q1 and Q2.
The tasks were formulated for each map stimuli according to the type of information displayed, visualization method, and legend style and related directly to the research question of whether respondents could read thematic maps and use the legend to search for information and find a specific object on the map.
The maps used in the experiment fell into several types according to the type of data which they displayed: qualitative (Map01, Map05, and Map06), quantitative (Map02, Map04, and Map09), and both qualitative and quantitative (Map03, Map07, Map08, and Map10).
In the maps which displayed qualitative data, the assigned task was straightforward. Respondents were required to find an object in the legend and then identify it on the map. The task was to identify all the areas with temperate deciduous forests in Map01, a convergent plane boundary in Map05, and places where iron ore was mined in Map06.
Quantitative data were visualized using a choropleth map (Map02), graduated symbol map (Map04), and flow map (Map09). The task in the choropleth map (Map02) was to identify all the countries with less than 20% urban populations. The task in both diagram maps was to find urban agglomeration and shipping routes with certain properties.
The remaining maps contained both qualitative and quantitative information. All of these maps included proportional symbols, and areas were displayed as either choropleth maps or area symbols. The tasks required participants to work with the diagrams and identify the country with the highest proportion of potatoes in total calorie consumption (Map03), countries with specific GDP (Map07), and countries with higher imports than exports (Map08). In the task for Map08, the answer could be The three research questions determined strategic selection of map-stimuli and compilation of tasks. Q1 asks whether students are able to learn with thematic maps in a school world atlas. The atlas was thoroughly inspected for its coverage of a wide range of geography curriculum topics. Accordingly, various world maps focusing on different geographical themes (vegetation zones, urbanisation, geology, economy, etc.) were selected for the eye-tracking experiment. Q2 probes the comprehension of cartographic methods. Maps which applied different cartographic methods (graduated symbols, choropleth maps, area symbols, etc.) were therefore selected. Q3 investigates the similarities and differences in the map-reading strategies of the students and their teacher and builds on concepts of Q1 and Q2.
The tasks were formulated for each map stimuli according to the type of information displayed, visualization method, and legend style and related directly to the research question of whether respondents could read thematic maps and use the legend to search for information and find a specific object on the map.
The maps used in the experiment fell into several types according to the type of data which they displayed: qualitative (Map01, Map05, and Map06), quantitative (Map02, Map04, and Map09), and both qualitative and quantitative (Map03, Map07, Map08, and Map10).
In the maps which displayed qualitative data, the assigned task was straightforward. Respondents were required to find an object in the legend and then identify it on the map. The task was to identify all the areas with temperate deciduous forests in Map01, a convergent plane boundary in Map05, and places where iron ore was mined in Map06.
Quantitative data were visualized using a choropleth map (Map02), graduated symbol map (Map04), and flow map (Map09). The task in the choropleth map (Map02) was to identify all the countries with less than 20% urban populations. The task in both diagram maps was to find urban agglomeration and shipping routes with certain properties.
The remaining maps contained both qualitative and quantitative information. All of these maps included proportional symbols, and areas were displayed as either choropleth maps or area symbols. The tasks required participants to work with the diagrams and identify the country with the highest proportion of potatoes in total calorie consumption (Map03), countries with specific GDP (Map07), and countries with higher imports than exports (Map08). In the task for Map08, the answer could be discovered from the graduated symbols (showing values for imports and exports) or using area symbols (chorochromatic map showing trade balance). In the final task (Map10), participants estimated Brazil's exports according to a value scale.
Because the atlas is in the Czech language, all of the tasks were also formulated in Czech. Translations of these tasks are given in Table 1.

Table 1. List of the experiment's tasks (translated from Czech to English).
Task01: Identify all areas with temperate deciduous forests.
Task02: Identify all countries with less than 20% urban populations.
Task03: Identify the country with the highest proportion of potatoes in total calorie consumption.
Task04: Identify urban agglomerations with more than 20 million inhabitants in North America, Central America, and South America.
Task05: Identify a convergent plate boundary.
Task06: Identify a place on every continent where iron ore is mined.
Task07: Identify three countries with a total GDP of approximately USD 2500 billion.
Task08: Identify three countries whose imports exceed exports.
Task09: Identify three shipping routes with an annual capacity under 100 million tonnes.
Task10: Estimate Brazil's export volume in billions of USD.
Participants
Forty-one third-grade students (~18 years) from a Czech grammar school participated in the experiment. Testing was conducted in two stages over two weeks at the end of 2018. The students' geography teacher also attended the testing, in the first half of 2019. For all of the participants, the testing in this experiment was their first experience with eye-tracking technology. Some of them may have felt nervous, which may have affected the data quality. Eleven of the 41 students were removed from the dataset because of the inaccuracy of the device or problems with calibration. This data pre-processing stage is described later. The data recorded for 30 students (8 male and 22 female) and one geography teacher (female) were eventually included in the analysis.
The teacher who participated in the research has been teaching geography for over 30 years at grammar school with more than 400 students. She uses the school world atlas from Kartografie PRAHA (version from 2006) and older atlases (from around 1989) in her classes. Her students use atlases every lesson, primarily with general geographic maps and less with thematic maps (climate, hydrology, lithosphere, biosphere, pedosphere, etc.).
Apparatus
Eye trajectories were measured using three GazePoint 3 eye-trackers operated by three technicians. The GazePoint eye-tracker is an inexpensive device similar to TheEyeTribe tracker and Tobii EyeX. The accuracy and precision of all these low-cost eye-trackers have been previously tested in the studies by Dalmaijer [58], Ooms, et al. [59] and Popelka, et al. [60]. Janthanasub and Meesad [61] tested the accuracy of the GazePoint 3 eye-tracker in their study. The results showed it was suitable for use in research. GazePoint 3 has also been used in studies in the field of neurosciences [62], marketing [63], mathematics [64], physics [65], kinesiology, and sports science [66]. A comprehensive list of publications concerning the GazePoint tracker is available at https://www.gazept.com/meet-the-team/publications/.
Data Pre-Processing
Recorded eye-movement data were pre-processed and validated before analysis. Recording was conducted in the classroom, and the students had no previous experience with eye-tracking testing.
Data were recorded using GazePoint Analysis software. However, the application's capabilities for data analysis are minimal. The data were therefore converted into a format readable by the open-source application OGAMA [67] using the online tool at http://eyetracking.upol.cz/gp2ogama. The OGAMA application allows the ratio of samples with coordinates 0;0 (upper-left corner of the stimulus) to be calculated. These samples represent data loss caused by eye-blinking and lost signals. The ratio of samples recorded off-screen was another factor which required checking because of the GazePoint eye-tracker. In the extreme cases, the ratio exceeded 60%. This data had to be removed from the dataset.
The ratios of data loss (α) and off-screen samples (β) for each participant and stimulus are summarized in Figure 3; the values in the table represent the ratio of data loss α (left) or off-screen samples β (right) for each participant, instances where α or β ≥ 10% are highlighted in red, and the TOTAL column contains the number of cases where the values exceeded 10%. In the next step, 11 students with more than three problematic stimuli were excluded from further analysis. The remaining participants were renamed consecutively S01-S30. The students' geography teacher also engaged in the testing.
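The screening rule above translates directly into code. A minimal sketch, assuming GazePoint sample exports with hypothetical x/y pixel columns and using the thresholds stated in the text (10% per stimulus, more than three flagged stimuli per participant):

```python
# Data-quality screening sketch for GazePoint exports (column names hypothetical).
import pandas as pd

SCREEN_W, SCREEN_H = 1280, 1024  # monitor resolution used in the experiment

def quality_ratios(samples: pd.DataFrame):
    """Return (alpha, beta): shares of lost and off-screen samples."""
    # alpha: samples at (0, 0) mark blinks / lost signal in this export format
    alpha = ((samples["x"] == 0) & (samples["y"] == 0)).mean()
    on_screen = samples["x"].between(0, SCREEN_W) & samples["y"].between(0, SCREEN_H)
    beta = 1.0 - on_screen.mean()
    return alpha, beta

def exclude_participants(data):
    """data[participant][stimulus] -> samples; exclude if >3 stimuli exceed 10%."""
    excluded = []
    for participant, stimuli in data.items():
        flagged = sum(1 for s in stimuli.values() if max(quality_ratios(s)) >= 0.10)
        if flagged > 3:
            excluded.append(participant)
    return excluded
```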
Methods of Analyses
Fixations and saccades were identified before the analyses. The fixation detection algorithm (I-DT) thresholds were set to 20 pixels (distance between points) and 5 (minimum number of samples). The optimal fixation detection algorithm is described by Popelka [68] in more detail.
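For reference, a minimal sketch of the I-DT algorithm as configured above (20 px dispersion window, minimum of 5 samples) is given below. This is an illustrative re-implementation, not OGAMA's actual code, and the point-list input format is an assumption.

```python
# Minimal I-DT (dispersion-threshold) fixation detection sketch.
# Thresholds follow the text: max dispersion 20 px, minimum 5 samples.
from typing import List, Tuple

def idt_fixations(
    points: List[Tuple[float, float]],
    max_dispersion: float = 20.0,
    min_samples: int = 5,
) -> List[Tuple[int, int, float, float]]:
    """Return fixations as (start_idx, end_idx, centroid_x, centroid_y)."""
    def dispersion(window):
        xs, ys = zip(*window)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i = [], 0
    while i + min_samples <= len(points):
        j = i + min_samples
        if dispersion(points[i:j]) <= max_dispersion:
            # Grow the window while dispersion stays under the threshold.
            while j < len(points) and dispersion(points[i:j + 1]) <= max_dispersion:
                j += 1
            xs, ys = zip(*points[i:j])
            fixations.append((i, j - 1, sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j
        else:
            i += 1
    return fixations
```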
Q1 (students' ability to learn with thematic maps) was analysed according to the correctness of the responses, and trial duration was analysed as a metric indicating the time required for respondents to give an answer. Participants marked their answers on the stimuli using mouse clicks. The online tool http://eyetracking.upol.cz/gp2vanalytics/ converted data from GazePoint Analysis into V-Analytics [69]. V-Analytics was used to visualize mouse clicks and can also be applied to eyemovement data analysis [70]. The Kruskal-Wallis post hoc Nemenyi test was applied to statistically evaluate the recorded data in RStudio at a significance level of 0.05.
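The statistical step can be reproduced outside RStudio as well. The following Python sketch assumes the scipy and scikit-posthocs packages are available; the duration values are placeholders, not the study's data.

```python
# Kruskal-Wallis with post hoc Nemenyi test at alpha = 0.05 (a Python
# equivalent of the RStudio analysis; the data below are placeholders).
import scipy.stats as stats
import scikit_posthocs as sp

# trial_durations[k] holds per-participant trial durations (s) for task k+1
trial_durations = [
    [42.1, 38.7, 55.0],  # Task01 (placeholder values)
    [19.3, 17.8, 22.4],  # Task02 (placeholder values)
    [30.2, 28.9, 35.6],  # Task03 (placeholder values)
]

h_stat, p_value = stats.kruskal(*trial_durations)
if p_value < 0.05:
    # Pairwise post hoc comparison; returns a matrix of p-values.
    pairwise_p = sp.posthoc_nemenyi(trial_durations)
    print(pairwise_p)
```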
Q2 (comprehension of cartographic methods) was answered based on a visual inspection of recorded scanpaths and data visualization using sequence charts. The results obtained in Q1 (correctness of answers and trial duration) were used for pointing to problematic cartographic tasks. In the next phase, two experimenters analysed eye-movement trajectories (scanpaths) and created sequence charts. From these visualizations, they tried to reveal the reason for low correctness or high trial duration. Typically, the distribution of attention between the map and the legend or focusing on specific parts of the map was analysed.
Sequence charts display the distribution of fixations in predefined areas of interest (AOIs). Participants' eye-movement data are represented with coloured bars, where the colour of each cell in a bar represents one fixation in the particular AOI. Unfortunately, neither OGAMA nor GazePoint Analysis offers this type of visualization; the charts were created manually in MS Excel using the PART function and conditional formatting. Sequence charts for all tasks are available at http://eyetracking.upol.cz/atlases_thematic/SequenceCharts.pdf.
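For readers who prefer a scripted alternative to the Excel workflow, a sequence chart of this kind can also be drawn in a few lines of matplotlib; the AOI labels and fixation sequences below are hypothetical.

```python
# Sequence-chart sketch: one row of coloured cells per participant,
# each cell = one fixation, coloured by the AOI it fell into.
# (Illustrative alternative to the Excel workflow; data are hypothetical.)
import matplotlib.pyplot as plt
import matplotlib.patches as patches

AOI_COLORS = {"map": "tab:green", "legend": "tab:red", "task": "tab:blue"}

# participant -> ordered list of AOIs hit by successive fixations
sequences = {
    "S01": ["task", "map", "legend", "map", "map"],
    "S02": ["task", "legend", "map", "map"],
}

fig, ax = plt.subplots(figsize=(8, 2))
for row, (pid, seq) in enumerate(sequences.items()):
    for col, aoi in enumerate(seq):
        ax.add_patch(patches.Rectangle((col, row), 1, 0.8, color=AOI_COLORS[aoi]))
ax.set_xlim(0, max(len(s) for s in sequences.values()))
ax.set_ylim(0, len(sequences))
ax.set_yticks([r + 0.4 for r in range(len(sequences))])
ax.set_yticklabels(list(sequences.keys()))
ax.set_xlabel("Fixation order")
plt.tight_layout()
plt.show()
```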
Q3 (comparison of difference of map-reading strategies of students and their teacher) addressed an analysis of eye-movement data using two approaches to calculate scanpath similarity. The first approach is based on the string-edit-distance using the ScanGraph tool [71,72], which is designed to process data exported from OGAMA directly. ScanGraph analyses the order of visited AOIs as a sequence of characters and calculates the similarity of these sequences using Levenshtein distance, the Needleman-Wunsch algorithm or Damerau-Levenshtein distance. Individual participants are visualized as nodes in a graph, and ScanGraph looks for cliques in this graph. The cliques represent groups of participants who were similar to each other at least to a specific (user-defined) degree. The second approach in analysing the scanpath similarity is based on the multimatch method introduced by Jarodzka, et al. [73] and Dewhurst, et al. [74]. This method represents scanpaths as mathematical vectors and allows the scanpath to retain a sequence of fixations and saccades and measure similarity using geometry. Multimatch similarity measurements are sensitive to the differences in shape, position, length, direction, and duration between two scanpaths [73]. As the authors of multimatch indicate, the method does have some drawbacks, the most significant being that measurements only compare two scanpaths.
In the present study, this drawback is addressed by using batch computation in a Python-based multimatch alternative called multimatch-gaze [75]. Batch computing was possible in all similarity measurements except duration. In this case, the results were normalized according to the length of the longer scanpath, so it is not possible to compare values for multiple pairs of scanpaths. The results from multimatch-gaze were transformed into separate matrices for each task and each type of similarity (vector, direction, length, position). These matrices can either be imported into ScanGraph for visual analysis or analysed directly (i.e., in MS Excel).
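To make the string-edit-distance idea concrete, the sketch below computes a normalized Levenshtein similarity between two AOI sequences, where each letter encodes the AOI of one fixation. This is a simplified stand-in for ScanGraph, not the tool's actual code.

```python
# Normalized Levenshtein similarity between AOI strings (ScanGraph-style sketch).
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """1.0 = identical AOI sequences, 0.0 = maximally different."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

# Each letter encodes one fixation's AOI, e.g. M = map field, L = legend.
print(similarity("MMLLMM", "MLLM"))  # two participants' scanpaths -> ~0.67
```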
Correctness of Answers-Students' Ability to Learn with Maps (Q1)
In the majority of tasks, participants marked their answers directly on the map using mouse clicks. The correctness of these answers was then determined and used to address the first research question. Participants only estimated a value in Task10 according to the symbol size legend. Figure 4 contains a summary of the answers. The correct answers are highlighted in green, incorrect in red, and partially correct answers (i.e., not all correct countries were marked) in orange. All missing answers were marked as incorrect. Correctness in all the tasks by all participants was summarized: each correct answer was allocated 1 point and each partially correct answer 0.5 points.
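The scoring rule is simple arithmetic; a minimal sketch with hypothetical answer labels:

```python
# Correctness scoring sketch: 1 point per correct answer, 0.5 per partially
# correct, 0 otherwise (the labels per task below are hypothetical).
SCORES = {"correct": 1.0, "partial": 0.5, "incorrect": 0.0, "missing": 0.0}

def correctness_percentage(answers):
    """Average correctness over a list of per-task answer labels."""
    return 100.0 * sum(SCORES[a] for a in answers) / len(answers)

# One participant's ten tasks:
print(correctness_percentage(
    ["correct", "correct", "partial", "incorrect", "correct",
     "correct", "partial", "correct", "correct", "missing"]))  # -> 70.0
```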
The most straightforward tasks were Task02 ("Identify all countries with less than 20% of the urban population") and Task05 ("Identify a convergent plate boundary"), with a correctness of 92%. The most difficult task, however, was Task10, in which participants estimated the volume of Brazil's exports from the symbol size legend. Although the correct answer was USD 250 billion, the responses varied from 3 to 5000. Because of tolerances, responses indicating a value between 200 and 300 were counted as partially correct. Participants also demonstrated problems with Task07, which required them to identify countries with a specific GDP value according to the symbol size legend.
The average correctness of answers across all students was 71%. It can therefore be said that, generally, the students were sufficiently able to read thematic maps and learn from them.
The trial duration of each task was also investigated. The boxplots in Figure 5 chart the data for 30 students. The value for the geography teacher is indicated with a red dot. Statistically significant differences between the tasks according to the Kruskal-Wallis post hoc Nemenyi test are represented using blue lines.
The participants required the most time to solve Task07 and Task10. These two tasks also indicated problems with correctness. A high trial duration value was also observed for Task01. However, participants needed only around 19 s (median) to solve Task02. The trial duration value for this task differed significantly from four other tasks (Task01 (p < 0.001), Task04 (p = 0.004), Task07 (p < 0.001) and Task10 (p < 0.001)).
No clear connection for trial duration between the students and teacher was identified. In some tasks, the teacher was quicker than the students, but for other tasks, the teacher's trial duration was much higher.
Results of Individual Tasks-Comprehension of Cartographic Methods (Q2)
The next step analysed the participants' behaviour in solving individual tasks.
Task01
In the experiment's first task, the participants identified all areas with temperate deciduous forests. It was assumed this task would be very easy for the students, since all that was required was identifying the correct symbol in the legend and recognizing all the areas indicated by this symbol. However, the accuracy of the answers was only 61%: 13 students solved the task correctly and 11 partially. The students marked temperate deciduous forests together with taiga or even subtropical and tropical forests. The reason was probably a poorly distinguishable legend, with all three types of vegetation visualized using very similar symbols (see the bottom-left section of Figure 6). Figure 6 indicates the fixations of participants in grey and the teacher's fixations in red. The answers (clicks) are visualized as blue dots. From the distribution of fixations, it is evident that participants did not focus their attention on the strip with climate belts at the edge of the map field.
The teacher answered partially by clicking on the temperate deciduous forests in Europe and also taiga in Canada. She did not focus her attention on the legend.
The teacher answered partially by clicking on the temperate deciduous forests in Europe and also taiga in Canada. She did not focus her attention on the legend. ISPRS Int. J. Geo-Inf. 2020, 9, x FOR PEER REVIEW 13 of 24 Figure 6. Fixations of students (grey) and their teacher (red), together with answers (blue dots). Similar symbols from the legend are enlarged in the left lower corner.
Task02
In the second task, respondents found and marked all countries on the choropleth map with less than 20% of the urban population. Participants found this task very easy, demonstrating one of the highest accuracies in the entire experiment and requiring the least amount of time to solve the task (19.25 s as can be seen from Figure 5).
The sequence chart in Figure 7 shows the distribution of fixations in the map field (green) and legend (red). Only two students (S13 and S22) answered incorrectly ( Figure 4) since they did not look at the legend at all (see Figure 7).
The correct answer for this task was marking eight countries with the brightest colour. The teacher marked only three of them. As is visible from the sequence chart, the teacher looked at the legend when she began to view the stimulus, although only for a brief moment. She probably indicated countries according to her knowledge from urban geography.
Task03
In the third task, the participants identified the country with the highest proportion of potato consumption according to total calories. The map legend contained three sections, and the participants were required to discover the information from pie charts, where brown depicted potatoes. Each of the participants looked at the legend, and four of them answered incorrectly (S5, S13, S21, and S28).
The teacher looked at the legend only briefly compared to students (9 fixations, while the students' average was more than 21). Her answer was ranked as partially correct since she selected more than one country. This task took her the most time to solve in the whole experiment.
Task04
Task04 required identifying a particular graduated symbol on the map. The task was to identify an urban agglomeration with more than 20 million inhabitants in North America, Central America, and South America. The legend contained three different symbol sizes for urban agglomerations (see Figure 8). The participants looked for the biggest circle on the map. Ten students indicated incorrect answers, and four others were only partially correct. The problem was likely in the difficulty of identifying graduated symbols (Figure 8).
The teacher encountered the above-mentioned problem. From the recordings of her eye-movements, it was evident that she had problems in distinguishing the size of the symbols in North and South America. She spent a considerable time on this task and answered only partially correctly since she mismatched the size of the symbols.
Task05
Task05 was one of the easiest in the whole experiment, with a correctness of 90% as can be seen from Figure 4. Participants were required to identify the convergent plate boundary. The legend contained four different linear symbols for plate boundary types. Only one student responded incorrectly (S13) and was the only one who recorded no fixation on the correct part of the legend (with linear symbols for plate boundaries). This student achieved the worst results in the whole experiment.
The teacher recorded the quickest answer for Task05. Her trial duration of 7.7 s was also quicker than the students' (p = 0.06). The teacher omitted the legend and spontaneously focused her attention on the plate boundaries on the map. Unfortunately, she mismatched the divergent and convergent boundaries, so her answer was incorrect.
Task06
Task06 was also a simple task. The participants identified a place on every continent where iron ore was mined. Iron ore was indicated with a red "Fe" symbol, and many students needed only a few fixations to inspect the legend to find the right symbol. Only two students indicated an incorrect answer. One of them (S17) did not remember the task and searched for a different symbol (oil field).
The teacher again responded according to her knowledge, not according to the map. Although she looked briefly at the legend twice, she did not search for the correct symbol. She marked the countries where iron ore was mined (Canada, South Africa, Sweden, and Brazil), but her clicks were not near the "Fe" symbols.
Task07
Task07 was one of the most complicated in the experiment with correctness of only 35% as can be seen from Figure 4. The participants identified three countries with a total GDP of approximately USD 2500 billion. GDP information was visualized in a proportional pie chart with a logarithmic scale. To find the correct answer, participants had to imagine how large the symbol depicting the value of USD 2500 billion was. This process is indicated in Figure 9.
Participants had difficulties in estimating the pie chart size. Only eight indicated the correct answer. Almost all the pie charts on the map were marked at least once, which may denote that students misunderstood the legend. As evident from the sequence chart in Figure 10, a majority (55%) fixated on the legend (red), and yet they did not respond correctly.
In this task, the teacher used the legend for the first time in the experiment. It took a long time until she oriented herself in the map, but despite this, her trial duration was less than the median value of students. Her answers were recorded as correct.
Task08
Task08 was to identify three countries whose imports exceeded exports. Finding the correct answer was possible in two ways. The first was to search for the area symbols (chorochromatic map), where the information for the trade balance was depicted directly. The second was to use the bar charts (Figure 11) to find the countries where the bar for imports was taller than the one for exports.
Figure 11. Legend for Task08 (translated into English).
Only five participants (S12, S14, S23, S27, and S29) worked with area symbols. The charts were used by 19 other participants, who also indicated the correct answer. These numbers suggest that the task was relatively easy for the participants and was also one of those with low trial duration.
After the experience from the previous task, the teacher looked directly at the legend, and she spent a relatively long time there. She focused on the bar charts in the legend and selected countries accordingly. Her trial duration was slightly less than the median for students. Her answer was also correct.
Task09
Task09 was to identify three shipping routes with an annual capacity below 100 million tonnes. Information about the shipping routes was visualized using graduated linear symbols in blue. The colour was similar to the colour for parallels and meridians, and some participants mismatched these objects. Four participants marked the correct symbol in close proximity to the harbours. Discussion with the students revealed that they marked the lines near the ports to avoid confusion with symbols depicting parallels and meridians.
In general, the task was relatively easy; only four students marked the answer incorrectly. Participant S13 did not look at the legend at all. The teacher behaved similarly, and her response was also incorrect.
Task10
In the final task, participants estimated Brazil's export volume in billions of USD. The map was the same as the map used for Task08. To find the correct answer, participants inspected the bar chart's legend, where 1 mm corresponded to USD 50 billion (Figure 11). This was similar to Task07. The participants fixated mostly on the legend (51%), but only one-quarter of participants indicated a correct answer. Brazil's export value was approximately USD 250 billion, so values 200 and 300 were considered partially correct. The responses varied from 3 to 5000. This wide range suggested that participants were completely lost with this task and that estimating the value caused them difficulties.
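The conversion the task required is itself simple arithmetic once the bar height has been read off the map; a minimal sketch, with the measured bar length as a hypothetical input:

```python
# Sketch: convert a measured bar length into an export value using the
# Task10 legend scale, in which 1 mm corresponds to USD 50 billion.
USD_BILLION_PER_MM = 50.0

def export_value(bar_length_mm: float) -> float:
    """Return the estimated export volume in billions of USD."""
    return bar_length_mm * USD_BILLION_PER_MM

# Brazil's bar is roughly 5 mm tall, giving ~USD 250 billion,
# the value treated as correct in the task.
print(export_value(5.0))  # 250.0
```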
The teacher looked into the legend but did not fixate on Brazil. Thus, she could not estimate the size of the bar chart and her answer was incorrect.
Scanpath Similarity-Difference between Students and Their Teacher (Q3)
The third research question dealt with comparing the strategy used to inspect stimuli between students and their geography teacher. As was described in the previous part of the text, the teacher attempted to solve the tasks mainly using her knowledge, not using the map. The quantitative comparison of the strategies used by the students and the teacher was based on the results of the multimatch-gaze and ScanGraph tool. The similarity of the scanpaths for each pair of participants was evaluated according to four multimatch-gaze metrics (vector, direction, length, and position) and using string-edit-distance (Levenshtein distance) in ScanGraph. The resulting matrices (Figure 12) show the average mutual similarity between students and the average similarity between the teacher and her students. The subtracted average values (∆) indicate whether the teacher applied a unique strategy in inspecting stimuli or used a more conventional approach (similar to students). The higher the value, the more unique the teacher's scanpath, which meant the more dissimilar the map-reading strategy. Values higher than average + standard deviation are highlighted in red.
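As an illustration of the string-edit approach, the similarity of two scanpaths encoded as sequences of visited areas of interest can be sketched as follows; the AOI coding ("M" for the map field, "L" for the legend) and the normalisation by the longer sequence are plausible assumptions and need not match ScanGraph's implementation exactly.

```python
# Sketch: Levenshtein-based similarity of two AOI sequences,
# normalised by the length of the longer sequence.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

student = "MMLLMMMM"   # hypothetical student: map, legend, back to map
teacher = "MMMMMMMM"   # teacher skipping the legend entirely
print(f"similarity = {similarity(student, teacher):.2f}")  # 0.75
```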
Figure 13 reveals the teacher's unique strategy in Task05 from Levenshtein distance and in Task09 from position measurements using multimatch-gaze. In these cases, the similarity between the teacher and students was clearly lower than between the students. Figure 13 depicts these two extreme examples together with Task02, where the differences were minimal. On the left side of the figure, the teacher's scanpath is highlighted using red. The scanpaths of the students are displayed in grey. In Task09 and Task05, the teacher used a different strategy than the students, because she did not look into the legend and focused her attention on different parts of the stimuli than the students. On the other hand, in Task02, the teacher looked at the legend and focused her attention on Africa, where the correct answer was located. The same strategy was used by the students.
The middle part of Figure 13 displays the results of the position measurement calculated in multimatch-gaze and visualized with the ScanGraph tool. Each dot in the graph represents one participant. The participants with a similarity of at least 85% are connected. The teacher is visualized in red. Task09 and Task05 evidently show that the teacher is not connected to the students. By contrast, in Task02, the teacher used a strategy at least 85% similar to 27 students. The section at the right of the figure displays the results of string-edit-distance using Levenshtein distance (similarity greater than 75%) and confirms a similarity in strategy. It was calculated from the sequence of visited areas of interest. In Task09 and Task05, the teacher did not inspect the legend; therefore, the similarity of her strategy towards the students was low. In Task02, the teacher looked at the legend as students did and she had similarity higher than 75% with nine of them.

Figure 13. Comparison of students' and teacher's map-reading strategies. The left column indicates the scanpaths, the middle column shows the ScanGraph visualization of position measurement, and the right column depicts the results of Levenshtein distance (also visualized using ScanGraph). The teacher is in red.
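A sketch of how such a threshold graph can be built is shown below; the networkx library and the similarity values are assumptions for illustration, and an isolated teacher node corresponds to the disconnected red dot described above.

```python
# Sketch: ScanGraph-style graph in which participants are connected
# when their scanpath similarity reaches a threshold (85% here).
import networkx as nx

participants = ["S01", "S02", "S03", "Teacher"]
sim = {  # hypothetical symmetric similarities between pairs
    ("S01", "S02"): 0.91, ("S01", "S03"): 0.88, ("S02", "S03"): 0.86,
    ("S01", "Teacher"): 0.62, ("S02", "Teacher"): 0.59,
    ("S03", "Teacher"): 0.64,
}

G = nx.Graph()
G.add_nodes_from(participants)
for (a, b), s in sim.items():
    if s >= 0.85:
        G.add_edge(a, b, weight=s)

# An isolated teacher node indicates a unique inspection strategy.
print("teacher connections:", list(G.neighbors("Teacher")))  # []
```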
Discussion
The present paper describes an empirical study which evaluates student learning with a school world atlas. The research is one of the first eye-tracking studies using this kind of stimulus.
In designing the present study's experiment, the authors selected ten thematic maps which depicted the entire world. The maps were also selected to include different types of cartographic visualization methods. For use as stimuli, these maps were cropped to preserve the legibility on computer monitors with an aspect ratio of 4:3. Monitors with this aspect ratio were used to ensure good quality in the recorded eye-movements. With wide-screen monitors, the pupils of the eyes might have been obscured by eyelids.
Students had 60 s to solve each task. Sixty seconds was sufficient for most of the participants, and this limit was chosen to prevent students from spending an excessive amount of time on a single task. The correctness of the students' answers was consistent with the work of Havelková and Hanus [9]. They determined that students were more successful in tasks with either qualitative or both qualitative and quantitative cartographic methods.
Students from two third grade grammar school classes (~18 years) were selected as participants. Data for 41 students were recorded, but 11 were excluded from the experiment because of inaccuracies in the eye-tracker. All the students shared the same geography teacher, which allowed a comparison to be made between the teacher's and students' strategies. This determined the total number of study participants, which was limited to the total number of students of both class groups.
Several approaches to data analysis were employed to compare the results of the students and teacher. First, the teacher's eye-movement data was thoroughly inspected and qualitatively described. Two other methods of quantitative similarity calculation were applied. One approach involved a string-edit-distance method which had been previously used in many eye-tracking studies to compare different participant groups (e.g., [76][77][78][79]). Specifically, ScanGraph calculated the similarity of scanpaths according to Levenshtein distance (e.g., [43,80,81]) and visualized the results calculated using the multimatch method, which can only indicate similarity between two scanpaths. The present study used batch calculations to calculate the similarity between all possible pairs of participants, in other words, 961 calculations (31 × 31) for each of the ten stimuli in the experiment. The only problem encountered was with the duration metric, one of the five metrics used in multimatch. The results were normalised by the length of the longer of the two analysed scanpaths. It was complicated to find a solution to this problem, and therefore this metric was excluded from the analysis.
Summarizing the differences between the students and the teacher was based on the average similarity for all students and the average similarity between the teacher and all the students. These two values were then subtracted. A greater difference suggested the greater uniqueness of the teacher's strategy. Although this approach directed the present study to instances when the teacher applied a very different strategy to the students, the authors were aware that using these methods might not be an ideal solution. Using any of the clustering methods to calculate the difference between dissimilarity matrices might be a possible enhancement for future research.
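A minimal sketch of this uniqueness measure, assuming a symmetric 31-by-31 similarity matrix with the teacher occupying the last row and column; the matrix below is randomly generated purely for illustration.

```python
# Sketch: subtract the mean teacher-student similarity from the mean
# student-student similarity; a larger difference means a more unique
# teacher strategy.
import numpy as np

def teacher_uniqueness(sim: np.ndarray) -> float:
    n = sim.shape[0]                   # students occupy indices 0..n-2
    students = sim[:n - 1, :n - 1]
    iu = np.triu_indices(n - 1, k=1)   # upper triangle, no diagonal
    student_mean = students[iu].mean()
    teacher_mean = sim[n - 1, :n - 1].mean()
    return student_mean - teacher_mean

rng = np.random.default_rng(0)         # hypothetical matrix
m = rng.uniform(0.5, 1.0, (31, 31))
m = (m + m.T) / 2                      # make it symmetric
print(f"delta = {teacher_uniqueness(m):.3f}")
```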
Qualitative analysis of the teacher's eye-movements in particular, together with an analysis of her answers, revealed that the teacher used a completely different strategy to solve the tasks. In the majority of cases, the teacher did not look at the legend and attempted to solve the tasks directly. Unfortunately, her answers were very often incorrect. In the discussion after the experiment, the teacher explained that she had a feeling that she should know the correct answers, and therefore she solved the tasks according to her knowledge, not with the aid of the map. This may have been caused by the tasks focusing on topics which were part of the geography curricula. A completely different scenario might occur if a map of an unknown territory or a fictitious map were used. Kulhavy and Stock [82] stated that people do not learn maps in a conceptual vacuum; their map representations are affected by information already retained in memory.
The results of the students' answers showed that they indicated a considerable number of incorrect answers in several tasks. The problems with Task01, Task04, Task08, and Task10 may have been caused by poor choice of cartographic visualization methods or barely legible symbols in the legend. These findings are important, and it may be beneficial to focus on them in future research. School atlases are used in most schools in the Czech Republic, and more user studies focusing on problematic maps may be helpful to publishers and improve the cartographic literacy of students.
Conclusions
School world atlases are crucial in geography education. However, only a few user studies have analysed student learning with a school atlas. The present paper aims to contribute to filling this gap. An eye-tracking experiment with ten tasks using thematic maps from the Czech school world atlas was designed, and the eye-tracking data of 30 students were recorded using a GazePoint eye-tracker. The eye-movements of the students and of their geography teacher were recorded and compared.
The paper defined three research questions and explored the results of an experiment designed to provide answers to these questions.
The results for Q1 show that in general, the students were able to learn with the maps effectively. In this research question, the accuracy of answers of all participants was analysed together with the trial duration. The average correctness of answers from all students was 71%. This analysis pointed to several problematic tasks. Reading values from pie-charts with a logarithmic scale (Task07) posed the greatest difficulties. The topics of logarithmic scales and pie-charts could be addressed in geography education.
The results for Q2 revealed difficulties in solving tasks due to poor cartographic visualization methods; for example, some symbols were hard to distinguish (Task01, Task04). The most serious problems were discovered in students estimating the value of the bar chart (Task10). Students barely understood the legend scale in which one millimetre of the bar represented USD 50 billion in export volume. These issues should be considered in the next edition of the school world atlas.
Q3 targeted map-reading strategies. The study proved that the geography teacher used a different approach to solving the tasks than her students. The experiment revealed that the teacher had a feeling that she should know the correct solution to the task, so she answered according to her knowledge and did not read the map at all. This behaviour was observed in most of the tasks. The teacher looked at the legend in only a few tasks. This strategy, however, resulted in few correct answers. Discovering that a teacher reads a map and solves tasks differently to her students is very serious. If teachers are not aware of this difference and select maps and compile tasks according to "their own strategies", student learning may not be effective. It is desirable that learning with an atlas be based on consistency between the compilation of tasks with the maps and the students' ability to work with these maps. Geography curricula should focus on issues in map reading.
The present eye-tracking study highlighted several maps with poorly applied cartographic methods which created difficulties for students. Moreover, the research highlighted that the teacher used a different approach in map reading. She relied on her knowledge rather than reading the map, answering directly instead of using the map legend.
The results can assist cartographers and map publishers in improving their maps to be more comprehensible to readers. Geography teachers can also use the results to understand how their students read the maps and how to teach geography more attractively and effectively.
|
2020-08-20T10:12:31.392Z
|
2020-08-19T00:00:00.000
|
{
"year": 2020,
"sha1": "c6c67eb7c759a6f6727dfe0e7d9567585e84ac48",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2220-9964/9/9/492/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "46ff108c9f0dde352097c8dccae79192596bc2f2",
"s2fieldsofstudy": [
"Geography",
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
853922
|
pes2o/s2orc
|
v3-fos-license
|
Fibrils of Truncated Pyroglutamyl-Modified Aβ Peptide Exhibit a Similar Structure as Wildtype Mature Aβ Fibrils
Fibrillation of differently modified amyloid β peptides and deposition as senile plaques are hallmarks of Alzheimer’s disease. N-terminally truncated variants, where the glutamate residue 3 is converted into cyclic pyroglutamate (pGlu), form particularly toxic aggregates. We compare the molecular structure and dynamics of fibrils grown from wildtype Aβ(1–40) and pGlu3-Aβ(3–40) on the single amino acid level. Thioflavin T fluorescence, electron microscopy, and X-ray diffraction reveal the general morphology of the amyloid fibrils. We found good agreement between the 13C and 15N NMR chemical shifts indicative for a similar secondary structure of both fibrils. A well-known interresidual contact between the two β-strands of the Aβ fibrils could be confirmed by the detection of interresidual cross peaks in a 13C-13C NMR correlation spectrum between the side chains of Phe 19 and Leu 34. Small differences in the molecular dynamics of residues in the proximity to the pyroglutamyl-modified N-terminus were observed as measured by DIPSHIFT order parameter experiments.
function of time. In agreement with previous results [6], it is observed that also under our conditions the fibrillation of pGlu3-Aβ(3-40) has a significantly shorter lag time and is overall faster than for WT Aβ(1-40). The lag time for pGlu3-Aβ(3-40) is 7 ± 1 h, while it is 43 ± 2 h for Aβ(1-40). The morphology of the pGlu3-Aβ(3-40) fibrils was studied by electron microscopy (EM). Figure 1B shows a typical EM micrograph of pGlu3-Aβ(3-40) fibrils after 3 weeks of incubation, displaying fibrils of homogeneous morphology. These fibrils have a width of 12.8 ± 2.1 nm (n = 20), which is slightly larger than the mean diameter of WT Aβ(1-40) fibrils (10.0 ± 1.6 nm) [18]. A preferentially shorter fibril length for pGlu3-Aβ(3-40), as reported in [6], could not be observed under our fibrillation conditions. Fibrils of pGlu3-Aβ(3-40) exhibit a mean length of 850 ± 300 nm, compared to 500 ± 200 nm for WT Aβ(1-40) fibrils (n = 30). Also, the X-ray diffraction pattern of pGlu3-Aβ(3-40) fibrils (Fig. 1C) exhibits the typical cross-β structure as observed for all other amyloid fibrils. The measured main X-ray reflections correspond to repeat spacings of 4.7 Å and 10.3 Å, which represent the typical values for the interstrand spacing and the intersheet distance, respectively [19].
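For illustration, lag times of this kind are commonly extracted by fitting the ThT trace to a sigmoidal growth function and taking lag = t_half - 2τ; the sketch below uses a Boltzmann sigmoid with synthetic data and describes one common procedure, not necessarily the analysis of ref. [36].

```python
# Sketch: fit a Boltzmann sigmoid to a ThT fluorescence trace and
# derive the fibrillation lag time as t_half - 2*tau.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, f0, a, t_half, tau):
    return f0 + a / (1.0 + np.exp(-(t - t_half) / tau))

# Hypothetical ThT trace: intensity versus time in hours
t = np.linspace(0, 65, 131)
rng = np.random.default_rng(1)
y = sigmoid(t, 1.0, 10.0, 15.0, 2.5) + rng.normal(0.0, 0.1, t.size)

popt, _ = curve_fit(sigmoid, t, y, p0=[1.0, 10.0, 20.0, 3.0])
f0, a, t_half, tau = popt
print(f"lag time ~ {t_half - 2 * tau:.1f} h")
```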
To obtain insights into the secondary structure on the level of individual amino acids, solid-state NMR spectra of pGlu3-Aβ(3-40) fibrils with uniformly 13C/15N-labeled amino acids (for the labeling scheme see experimental section) were measured. In choosing the sites for isotopic labeling, we paid special attention to the N-terminus of pGlu3-Aβ(3-40) to study possible differences in the structures due to the pGlu modification in position 3 and to probe the extent of the known secondary structure elements. To assign the 13C and 15N chemical shifts, 13C-13C DARR and 15N-13Cα NMR correlation spectra were conducted under magic-angle spinning (MAS) conditions in dual acquisition mode [20]. Supplementary Figure S1 shows a 13C-13C DARR and a 15N-13Cα NMR spectrum as examples. The chemical shift values for all labeled amino acids are listed in Table S1. Figure 2 reports the 13Cα and 13Cβ chemical shifts of pGlu3-Aβ(3-40) and mature WT Aβ(1-40) fibrils (data taken from ref. [21]) as differences from random coil values reported in the literature [22]. Since NMR chemical shifts are sensitive to the secondary structure, values close to zero correspond to random coil regions, while negative values for Cα and positive values for Cβ report β-strand conformations [22]. One can clearly see that most of the chemical shift values of both fibrillar species are very similar. Some alterations are observed for the Cβ signal of Phe4, which may result from the direct vicinity to the chemically modified pGlu3. For Phe19 Cβ and Gly29 Cα, two chemical shift values were observed. Such structural polymorphism of Aβ fibrils has been observed before. Overall, a striking structural similarity between pGlu3-Aβ(3-40) and WT Aβ(1-40) fibrils is observed, and one has to conclude that the typical secondary structure elements of WT Aβ(1-40), with an unstructured N-terminus and two β-strand regions comprising amino acids 10-22 and 30-38, which are connected by a short unstructured region, hold also true for pGlu3-Aβ(3-40).
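A minimal sketch of how such secondary chemical shifts can be computed and read; the random-coil reference values below are typical literature numbers, and the observed shifts are placeholders, not the values from Table S1.

```python
# Sketch: classify a residue from its secondary chemical shifts
# (observed minus random coil). Negative dCa together with positive
# dCb points to a beta-strand conformation.
RANDOM_COIL = {"PHE": (57.7, 39.6), "LEU": (55.1, 42.4)}  # (Ca, Cb) in ppm

def secondary_shifts(residue: str, ca_obs: float, cb_obs: float):
    ca_rc, cb_rc = RANDOM_COIL[residue]
    d_ca, d_cb = ca_obs - ca_rc, cb_obs - cb_rc
    is_strand = d_ca < -1.0 and d_cb > 1.0    # simple +/-1 ppm cutoff
    return d_ca, d_cb, is_strand

print(secondary_shifts("PHE", 55.4, 42.1))   # beta-strand-like residue
```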
For a systematic comparison of the secondary structure of pGlu3-Aβ(3-40) fibrils to different preparations of WT Aβ fibrils in different stages of the fibrillation process, Figure 3 shows correlation plots of the differences in the chemical shift values (13Cα - 13Cβ), which are very sensitive to secondary structure and have the advantage of being independent of chemical shift referencing, which may vary between the different laboratories. On the secondary structure level, a very good correlation of the pGlu3-Aβ(3-40) fibrils with the results for WT Aβ fibrils is obtained (A-C). Only slightly smaller correlation coefficients are obtained when pGlu3-Aβ(3-40) fibrils are compared to protofibrils (D) and oligomers (E, F). This indicates that the secondary structure of the pGlu3-Aβ(3-40) fibrils is very similar to mature fibrils, but also to oligomers and protofibrils.
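The comparison itself reduces to a Pearson coefficient over the labelled residues; a sketch with placeholder values rather than the measured shifts:

```python
# Sketch: correlate (Ca - Cb) chemical shift differences of two fibril
# preparations across the labelled residues; r close to 1 indicates a
# very similar secondary structure.
import numpy as np

d_pglu = np.array([14.2, 12.8, 10.5, 16.1, 11.9])   # pGlu3-Abeta(3-40)
d_wt   = np.array([14.0, 13.1, 10.2, 15.8, 12.3])   # WT Abeta(1-40)

r = np.corrcoef(d_pglu, d_wt)[0, 1]
print(f"Pearson r = {r:.3f}")
```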
Tertiary structure information for Aβ(1-40) fibrils represents a field of some controversy and is highly dependent on the number and precision of the structural constraints available, which has led to different structural models [16,17,26,27]. To obtain some insight into the spatial relationship of the two β-strands and the tertiary structure of the pGlu3-Aβ(3-40) peptides in the fibrils, 13C-13C DARR NMR experiments were conducted with a long mixing time of 500 ms. This allows the observation of interresidual contacts between carbons in spatial proximity of up to ~6 Å via the detection of cross peaks between carbons of different amino acids. For mature Aβ(1-40) fibrils as well as oligomers and protofibrils, a contact between the side chains of Phe19 and Leu34 has been well described [16-18,28,29]. This contact indicates the close proximity between the two β-strands of the monomer and the U-shaped structure of the monomers in the fibrils. In the DARR NMR spectrum for pGlu3-Aβ(3-40) fibrils (Figure S1), cross peaks between Phe19 and Leu34 are clearly visible, especially between the aromatic ring carbons of Phe19 and the Cβ signal of Leu34. This suggests a close structural relationship between pGlu3-Aβ(3-40) and WT Aβ(1-40) fibrils also on the tertiary structure level.
It was shown that a molecular contact between Glu22 and Ile31 indicates earlier stages of the fibrillation process in oligomers and protofibrils [30-32], but this contact is absent in mature Aβ fibrils [32]. Based on these observations and other data, a model for the reorganization of the hydrogen bonds from intramolecular for oligomers and protofibrils to intermolecular hydrogen bonds for the mature fibrils has been proposed [30,31]. In our 13C-13C DARR NMR spectrum of the peptide that contains these two amino acids 13C/15N-labeled, such a cross peak indicative of the molecular contact was not observed (Figure S2). This result also confirms that fibrils of pGlu3-Aβ(3-40) exhibit a strong structural similarity to mature Aβ(1-40) fibrils.
Solid-state NMR also offers the possibility to investigate the molecular dynamics of the individual segments of the pGlu3-Aβ(3-40) fibrils. Such measurements can provide information about the different domains in fibrillar peptides or proteins and substantially support the structural data [21,33,34]. We measured the motionally averaged 1H-13C dipolar couplings for each resolved carbon signal in DIPSHIFT experiments and converted these into molecular order parameters. Figure 4 shows the comparison of the order parameters of the backbone Cα of pGlu3-Aβ(3-40) and WT Aβ(1-40) fibrils. For residues in the proximity of the modified N-terminus, the pGlu modification results in a somewhat higher order parameter, which corresponds to smaller motional amplitudes of the fluctuations of these residues. One further notable exception is the order parameter of Ile31, which is significantly higher in pGlu3-Aβ(3-40) compared to WT Aβ(1-40). This may suggest some importance of this residue for the formation of oligomers and protofibrils, as Ile31 appears to be involved in intramolecular contacts of intermediates, but not in mature fibrils. Interestingly, the correlations of the measured order parameters to the only available datasets for mature Aβ(1-40) fibrils [21] and Aβ(1-40) protofibrils [35] are not as good as observed for the chemical shifts/secondary structure. This may in part reflect the fact that chemical shifts are measured more precisely than motionally averaged dipolar couplings, but could also indicate a dynamic polymorphism that relates to small packing differences of the individual residues in the fibrils.
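The conversion from a motionally averaged coupling to an order parameter is a simple ratio; in the sketch below, the rigid-limit value of roughly 21 kHz for a one-bond CH pair is a typical literature figure assumed for illustration, not a number taken from this paper.

```python
# Sketch: order parameter S from a motionally averaged 1H-13C dipolar
# coupling measured in a DIPSHIFT experiment.
RIGID_LIMIT_CH_KHZ = 21.0   # assumed rigid-limit coupling for a CH pair

def order_parameter(measured_khz: float) -> float:
    return measured_khz / RIGID_LIMIT_CH_KHZ

print(f"S = {order_parameter(17.5):.2f}")  # 0.83: fairly rigid backbone
```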
Conclusion
Overall, we conclude that on the level of the single amino acid, fibrils formed of pGlu3-Aβ(3-40) exhibit a strong similarity in the molecular structure compared to WT mature Aβ fibrils. Our data agree with a recent study that reported, on the basis of H/D exchange NMR, FTIR, and CD measurements, that modified pGlu3-Aβ(3-40) and unmodified Aβ(1-40) comprise similar peptide conformations [6]. In that study, fibrils of pGlu3-Aβ(3-40) of shorter length had been reported, which was not confirmed by our EM data. Although the pGlu modification on the N-terminus of truncated Aβ peptides significantly accelerates the fibrillation, the end product of this structure-forming process shows an astonishing similarity to the well described structural features of all mature Aβ fibrils [15,16,23]. This suggests once more that the physiological effects of the pGlu peptides must be mediated by transient oligomers, which are very difficult to characterize. However, the N-terminus of the pGlu3-Aβ(3-40) fibrils showed dynamical alterations that may have an effect on the stability of the intermediates as well as the fibrils, as speculated before [6].

ThT fluorescence measurements. The fibrillation kinetics of pGlu3-Aβ(3-40) was followed by ThT fluorescence intensity measurements. Buffer conditions for fibrillation were the same as above with additional 20 μM ThT in the incubation solution. Volumes of 150 μl were pipetted into the wells of a 96-well plate, which was placed in a Tecan infinite M200 microplate reader (Tecan Group AG, Männedorf, Switzerland). The temperature was kept at 37 °C and a kinetic cycle was applied, such that a 2 s shaking time (2 mm shaking amplitude) followed by a 5 min waiting time was repeated four times, with one additional 2 s shaking at the end and the subsequent fluorescence measurement. Fluorescence excitation was set to 440 nm and emission was measured at 482 nm. The fluorescence intensity was measured in increments of 30 min for an overall time period of 65 h. For comparison, the kinetics of WT Aβ(1-40) fibrillation was also recorded under the same conditions. Data was analyzed using procedures reported in the literature [36].

Electron microscopy. The fibril morphology was checked by electron microscopy (EM). Fibril solutions were diluted 1:1 with pure water and 1 μl droplets of this solution were applied on formvar-coated copper grids, allowed to dry for about 1 h, and negatively stained with 1% uranyl acetate in pure water. Scanning transmission electron micrographs were recorded using a Zeiss SIGMA (Zeiss NTS, Oberkochen, Germany) equipped with a STEM detector and Atlas software.
Sample preparation.
Solid-state MAS NMR spectroscopy. For NMR measurements, fibril solutions were ultracentrifuged at ~200,000× g for 4 h at 4 °C. The pellets were lyophilized, rehydrated to 50 wt% H2O, homogenized by several freeze-thaw cycles, and finally transferred into 3.2 mm MAS rotors. All MAS NMR experiments were conducted on a Bruker 600 Avance III NMR spectrometer (Bruker BioSpin GmbH, Rheinstetten, Germany) at a resonance frequency of 600.1 MHz for 1H, 150.9 MHz for 13C, and 60.8 MHz for 15N using a triple-channel 3.2 mm MAS probe. Typical pulse lengths were 4 μs for 1H and 13C and 5 μs for 15N. 1H-13C and 1H-15N CP contact times were 1 ms at a spin lock field of ~50 kHz. The relaxation delay was 2.5 s. 1H dipolar decoupling during acquisition with a radio frequency amplitude of 65 kHz was applied using Spinal64. The MAS frequency was 11,777 Hz. 13C chemical shifts were referenced externally relative to TMS. 13C-13C DARR NMR spectra and 13C-15N correlation spectra were acquired simultaneously using dual acquisition [20]. In the same experiment, a two-dimensional 13C-13C DARR NMR spectrum with a mixing time of 500 ms with 128 data points and four identical 15N-13Cα correlation spectra with 32 data points in the indirect dimensions were measured. The 15N-13Cα spectra were processed using NMRPIPE software [37].
To determine 1H-13C dipolar couplings, constant-time DIPSHIFT experiments [38] were performed. For homonuclear decoupling during dipolar evolution, frequency-switched Lee-Goldburg (FSLG) decoupling [39] with an effective radio frequency field of 80 kHz was used. The MAS frequency for DIPSHIFT experiments was 5 kHz. After Fourier transformation in the direct dimension, the signal intensities of the dephasing curve for each resolved carbon were simulated, and the determined coupling was divided by the known rigid limit values to obtain the order parameters [40,41]. The temperature for all NMR experiments was 30 °C.

X-ray diffraction measurements. For X-ray diffraction measurements, fibril samples from the MAS rotors were placed on nylon loops (Hampton Research, Aliso Viejo, CA, USA) and mounted onto the goniometer head of an X-ray source (Rigaku copper rotating anode MM007 with 0.8 kW, Tokyo, Japan). The signals were recorded using an image plate detector (Rigaku, Tokyo, Japan) with an exposure time of 180 s at room temperature. Diffraction images were analyzed using ImageJ [42].
|
2017-10-24T03:49:28.307Z
|
2016-09-21T00:00:00.000
|
{
"year": 2016,
"sha1": "dd47d2c9905bad7a3c5b6953a79ea283550d3f97",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep33531.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "13e9174fc2d2f078dd04af95d8b3c21d59d1b142",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
17848305
|
pes2o/s2orc
|
v3-fos-license
|
Synergetic Hybrid Aerogels of Vanadia and Graphene as Electrode Materials of Supercapacitors
The performance of synergetic hybrid aerogel materials of vanadia and graphene as electrode materials in supercapacitors was evaluated. The hybrid materials were synthesized by two methods. In Method I, premade graphene oxide (GO) hydrogel was first chemically reduced by L-ascorbic acid and then soaked in vanadium triisopropoxide solution to obtain V2O5 gel in the pores of the reduced graphene oxide (rGO) hydrogel. The gel was supercritically dried to obtain the hybrid aerogel. In Method II, vanadium triisopropoxide was hydrolyzed from a solution in water with GO particles uniformly dispersed to obtain the hybrid gel. The hybrid aerogel was obtained by supercritical drying of the gel followed by thermal reduction of GO. The electrode materials were prepared by mixing 80 wt % hybrid aerogel with 10 wt % carbon black and 10 wt % polyvinylidene fluoride. The hybrid materials in Method II showed higher capacitance due to better interactions between vanadia and graphene oxide particles and more uniform vanadia particle distribution.
Introduction
Supercapacitors have gained considerable attention as energy storage devices due to their high storage capability, excellent discharge-charge rates, long life cycle, and low maintenance costs [1]. Supercapacitors are divided into two categories based on the mechanism of energy storage: (1) electrochemical double-layer capacitors (EDLCs) based on carbon materials, and (2) pseudocapacitors based on electrode materials composed of redox metal oxides or conducting polymers [2]. The capacitance of EDLCs originates from the electrical double layer formed at the electrode-electrolyte interface. The charge accumulation, in this case, occurs via a non-Faradaic process [2,3]. In the case of pseudocapacitors, the capacitance is derived from a Faradaic process whereby the redox reactions of the electrode materials and the electrolyte occur at the electrode surfaces [4,5].
Graphene aerogels consisting of three-dimensional networks of graphene sheets were considered as electrode materials of supercapacitors in recent investigations [6]. The hierarchically porous structures with high surface area attributed to aerogels and the high values of electrical conductivity of graphene sheets were found responsible for a high specific capacitance of 128 F/g at a current density of 50 mA·g−1 [7]. However, higher capacitance than most other EDLCs cannot be achieved due to the limitations imposed by the surface area [2]. Some transition metal oxides with various valences were extensively studied for pseudocapacitance produced via redox reactions of the electrode materials and the electrolyte [2]. Among the transition metal oxides, V2O5 is considered a promising electrode material due to its large specific capacitance and easy synthesis process [8]. Although V2O5 exhibits a satisfactory specific capacitance of 214-346 F/g in KCl [3], its applications and development are somewhat restricted due to poor cycle stability caused by the collapse of the structures during the charge-discharge process and the low values of electrical conductivity [8][9][10]. These obstacles must be removed to capitalize on the higher specific capacitance values of V2O5, possibly through identification of a set of synergistic materials.
In this work, such a synergy was obtained by combining V2O5 and graphene aerogels into single hybrid materials. In these materials, graphene contributes a smooth path for charge transport and provides mechanically stronger structures, while the organized V2O5 particles provide a scope for energy storage. This paper presents two facile and green methods for preparation of hybrid aerogels based on V2O5 and reduced graphene oxide (rGO). In addition, the synergetic materials in this work avoid the often-encountered compatibility issues of two entirely different materials in the hybrids [11].
Preparation of Graphene Oxide Dispersions
Graphite powder was oxidized to graphene oxide (GO) via a modified Hummers method reported elsewhere [6]. Briefly, 12 mL concentrated H2SO4 was heated up to 90 °C in a 100 mL beaker. Then 2.5 g K2S2O8 and 2.5 g P2O5 were added into the beaker with gentle stirring until the materials dissolved completely. The solution was cooled down to 80 °C and 3 g graphite powder was added into it. The mixture was kept at 80 °C for 5 h and then cooled down to room temperature, diluted with 500 mL DI water, and allowed to stand overnight. The solid was filtered, washed with DI water to remove the residual acid, and allowed to dry under ambient conditions. The solid was transferred into 120 mL cold concentrated H2SO4 at 0 °C and 15 g KMnO4 was added to it slowly under stirring. The mixture was kept at 35 °C under stirring for 2 h to obtain graphene oxide (GO), after which it was diluted slowly with 250 mL DI water to keep the temperature below 50 °C. The mixture was stirred for 2 h, diluted with 700 mL DI water, and mixed with 20 mL 30 wt % H2O2 solution. The mixture containing GO particles was allowed to stand for 12 h and the solid was washed with 1 L 5 wt % HCl solution to remove the metal ions, followed by 1 L DI water to remove the residual acid. GO was dispersed in DI water by ultrasonication for 40 min, after which the concentration of GO in the dispersion was adjusted to 7 mg/mL.
Method I: Preparation of Graphene-Templated Hybrid Aerogel
The hybrid aerogel was prepared in this method using a general procedure reported elsewhere [12]. GO dispersion was diluted to 2 mg/mL and dispersed using ultrasonication for 40 min. L-ascorbic acid (0.42 g) was added into 70 mL GO dispersion under stirring until it completely dissolved. The mixture was heated to 50 °C and allowed to stand for 12 h to form reduced graphene oxide (rGO) gel. The rGO gel was washed with large quantities of DI water over three days to remove the soluble ions and the residual L-ascorbic acid, followed by solvent exchange with acetone for three days to remove water. The graphene gel in acetone was soaked overnight in a solution of 0.8 mL VO(OC3H7)3 in 4.5 mL acetone. This allowed diffusion of VO(OC3H7)3 molecules into the pores of the graphene gel. The gel was then transferred into another vial filled with 3 mL acetone and 1 mL DI water to induce hydrolysis of VO(OC3H7)3 and condensation of vanadium hydroxide into V2O5 gel inside the pores of the graphene gel. The molar ratio VO(OC3H7)3:water:acetone was kept at 1:60:45. The number of moles (n) of VO(OC3H7)3 was estimated from the relationship n = C·V, where C is the molar concentration of VO(OC3H7)3 and V (= π·r²·l) is the volume of the cylinder of graphene gel, with r and l as the radius and the length of the graphene specimen, respectively.
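A short sketch of this estimate follows; the gel radius and length are hypothetical inputs.

```python
# Sketch: moles of VO(OC3H7)3 from n = C * V with V = pi * r^2 * l for
# the cylindrical graphene gel specimen.
import math

def moles_precursor(conc_mol_per_l: float, r_cm: float, l_cm: float) -> float:
    v_ml = math.pi * r_cm**2 * l_cm         # cylinder volume in cm^3 (= mL)
    return conc_mol_per_l * v_ml / 1000.0   # convert mL to L

# e.g., a 0.82 M solution and a gel of 0.5 cm radius and 2 cm length
print(f"n = {moles_precursor(0.82, 0.5, 2.0):.4f} mol")
```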
The hybrid gel was allowed to stand for three days at room temperature for aging, after which the gel was washed with acetone for two days to remove the residual water. The hybrid gel was dried using supercritical CO2. Three groups of hybrid aerogels were prepared at different concentrations of VO(OC3H7)3, with the molar ratio of VO(OC3H7)3:H2O:acetone kept constant. The concentration of VO(OC3H7)3 was maintained at 0.64 M, 0.82 M, and 1 M, and the corresponding hybrid aerogels are denoted in the rest of the paper as rGO-V2O5-0.64, rGO-V2O5-0.82, and rGO-V2O5-1, respectively.
Method II: Preparation of Hybrid Aerogel Via One-Pot Synthesis
The hydrolysis of VO(OC3H7)3 yields vanadium hydroxide. The polar functional groups, such as -COOH and -OH, on graphene surfaces can interact with vanadium hydroxide or its oligomers. The vanadium hydroxide forms an octahedral complex with one H2O molecule opposite to the V=O bond and the hydroxyl group on the graphene oxide surfaces coordinating at the equatorial plane, as shown in Figure S1 [13]. These coordination bonds facilitate the growth of VO oligomers and enhance their interactions with the graphene sheets.
A typical preparation process is as follows: 1 mL VO(OC3H7)3 and 5 mL acetone were mixed in a vial in an ice bath under vigorous stirring. As gelation of VO(OC3H7)3 occurs rapidly, all of the chemicals were kept in an ice bath for 15 min prior to mixing. A 4 mL GO dispersion was added into the vial and mixed to induce hydrolysis of VO(OC3H7)3. The concentration of GO in the aqueous dispersions was maintained at 2 mg/mL, 4 mg/mL, or 6 mg/mL. The corresponding materials are designated as V2O5-rGO-2, V2O5-rGO-4, and V2O5-rGO-6, respectively, in the rest of the paper.
The resultant liquid was transferred into a cylindrical mold and the gel formed in approximately 1 min. The gel was aged for two days at room temperature in the sealed cylindrical mold. During the aging process, the color of the gel changed from dark red to deep green, indicating the presence of V4+ ions [14]. The final gel was washed with acetone five times at 5 h intervals to remove the residual water from the gel. The hybrid aerogel was obtained by supercritical drying of the gel using CO2. The as-prepared aerogel was heated in air at 300 °C for 90 min to reduce GO into graphene [15]. Simultaneously, V4+ ions underwent oxidation to V5+ ions. In addition, V2O5 underwent partial crystallization due to the thermal treatment.
Characterization
The morphology of the prepared materials was examined by scanning electron microscopy (SEM; JEOL-7401, Boston, MA, USA) and transmission electron microscopy (TEM; JEOL-1230, Boston, MA, USA). X-ray diffraction (Bruker AXS Dimension D8, Billerica, MA, USA) was carried out to study the crystalline structures of the samples. The chemical structures were analyzed by Raman spectroscopy (Horiba LabRam HR, Kyoto, Japan) and Fourier transform infrared spectroscopy (Nicolet 380, Cleveland, OH, USA). The surface properties, including Brunauer-Emmett-Teller (BET) surface area and Barrett-Joyner-Halenda (BJH) pore size distribution, were characterized with a Micromeritics Tristar II 3020 analyzer (Micromeritics, Norcross, GA, USA). The thermal stability of the prepared samples was determined by thermogravimetric analysis (TGA; TA Instruments, New Castle, DE, USA) in air with a 10 °C/min heating rate and an air flow rate of 60 mL/min.
Electrochemical Measurements
The electrochemical performance of the materials was determined using a symmetrical two-electrode system with 1 M Na2SO4 aqueous solution as the electrolyte. The electrode was prepared by mixing 80 wt % hybrid aerogel as the active material, 10 wt % PVDF as the binder, and 10 wt % carbon black as the conducting material into a slurry. The slurry was dropped onto a nickel foam, spread over a 1 cm² area, and the materials were dried at 110 °C for 12 h in vacuum to remove the solvent. The coated nickel foam was then compressed under 10 MPa pressure. The weight of active material on each electrode was kept in the range of 0.45-0.5 mg. Cyclic voltammetry (CV) tests with a potential window of −1 to 1 V at scan rates varying from 10 to 100 mV/s and electrochemical impedance spectroscopy (EIS) over a frequency range of 0.01 Hz to 100 kHz were performed on a CHI600E electrochemical workstation (CH Instruments, Inc., Austin, TX, USA). The galvanostatic charge/discharge tests were carried out between −1 and 1 V. The specific capacitance (C) for a single electrode was obtained from the charge/discharge curve using Equation (1), where I is the applied constant current, m is the mass of the active material in one electrode, Δt is the discharging time, and ΔV is the potential window.
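A minimal sketch of this capacitance calculation, assuming the widely used single-electrode form for a symmetric two-electrode cell, C = 2·I·Δt/(m·ΔV); the factor of 2 is an assumption here, as the exact form of Equation (1) is not shown in this text:

```python
def specific_capacitance(I_A, dt_s, m_g, dV_V, cell_factor=2.0):
    """Single-electrode specific capacitance (F/g) from a galvanostatic
    discharge; cell_factor=2 assumes a symmetric two-electrode cell with
    m the active mass of ONE electrode (drop it for a single-cell form)."""
    return cell_factor * I_A * dt_s / (m_g * dV_V)

# Illustrative numbers only: 1 A/g on 0.5 mg, 2 V window, 272 s discharge
print(specific_capacitance(0.5e-3, 272.0, 0.5e-3, 2.0))   # -> 272 F/g
```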
Hybrid Aerogels Produced in Method I
For hybrid aerogels produced in Method I, the concentration of VO(OC3H7)3 in solution was varied from 0.64 M to 1 M. The SEM image of rGO aerogel (Figure 1a) shows a uniform morphology of randomly-arranged sheet-like structures of graphene. The SEM image of V2O5 aerogel presented in Figure 1b displays 3-D networks of rod-like interconnected vanadia particles, which is further verified by the TEM image in Figure 1c. The image in Figure 1c suggests that the building blocks of V2O5 aerogel were rod-like structures of typical length approximately 300 nm and diameter of several tens of nanometers. Such rod-like structures with typical length 1 µm were reported earlier [16]. The TEM image of the hybrid aerogel rGO-V2O5-0.82 in Figure 1d indicates the presence of both the rod-like structure of V2O5, as in Figure 1c, and the layers of thin graphene sheets, as seen in Figure 1a. We attribute such a hybrid aerogel structure to the specific gelation process used in Method I.

The composition of the as-prepared hybrid aerogels was quantitatively determined from the TGA traces generated in air. In Figure S2a, the TGA traces revealed the following trends: gradual weight loss just above room temperature due to a loss of moisture; weight reduction close to 200 °C due to the removal of bound water; and more dramatic weight loss between 300 °C and 350 °C due to the decomposition of the organic groups on graphene [17]. The steep decline in specimen weight at 400-430 °C was due to burning off of graphene [11]. The weight gain after 430 °C is attributed to oxidation of V4+ ions to V5+ ions. The differences between the residual weights of the hybrid aerogels and that of the V2O5 aerogel with the same amount of vanadium isopropoxide in the formulation were used to determine the approximate graphene content of the hybrid aerogels. The graphene contents were found to be 29 wt %, 24 wt %, and 17 wt %, respectively, for hybrid aerogels rGO-V2O5-0.64, rGO-V2O5-0.82, and rGO-V2O5-1.
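The graphene-content estimate described above is simple arithmetic on the TGA residuals. A minimal sketch under one plausible reading of the procedure — in air the graphene burns off completely while V2O5 survives, so the residual-weight gap between a hybrid aerogel and a V2O5-only control made from the same amount of vanadium isopropoxide approximates the graphene fraction; the residual values below are hypothetical, as the paper reports only the final percentages:

```python
def graphene_wt_percent(residual_hybrid, residual_v2o5_control):
    """Approximate graphene content (wt %) of a hybrid aerogel from TGA
    in air: graphene burns off completely, V2O5 remains, so the gap in
    residual weights (both in wt % of initial mass) is attributed to
    graphene."""
    return residual_v2o5_control - residual_hybrid

# Hypothetical residuals, for illustration only
print(graphene_wt_percent(residual_hybrid=70.0, residual_v2o5_control=94.0))  # 24.0
```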
The crystal structure of the samples, especially the basal spacing of graphene particles, was inferred from the X-ray diffraction (XRD) patterns shown in Figure 2. The basal spacing of GO was found to be 0.91 nm, which is much larger than that of natural graphite (0.33 nm), indicating that chemical oxidation of graphite to GO was complete in these materials and that the interlayer distance was expanded [18]. In the case of rGO aerogel, the XRD pattern shows a broad peak at around 24°. In addition, the peak at 9.8° corresponding to the (001) plane of GO is absent in Figure 2a. This indicates that the reduction of GO to graphene using L-ascorbic acid was successful. The XRD pattern of the rGO-V2O5-0.82 hybrid aerogel revealed only two reflection peaks, at around 7.4° and 27.9°, and the low intensity of the peaks suggests a small degree of crystalline order [19]. In view of previous work [20,21], the (001) peak at 7.4° corresponds to the layer stacking of V2O5 (d = 1.19 nm) [22,23]. The absence of a graphene peak in the XRD patterns of hybrid aerogels indicates that restacking of graphene sheets did not occur in hybrid aerogels and that graphene was present mostly as single sheets [24]. The FTIR spectra shown in Figure S3 suggest the chemical structures of GO, rGO, and the hybrid aerogels.
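The basal spacings quoted above follow directly from Bragg's law. A short check, assuming Cu Kα radiation (λ = 0.15406 nm) — the usual source on a Bruker D8, though the wavelength is not stated in this text:

```python
import math

def d_spacing_nm(two_theta_deg, wavelength_nm=0.15406):
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * math.sin(theta))

print(f"GO (001) at 9.8 deg   -> d = {d_spacing_nm(9.8):.2f} nm")  # ~0.90 nm, consistent with 0.91 nm
print(f"V2O5 (001) at 7.4 deg -> d = {d_spacing_nm(7.4):.2f} nm")  # ~1.19 nm, as quoted
```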
Raman spectroscopy was used to determine the quality of graphene by analyzing the intensity ratio of the D- and G-bands. The D-band results from the disorder-induced mode reflecting structural defects, and the G-band is associated with the E2g mode from sp2-hybridized carbon domains [25]. The intensity ratio ID/IG reflects the degree of disorder of carbon materials [26]. The Raman spectra of GO and rGO aerogel are shown in Figure 2c. There are two obvious peaks at around 1357 cm−1 and 1608 cm−1 in the Raman spectrum of GO, corresponding to the D-band and G-band, respectively. The Raman spectrum of rGO aerogel also shows the D- (at 1343 cm−1) and G-band (at 1585 cm−1). The ID/IG ratio of rGO aerogel is 1.24, which is greater than that of GO (1.15), indicating a reduction of the average size of the sp2 domains [27] when the oxygen groups were removed during the reduction of GO. In addition, the half-maximum width of the G-band was reduced, which suggests a high degree of graphitization of the rGO aerogel [28].
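The ID/IG metric used above is just a ratio of band intensities; a rough sketch of how it can be read off a spectrum (the synthetic spectrum below is illustrative only — a real analysis would fit Lorentzian or Gaussian profiles to measured data):

```python
import numpy as np

def lorentzian(x, x0, w, a):
    """Lorentzian profile with center x0, half-width w, amplitude a."""
    return a * w**2 / ((x - x0)**2 + w**2)

def d_to_g_ratio(shift, intensity, d_center=1350.0, g_center=1590.0, window=50.0):
    """Crude I_D/I_G estimate from peak maxima near the nominal band
    positions; curve fitting would be more robust."""
    def height(center):
        mask = np.abs(shift - center) < window
        return intensity[mask].max()
    return height(d_center) / height(g_center)

# Synthetic rGO-like spectrum: D-band at 1343 cm^-1, G-band at 1585 cm^-1
x = np.linspace(1000, 2000, 2000)
y = lorentzian(x, 1343, 40, 1.24) + lorentzian(x, 1585, 30, 1.0) + 0.02
print(f"I_D/I_G = {d_to_g_ratio(x, y):.2f}")   # ~1.2, near the reported 1.24
```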
The presence of platelet-type graphene particles and rod-like vanadia particles in the hybrid aerogels presents several interesting scenarios in terms of solid surface area and pore size distribution. Barrett-Joyner-Halenda (BJH) pore size distributions of rGO aerogel and rGO-V2O5-0.82 hybrid aerogel are presented in Figure S4a,b. These were obtained from the nitrogen adsorption-desorption isotherms shown in Figure S5. As is evident, the isotherms in Figure S5 are type IV, indicating mesoporous structures. Both samples show mesopores with distinct pore size distributions, as seen in Figure S4. In the case of rGO aerogel, the predominant pore sizes are 4 nm and 60 nm, while those for rGO-V2O5-0.82 hybrid aerogel are 4 nm and 40 nm, the latter due to the presence of V2O5. The BET specific surface areas of rGO-V2O5 hybrid aerogels are generally lower than that of rGO aerogel (641 m²/g), e.g., 396, 348, and 299 m²/g, respectively, for hybrid aerogel specimens rGO-V2O5-0.64, rGO-V2O5-0.82, and rGO-V2O5-1. Note that the specific surface area of V2O5 aerogel (254 m²/g) is much smaller than that of rGO aerogel.
The specific capacitances of the electrodes fabricated using V2O5 aerogel, rGO aerogel, and V2O5-rGO hybrid aerogels were determined from galvanostatic charge/discharge test data. The values of specific capacitance are shown in Figure 3a. We find that the specific capacitance of the rGO-V2O5-0.82 hybrid aerogel calculated from charge/discharge curves is much higher (203 F/g) than those of the rGO aerogel (105 F/g) and the V2O5 aerogel (132 F/g). In view of this, the electrochemical performance of the rGO-V2O5-0.82 hybrid aerogel was further investigated. In Figure 3b, the CV curve of the rGO aerogel exhibits an almost rectangular shape, suggesting an EDLC characteristic [3]. In the case of the rGO-V2O5-0.82 hybrid aerogel, a pair of redox peaks is observed, with a larger curve area than the V2O5 aerogel and the graphene aerogel at the same scan rate. These indicate that the specific capacitance of the rGO-V2O5-0.82 hybrid aerogel is higher than both the rGO and V2O5 aerogels, with a Faradaic redox reaction occurring during the charge/discharge process (Figure 3c) [3]. Furthermore, for the hybrid aerogel system, the redox peaks are more well-defined, which implies that the presence of graphene improves the utilization of V2O5 and therefore contributes pseudocapacitance to the overall capacitance. The CV curves of the rGO-V2O5-0.82 hybrid aerogel at varied scan rates (Figure 3d) show a regular shape, symmetric about the zero-current line, indicating excellent electrochemical reversibility due to the synergetic effect of the EDLC and the pseudocapacitor. The Nyquist plot shown in Figure 3e reveals that the rGO aerogel shows an almost ideal EDLC feature, with a nearly vertical straight line along the imaginary axis in the low-frequency region. For the rGO-V2O5-0.82 hybrid aerogel, a more vertical line than for the V2O5 aerogel is observed, closer to the line for the rGO aerogel. Moreover, the charge-transfer resistance (RCT), measured by the diameter of the semicircle in the high-frequency region, also reflects the capacitive property [29]. The calculated values of RCT for the rGO aerogel, the V2O5 aerogel, and the rGO-V2O5-0.82 hybrid aerogel are 0.4 Ω, 3.7 Ω, and 1.7 Ω, respectively. These results indicate that the combination of V2O5 and graphene improves the conductivity of the electrode materials and shortens the ion transport path.
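The link between CV loop area and capacitance noted above can be made explicit with the standard estimate C = ∮|I|·|dE| / (2·ν·m·ΔV); this is an alternative to the charge/discharge-based Equation (1), sketched here with an ideal-capacitor sanity check:

```python
import numpy as np

def capacitance_from_cv(E_V, I_A, scan_rate_Vs, mass_g):
    """Specific capacitance (F/g) from one full CV cycle:
    C = (loop integral of |I| over |dE|) / (2 * nu * m * window)."""
    dE = np.abs(np.diff(E_V))
    I_mid = 0.5 * (np.abs(I_A[1:]) + np.abs(I_A[:-1]))
    loop = np.sum(I_mid * dE)                       # A*V accumulated around the loop
    window = E_V.max() - E_V.min()
    return loop / (2.0 * scan_rate_Vs * mass_g * window)

# Sanity check with an ideal 200 F/g electrode: 0.5 mg, 10 mV/s, -1..1 V
nu, m, Csp = 0.01, 0.5e-3, 200.0
E = np.concatenate([np.linspace(-1, 1, 500), np.linspace(1, -1, 500)])
I = np.concatenate([np.full(500, Csp * m * nu), np.full(500, -Csp * m * nu)])
print(capacitance_from_cv(E, I, nu, m))             # -> ~200 F/g
```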
The cycle stability was tested at 10 A/g over 1000 charge/discharge cycles (Figure 3f). It can be seen that the degradation of the specific capacitance of the rGO aerogel starts from 100 cycles with a consistent, but small, reduction until 1000 cycles, exhibiting a characteristic of EDLC. The V2O5 aerogel shows quite poor cycle stability due to the structural damage caused by the mechanical stress induced by ion intercalation/de-intercalation during the charge-discharge process [30] and dissolution of vanadium oxide [31]. However, the cycle stability of the rGO-V2O5-0.82 hybrid aerogel improved markedly, retaining 73% of its initial capacitance. This was achieved due to the presence of graphene nanosheets that were able to withstand the structural changes during the charge-discharge process.
Hybrid Aerogels from One-Pot Synthesis via Method II
The SEM and TEM images (Figure 1e,f) of the heat-treated V2O5 aerogel reveal a randomly-arranged rod-like morphology with lengths less than 100 nm and diameters of several nanometers. These dimensions are much smaller than those of the V2O5 aerogel and can be attributed to crystallization at the time of thermal treatment. The V2O5-rGO-6 hybrid aerogel displays a morphology of crumpled graphene sheets on which the growth of V2O5 nanofibers is apparent (Figure 1g,h). We note that GO particles were well-dispersed in water and the V2O5 nanofibers possibly grew on the surfaces of graphene sheets. The morphology of the V2O5-rGO-6 hybrid aerogel further confirms the interaction of VO oligomers with GO sheets in the hydrolysis step.

Figure S2b shows the TGA traces of the samples conducted in air at a heating rate of 10 °C/min. The weight loss of the V2O5 aerogel before 150 °C is due to the evaporation of absorbed moisture. In the case of the V2O5-rGO hybrid aerogels, there is gradual weight loss from around room temperature to about 400 °C, suggesting that such loss was due to the evaporation of moisture and decomposition of the oxygen-containing groups of GO. In addition, for all hybrid aerogels, a sharp weight reduction occurred in the range of 400 °C to 480 °C, due to the burning of graphene [32]. The graphene weight percent in hybrid aerogels V2O5-rGO-2, V2O5-rGO-4, and V2O5-rGO-6 was estimated to be 12%, 17%, and 23%, respectively.

The X-ray diffraction patterns of the heat-treated V2O5 aerogel and V2O5-rGO hybrid aerogels (Figure 4a,b) were examined to determine whether the heat treatment step used for thermal reduction of GO also caused substantial changes in the crystal structures of the hybrid materials. In Figure 4b, the XRD patterns of a series of V2O5-rGO hybrid aerogels exhibit (200), (001), (110), (301), and (310) reflection peaks corresponding to orthorhombic crystalline V2O5 (JCPDS No. 41-1426) at the same positions as in the case of V2O5 aerogel (Figure 4a). These indicate that the V2O5-rGO hybrid aerogels were partly crystalline due to the thermal treatment. Additionally, in comparison to V2O5 aerogel, no additional peaks are detected in the hybrid aerogels, which indicates high purity of the V2O5 parts of the aerogel. The intensity of the XRD reflection peaks decreases with an increase of the amount of graphene in the V2O5-rGO hybrid aerogels, while the peak positions remain the same. This implies that the V2O5 layered structure was maintained despite the presence of graphene, although graphene influenced its crystal formation. The XRD patterns show only V2O5 reflection peaks in the V2O5-rGO hybrid aerogels, which indicates that V2O5 nanofibers prevented restacking of graphene nanosheets [22].

Figure 4c shows FTIR spectra to verify whether the thermal reduction method successfully reduced GO to graphene. The characteristic peaks in the FTIR spectra at 569, 834, and 1018 cm−1 of the V2O5 aerogel correspond to triply-coordinated oxygen (chain oxygen) bonds in vanadium oxide, doubly-coordinated oxygen (bridge oxygen) bonds, and the stretching vibration of terminal oxygen bonds (V=O), respectively [24]. For the V2O5-rGO-6 hybrid aerogel, the vanadium oxide characteristic peaks shifted, indicating an interaction of V2O5 nanofibers and graphene nanosheets [33]. The band at 1599 cm−1 is clearly observed due to the skeletal vibration of graphene [34].
In the Raman spectra shown in Figure 4d,e, peaks at 144, 284, 406, 522, 699, and 995 cm−1 can be observed in both the V2O5 aerogel and the V2O5-rGO-6 hybrid aerogel, corresponding to the following signatures: skeletal bending vibration of the V-O-V bonds (144 cm−1), bending vibrations of the V=O bonds (284 cm−1), bending vibrations of bridge oxygen bonds (406 cm−1), stretching vibrations of triply-coordinated oxygen bonds (522 cm−1), stretching vibrations of doubly-coordinated oxygen (699 cm−1), and in-phase stretching vibration of V=O bonds (995 cm−1) [13]. The two peaks at 1359 and 1601 cm−1 in the V2O5-rGO-6 hybrid aerogel belong to the D- and G-bands of graphene, which suggests a complete reduction of GO and the presence of graphene.
The BJH pore size distributions (Figure S4c) of the V2O5-rGO hybrid aerogels exhibit slightly different distributions from the V2O5 aerogel; in particular, the V2O5-rGO-6 hybrid aerogel shows a peak at 40 nm. This difference in pore size distribution between the two types of samples might have some effect on the electrochemical properties, because mesopores provide more accessibility for ion transport [35]. The BET-specific surface areas of the samples are listed in Tables 1 and 2.
The BET-specific surface area of the V2O5-rGO-6 hybrid aerogel (392 m²/g) is much larger than that of the V2O5 aerogel (213 m²/g), which resulted from the presence of high-surface-area graphene nanosheets. This has the potential to provide much higher capacity for energy storage. Note in this case that the surface area of the V2O5 aerogel listed in Table 2 is lower than that reported in Table 1. After annealing, the pore structure of the V2O5 aerogel changed due to the oxidation of VOx and partial crystallization of V2O5; thus, the BET-specific surface area of the heat-treated V2O5 aerogel was reduced to 213 m²/g. For a supercapacitor, especially for hybrid electrodes, the surface area serves as a significant factor for charge storage, providing space for the Faradaic redox reactions that produce pseudocapacitance. In this way, the large specific surface area of the V2O5-rGO hybrid aerogels favors capacitance.

The specific capacitance was obtained from the charge-discharge curves and calculated from Equation (1). The galvanostatic charge/discharge curves of the different V2O5-rGO hybrid aerogels are shown in Figure 5a. It can be seen that the specific capacitance of the V2O5-rGO hybrid aerogels is generally higher than that of the V2O5 aerogel (98 F/g). In addition, an increase in specific capacitance is observed for materials with higher graphene content. The specific capacitances of the V2O5-rGO-2, V2O5-rGO-4, and V2O5-rGO-6 hybrid aerogels are 163, 202, and 272 F/g, respectively, as summarized in Figure 5b. This tendency is driven by a synergetic effect of V2O5 and graphene, in which graphene enhances the conductivity and increases the specific surface area of the V2O5-rGO hybrid aerogels, promoting better utilization of V2O5.

Since the specific capacitance of the V2O5-rGO-6 hybrid aerogel is the highest among the three materials studied, its electrochemical performance was investigated further. The cyclic voltammetry curves of both samples are shown in Figure 5c. The shape of the CV curves of both materials is symmetrical, suggesting electrochemical reversibility. These curves deviate from an ideal rectangle, indicating Faradaic pseudocapacitance. A pair of redox peaks at potentials around 0.32 V and 0.30 V resulted from the transformation between V5+ and V4+. However, the CV curve of the V2O5-rGO-6 hybrid aerogel shows less well-defined redox peaks, which is possibly caused by the EDLC contribution of graphene. This result further confirms the synergetic effect of the double-layer capacitance and pseudocapacitance [36]. The area of the CV curve of the V2O5-rGO-6 hybrid aerogel is much larger than that of the V2O5 aerogel, which also confirms the higher specific capacitance of the V2O5-rGO-6 hybrid aerogel. The CV curves of the V2O5-rGO-6 hybrid aerogel at different scan rates are shown in Figure 5d in order to further study the capacitive performance. As the scan rate increases, the quasi-rectangular shape of the CV curves is affected, although the shape of the curves remains mirror-symmetric about the zero-current line even at the high scan rate of 100 mV/s. This result implies a high rate performance and good capacitive reversibility [37]. Note that the specific capacitance of the V2O5-rGO-6 hybrid aerogel is much higher than that of the graphene-templated rGO-V2O5-0.82 hybrid aerogel (203 F/g) produced in Method I with a similar amount of graphene, even though V2O5 is partly crystalline in the V2O5-rGO-6 hybrid aerogel due to the thermal reduction.
It can be concluded from the data presented up to this point that the preparation method strongly influenced the structure-property relationships and the final electrochemical performance. In Method I, capillary force contributed to selective localization of V2O5 on the graphene nanosheets, and the unavoidable diffusion gradient quite possibly led to a heterogeneous dispersion of V2O5 within the rGO aerogel. In Method II, the combination of V2O5 and graphene in the hybrid aerogels was easily accomplished and enhanced by the coordination bonds between V2O5 and graphene nanosheets, which are much stronger than the capillary force in Method I. Meanwhile, the uniform mixing of V2O5 and graphene in Method II also produced a homogeneous hybrid structure. In addition, the BET-specific surface area of the V2O5-rGO-6 hybrid aerogel is greater than that of the rGO-V2O5-0.82 hybrid aerogel, suggesting more capacity for energy storage in hybrid materials produced by Method II. These factors explain why the hybrid aerogels from Method I exhibit inferior electrochemical performance compared to those produced by Method II.
The Nyquist plots of the V2O5 aerogel and the V2O5-rGO-6 hybrid aerogel are shown in Figure 5e; the corresponding equivalent circuit is presented in Figure S6. In comparison, the V2O5-rGO-6 hybrid aerogel shows a more vertical line parallel to the imaginary axis than the V2O5 aerogel, which indicates that the electrical conductivity of the hybrid aerogel was improved, or that the accessibility of the electrolyte ions was enhanced, due to the combination of the graphene and V2O5 structures; the hybrid therefore behaves more closely to an ideal supercapacitor. The cycle stabilities of the V2O5 aerogel and the V2O5-rGO-6 hybrid aerogel were characterized by a charge/discharge test over 1000 cycles at a constant current density of 10 A/g, shown in Figure 5f. The capacitance retention of the V2O5-rGO-6 hybrid aerogel was 80% after 1000 operation cycles, while that of the V2O5 aerogel was only 35% of the initial specific capacitance. The low capacitance retention of the V2O5 aerogel can be attributed to structural damage caused by the insertion and extraction of electrolyte ions during the charge/discharge process [38]; the two clear drops after the 350th and 700th cycles could possibly be caused by large structural changes of V2O5. According to the published literature, pure V2O5 shows very poor cycle stability over as few as 100 cycles in aqueous electrolyte [19]. Obviously, the addition of graphene improved the cycle stability of the materials developed in this work.
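The Nyquist features described above — a high-frequency semicircle whose diameter gives RCT and a near-vertical low-frequency line for capacitive behavior — can be reproduced with a simple Randles-type model. The element values below are illustrative (loosely echoing the RCT values quoted earlier), not fitted to the paper's data, and the Warburg diffusion element is omitted for brevity:

```python
import numpy as np

def randles_impedance(f_Hz, Rs, Rct, Cdl, Cl):
    """Series resistance Rs + (Rct in parallel with double-layer Cdl),
    in series with a low-frequency capacitance Cl (idealized electrode)."""
    w = 2 * np.pi * np.asarray(f_Hz, dtype=float)
    z_semicircle = Rct / (1 + 1j * w * Rct * Cdl)   # charge-transfer arc
    z_tail = 1 / (1j * w * Cl)                      # near-vertical capacitive line
    return Rs + z_semicircle + z_tail

f = np.logspace(-2, 5, 200)                         # 0.01 Hz .. 100 kHz, as measured
Z = randles_impedance(f, Rs=1.0, Rct=1.7, Cdl=1e-5, Cl=0.1)
# A Nyquist plot of (Z.real, -Z.imag) shows a semicircle of diameter ~Rct
# followed by a steep low-frequency line, as described in the text.
```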
Conclusions
This study considered two methods for the preparation of hybrid aerogels of V2O5 and rGO. The specific capacitance of the hybrid materials prepared by Method I increased to 203 F/g at 24 wt % graphene. Such improved capacitance is due to the synergy derived from the higher electrical conductivity of graphene, the greater contact between the electrode and electrolyte due to the higher surface area, and the pseudocapacitance of V2O5. The hybrid materials prepared by the one-pot synthesis route in Method II produced a much higher capacitance of 272 F/g at 23 wt % graphene content and exhibited improved cycle stability due to better interaction of V2O5 particles with graphene surfaces and homogeneous dispersion of the precursor materials.
Figure 3. (a) Summary of specific capacitance of a series of hybrid aerogels prepared by Method I compared with pure V2O5 and rGO aerogels; (b) cyclic voltammetry curves of the rGO-V2O5-0.82 hybrid aerogel compared with pure V2O5 and rGO aerogels at a scan rate of 10 mV/s; (c) galvanostatic charge/discharge curves of rGO and V2O5 aerogels and the rGO-V2O5-0.82 hybrid aerogel at a current density of 1 A/g; (d) CV curves of the rGO-V2O5-0.82 hybrid aerogel at varied scan rates; (e) electrochemical impedance spectra measured over a frequency range of 0.01 Hz-100 kHz for the rGO aerogel, the V2O5 aerogel, and the rGO-V2O5-0.82 hybrid aerogel; and (f) cycle stability of the rGO aerogel, the V2O5 aerogel, and the rGO-V2O5-0.82 hybrid aerogel.
Figure 4. (a) XRD patterns of the V2O5 aerogel; (b) XRD patterns of the V2O5-rGO hybrid aerogels with varying weight compositions; (c) FTIR spectra of the V2O5 and V2O5-rGO-6 hybrid aerogels; (d,e) Raman spectra of the V2O5 aerogel and the V2O5-rGO-6 hybrid aerogel.
Figure 5. (a) Galvanostatic charge/discharge curves of the V2O5 and V2O5-rGO hybrid aerogels with different ratios at a current density of 1 A/g; (b) summary of specific capacitance of the V2O5 and V2O5-rGO hybrid aerogels; (c) CV curves of the V2O5 and V2O5-rGO-6 hybrid aerogels at a scan rate of 10 mV/s; (d) CV curves of the V2O5-rGO-6 hybrid aerogel at varying scan rates; (e) EIS of the V2O5 and V2O5-rGO-6 hybrid aerogels measured over a frequency range of 0.01 Hz-100 kHz; and (f) cycle stability of the V2O5 and V2O5-rGO-6 hybrid aerogels tested at a current density of 10 A/g.
Table 1. BET-specific surface area of the V2O5, graphene, and V2O5-rGO hybrid aerogels prepared by Method I.

Table 2. BET-specific surface area of the V2O5 aerogel and V2O5-rGO hybrid aerogels prepared by Method II.
Uterine Tumors Resembling Ovarian Sex Cord Tumors – Treatment, recurrence, pregnancy and brief review
Background: Uterine Tumors Resembling Ovarian Sex Cord Tumors (UTROSCT) are rare tumors of low malignancy. In the past, these tumors were mainly treated by hysterectomy. More recently, some authors have proposed conservative surgical management for women wishing to preserve fertility. This article is the first to report on organ-preserving treatment in the case of recurrence or disease persistence. Cases: We report on three patients with UTROSCT, two of them young and not having completed family planning. One even gave birth to a healthy child after fertility-preserving treatment of a persistent UTROSCT. To our knowledge, this is the first pregnancy reported after surgical treatment of a persistent UTROSCT so far. Conclusion: A fertility-sparing approach should always be considered in young women with UTROSCT who wish to preserve their fertility, also in cases of recurrence or disease persistence.
Introduction
The occurrence of Uterine Tumors Resembling Ovarian Sex Cord Tumors (UTROSCT) was first described in 1945 (Morehead and Bowman, 1945). In 1976, a series of 14 cases was added (Clement and Scully, 1976). To date, fewer than 100 cases of UTROSCT have been published (Morehead and Bowman, 1945; Clement and Scully, 1976; O'Meara et al., 2009; Blake et al., 2014; Jeong et al., 2015; De Franciscis et al., 2016; Berretta et al., 2009; Giordano et al., 2010; Anastasakis et al., 2008; Hillard et al., 2004; Garuti et al., 2009; De Leval et al., 2010; Biermann et al., 2008; Gomes et al., 2015; Lantta et al., 1984). According to the WHO, UTROSCT are classified in the group of endometrial stromal and related tumors. The entity is defined as a "neoplasm resembling ovarian sex cord tumors without a component of recognizable endometrial stroma" (IARC 2014, 4th ed.). The classification of UTROSCT found in the literature is sometimes unspecified, and a distinction between the more aggressive ESTSCLE (endometrial stromal tumors with sex cord-like elements) and UTROSCT is not always made. However, this distinction is highly important because of the different behavior of these tumors.
UTROSCT are tumors of low malignant potential. They usually behave in a benign fashion; however, some may recur. The patients typically present with a bleeding disorder and/or a uterine mass.
These tumors are usually well-demarcated myometrial nodules with sharp or infiltrating borders. Some grow as polyps. They are smoother, fleshier and yellow to tan compared to leiomyoma. They may present different histological patterns such as trabecular, glandular, solid, diffuse or mixed. The cytoplasm can be scant or more abundant, often rich in lipid. The nuclei are small, inconspicuous and mitoses are very rare.
In this article, we report on three patients with UTROSCT, two of them young, not having completed family planning. One of them even gave birth to a healthy child after a second extensive fertility-sparing surgical treatment. To our knowledge, this is the first pregnancy reported under these conditions so far (Blake et al., 2014;Jeong et al., 2015;De Franciscis et al., 2016).
Case reports
The first patient, a 24-year-old woman, suffered from abnormal uterine bleeding (hypermenorrhea) and secondary dysmenorrhea.
Ultrasound examination revealed a persistent submucosal mass resembling a leiomyoma in the fundal anterior wall of the uterine corpus. Over a two-year period, the size remained stable. Due to increasing symptoms, a hysteroscopy (Fig. 1) with resection of the submucosal tumor was performed. The histological diagnosis was UTROSCT with expression of calretinin, one of the smooth muscle markers, WT1 and AE1/AE3 cytokeratin. It did not express inhibin. Three months later, a fundal lesion of 8 mm was visible on a pelvic MRI. The patient strongly desired fertility-preserving treatment, so a repeat hysteroscopy with biopsies was performed, showing no histological evidence of UTROSCT. We agreed on regular clinical surveillance visits with imaging by ultrasound or additionally MRI. Six months later, a pelvic MRI suggested disease recurrence with a uterine mass of 15 mm. The patient strongly desired another fertility-preserving surgery. The diagnostic hysteroscopy showed no abnormalities.
In the concurrent open abdominal surgery, the intramural tumor was located by palpation and completely resected. Pathological results were consistent with UTROSCT. The margins were free of tumor and the peritoneal lavage did not exhibit any tumor cells. The patient has been under clinical as well as radiological surveillance since the last surgery and has now been disease free for 56 months. The second patient, a 28-year-old woman, was referred to us after the operation of a symptomatic, slowly growing, cystic-solid tumor of 10 cm in the uterine anterior wall (Fig. 2), performed at another hospital. The tumor had been completely resected macroscopically through a lower abdominal incision; however, the surgery was complicated by strong bleeding and unintended opening of the tumor. The histological diagnosis was UTROSCT. Because of the possibility of recurrence, the hospital where the initial surgery was performed recommended a subsequent hysterectomy. As the patient did not want to undergo another surgery, regular clinical surveillance visits with additional imaging by MRI were performed. At the surveillance visit two months after surgery, the MRI indicated disease persistence with a tumor mass of 3 × 4 × 5 cm in the anterior uterine wall (Fig. 3). As family planning was not completed, the patient did not wish to undergo a hysterectomy. She was referred to us for a second opinion and requested fertility-preserving surgery. There was no evidence of macroscopic spread of the disease during open abdominal surgery. A round, yellowish mass measuring 3 × 3 cm (Fig. 4) was found in the anterior wall of the uterus. In contrast to the first case, the palpated texture was identical to the rest of the myometrium. The whole tumor was macro- and microscopically removed. Immunohistochemistry showed positivity for calretinin, one of the smooth muscle markers, WT1 and AE1/AE3 cytokeratin. It was negative for inhibin.
The patient conceived without any problems and gave birth to a healthy child by cesarean section 19 months after the last surgery. Due to completed family planning, an abdominal hysterectomy with simultaneous removal of the distal part of the fallopian tubes on both sides was performed directly after the cesarean section. The entire tissue was free of tumor.
We agreed on the same procedure for oncological surveillance visits as in the first case. At one of the following surveillance visits, 20 months later, sonography revealed a 7 cm polycystic tumor in the small pelvis. MRI showed the same finding, with strong contrast enhancement. Because of strong suspicion of recurrence, a third laparotomy was performed. The lower abdomen showed peritoneal carcinomatosis. The polycystic tumor originated from the right adnexa, infiltrating part of the vaginal wall. The tumor, both adnexa, part of the vaginal wall and the affected peritoneum were removed. The final pathological report showed an UTROSCT infiltrating the peritoneum, the right fallopian tube, both ovaries and the vaginal wall. A complete resection was achieved.
Due to a lack of guidelines for adjuvant treatment, chemotherapy with 3 cycles of bleomycin, etoposide and cisplatin (BEP), in line with the adjuvant treatment of Sertoli-Leydig cell tumors, was considered. However, the patient declined chemotherapy. Because of the estrogen and progesterone receptor positivity of this UTROSCT, endocrine treatment with the aromatase inhibitor anastrozole was initiated as an alternative.
The patient is still being monitored clinically and by ultrasound every three months and by MRI every six months. She is currently disease free 34 months after the last surgery.
The third patient was a 72-year-old woman with a one-time postmenopausal bleeding. Sonography only showed an atrophied endometrial layer. Diagnostic hysteroscopy with curettage was performed and was unsuspicious. Pathology showed an UTROSCT infiltrating the endometrium. The tumor was positive for calretinin, one of the smooth muscle markers, AE1/AE3 cytokeratin and also for inhibin. A total hysterectomy with adnexectomy was recommended. An MRI of the pelvic region before surgery showed no uterine tumor or infiltration of other organs. No macroscopic tumor was found during surgery and pathology also indicated no further signs of a tumor. The patient is still being monitored clinically and radiologically and is disease free 46 months after the last surgery.
Discussion
A fertility-preserving option for younger women with UTROSCT has only recently been suggested by some authors (O'Meara et al., 2009;Blake et al., 2014;Jeong et al., 2015;De Franciscis et al., 2016;Berretta et al., 2009;Giordano et al., 2010;Anastasakis et al., 2008;Hillard et al., 2004;Garuti et al., 2009). The goal of this article is to show that tumor resection alone, instead of hysterectomy, may be an option for treating UTROSCT. This is viable even in cases of recurrence or disease persistence. However, a complete resection of the whole tumor without harming the external layer is vital, as is generally the case in oncological surgery.
To the best of our knowledge, this article is the first to report on organ-preserving treatment in cases of recurrence or disease persistence. The recurrence of UTROSCT in the first case may have been caused by incomplete resection during the first surgery. The persistent/recurring UTROSCT in the second case was most likely caused by incomplete resection during the first operation and the subsequent spread of tumor cells into the abdominal cavity. Due to the presence of tumor in both ovaries, hematogenous metastasis could also be considered.
Follow-up surveillance visits in the reported cases were performed regularly, clinically and with imaging by ultrasound or additionally by MRI. The shortest disease-free interval was six months, in the first case mentioned in this report. To the best of our knowledge, there are only two cases of recurrent UTROSCT described in the literature (O'Meara et al., 2009; Biermann et al., 2008). In these cases, recurrences after three and four years were documented (O'Meara et al., 2009; Biermann et al., 2008). Based on these data, regular and frequent long-term follow-up controls may be recommended, in line with standard gynecological tumor aftercare programs.
Four successful pregnancies following uterus-sparing treatment of UTROSCT (Blake et al., 2014;Jeong et al., 2015;De Franciscis et al., 2016) have been described in the medical literature since 1945. The second case mentioned in this article would be the fifth. However, our case is the first pregnancy described after surgical treatment of a persistent and extensive UTROSCT. Nonetheless, due to the possibility of late local recurrences and the lack of experience with UTROSCT, hysterectomy should be performed after completion of family planning.
According to the literature, chemotherapy with bleomycin, etoposide and cisplatin seems to be an option for adjuvant treatment (O'Meara et al., 2009; Gomes et al., 2015). A (follow-up) treatment with anastrozole in tumors with estrogen/progesterone receptor positivity should also be considered. Lantta et al. (1984) suggested as early as 1984 that UTROSCT should be tested for steroid receptor expression to evaluate a possible hormone treatment. However, due to a lack of cases and data, general recommendations on the type or duration of adjuvant chemotherapy cannot be made.
In conclusion, based on the cases described here, as well as on the published evidence available, a fertility-sparing approach should always be considered in women with UTROSCT who wish to preserve their fertility. If a complete resection of the tumor is achieved, recurring and persistent UTROSCT can also be treated by uterus-preserving surgery. The resection of the whole tumor as a complete mass is vital to avoid the spreading of tumor cells into the abdominal cavity and thus to reduce the risk of recurrence. Further case reports are needed to prove the safety of organ-preserving treatment in UTROSCT and to establish a treatment protocol.
The authors declare no conflicting interests. Written, informed consent was given by all three patients.
Disclosure
None of the authors have a conflict of interest.
Financial support
None.
Evaluation of insecticide-induced hormesis on the demographic parameters of Myzus persicae and expression changes of metabolic resistance detoxification genes
Insecticide-induced hormesis is a biphasic phenomenon generally characterized by low-dose induction and high-dose inhibition. It has been linked to insect pest outbreaks and insecticide resistance, both of which matter for integrated pest management (IPM). In this paper, hormesis effects of four insecticides on demographic parameters and on the expression of genes associated with metabolic resistance were evaluated in a field-collected population of the green peach aphid, Myzus persicae Sulzer. The bioassay results showed that imidacloprid was more toxic than acetamiprid, deltamethrin and lambda-cyhalothrin. After exposure to sublethal doses of acetamiprid and imidacloprid for four generations, significantly prolonged nymphal duration and increased fecundity were observed. Consequently, mean generation time (T) and gross reproductive rate (GRR) were significantly increased. Moreover, expression of the CYP6CY3 gene, associated with resistance to neonicotinoids, was increased significantly compared to the control. For pyrethroids, exposure to sublethal doses of lambda-cyhalothrin and deltamethrin across generations prolonged the immature developmental duration. However, the expression of the E4 gene in M. persicae was decreased by deltamethrin exposure but increased by lambda-cyhalothrin. Based on these results, demographic fitness parameters were affected by hormetic doses, accompanied by alterations in detoxifying genes; these effects should be considered when developing optimized insect pest management strategies.
Pesticide-induced hormesis is a biphasic phenomenon resulting from low-dose stimulation and high-dose inhibition following insecticide exposure [1,2]. Recently, insecticide-induced sublethal effects have been reported in various agricultural insect pests [3-5]. It has been shown that hormesis responses can accelerate insect population growth, result in insect pest resurgence [6-8], and benefit resistance development [9-11], all of which have significance for insect pest and insecticide resistance management [8].
The green peach aphid, Myzus persicae (Sulzer) (Hemiptera: Aphididae), is an economically significant agricultural crop pest across the world, causing severe damage to over 400 plant species [12]. Therefore, numerous insecticides are used to control this insect, and, in turn, resistance to multiple insecticides has developed [13]. The most common resistance mechanisms include target-site mutations and over-expression of detoxification enzymes [14,15]. In M. persicae, metabolic enzymes reported to confer resistance include esterase E4 (or the Mediterranean variant FE4), giving broad-spectrum resistance to organophosphates, carbamates and pyrethroids, and cytochrome P450 CYP6CY3, conferring resistance to neonicotinoids [16]. Regarding esterase-mediated pyrethroid resistance in M. persicae, previous studies have indicated that esterase E4/FE4 amplification is related to a certain level of resistance to deltamethrin [17]. In China, field populations of M. persicae were found to have developed high levels of resistance to β-cypermethrin and cypermethrin, and a high frequency of FE4 amplification was significantly correlated with resistance of M. persicae to these two pyrethroid insecticides [18]. Low-level resistance to neonicotinoids was observed very soon after this class of insecticides was introduced to control M. persicae. Studies by Puinean et al. revealed that over-expression of a P450 gene, CYP6CY3, was related to neonicotinoid resistance [19]. Subsequently, over-expression of CYP6CY3 was identified in several field populations of M. persicae [16,18]. Hormesis, an evolutionary adaptation of organisms' response to environmental stress, has significance in insect pest management because insects may be exposed to sublethal levels of pesticides. It has been documented that low-dose exposure to insecticides can hasten the evolution of pesticide resistance by increasing mutation frequencies [20]. In M. persicae, hormesis effects induced by several insecticides have been reported [3-5,20-23]. In addition to stimulating responses in different life-history traits, it has also been reported that low-dose insecticide exposure alters the expression of genes associated with metamorphosis and reproductive development in M. persicae [22]. According to Rix et al., exposure to hormetic concentrations of imidacloprid can prime offspring to better withstand subsequent insecticide stress, but does not result in mutations in any of the examined nicotinic acetylcholine receptor (nAChR) subunits in a wild greenhouse population of M. persicae [20].
In China, M. persicae is one of the most serious pests, causing great economic losses every year. Recently, Tang et al. reported that field populations of M. persicae collected from eleven sites, including Langfang, have developed multiple levels of resistance to pyrethroids and neonicotinoids, and that over-expression of the cytochrome P450 CYP6CY3 and esterase E4/FE4 genes is involved in resistance to neonicotinoids and pyrethroids, respectively 18 . Therefore, we studied the hormetic effects of pyrethroid and neonicotinoid insecticides on the demographic parameters of M. persicae collected from greenhouse crops in Langfang. Moreover, we were interested in whether hormetic responses change the expression of cytochrome P450 CYP6CY3 and esterase E4/FE4. These results may provide important information for developing optimized integrated pest and resistance management strategies for M. persicae.
Sublethal effects of neonicotinoids. Compared to the control group, the hormetic effects of two neonicotinoids, acetamiprid and imidacloprid, on M. persicae were assessed after exposure to sublethal concentrations for one and four generations, respectively. As shown in Table 2, after exposure to the sublethal concentration (LC 20 ) of acetamiprid for one generation, the development time of the 4th instar (N4), female longevity and total fecundity were significantly increased among all tested biological parameters of M. persicae compared with the control group. After exposure for four generations, the duration of the nymphal stages (N1, N2, N3) and the pre-adult period were significantly prolonged, and female longevity, total fecundity and the total preoviposition period (TPOP) were significantly increased. In the case of imidacloprid, female longevity, per-day fecundity and total fecundity of M. persicae were significantly increased compared with the control group after one generation. Under continuous treatment with a sublethal dose of imidacloprid for four generations, the developmental duration of the third nymphal instar was significantly prolonged, and female longevity, per-day fecundity and total fecundity were significantly increased. Sublethal effects of the neonicotinoids on the demographic parameters were calculated and are shown in Table 3. The intrinsic rate of increase (r m ) and finite rate of increase (λ) in M. persicae exposed to acetamiprid and imidacloprid were significantly reduced compared with the control group. Likewise, the mean generation time (T) and gross reproductive rate (GRR) in both neonicotinoid sublethal-exposure groups were higher than in the control group. However, there was no significant difference in the net reproductive rate (R 0 ) between M. persicae exposed to the two neonicotinoids and the control group.
Sublethal effects of pyrethroids. As shown in Table 4, none of the tested biological parameters of M. persicae were significantly affected after exposure to the sublethal concentration (LC 20 ) of deltamethrin for one generation, except longevity. After exposure for four generations, the duration of the nymphal stages (N3 and N4), the pre-adult period and the total preoviposition period (TPOP) were significantly prolonged. The developmental times of the 1st and 3rd instar nymphs, daily fecundity, the adult preoviposition period (APOP) and the total preoviposition period (TPOP) were significantly increased compared with the control group after exposure to the sublethal concentration (LC 20 ) of lambda-cyhalothrin for four generations. When exposure to the sublethal concentration of deltamethrin was continued to the F4 generation, the duration of the 1st instar nymph, the pre-adult period and the total preoviposition period (TPOP) were significantly increased.
Sublethal effects of the pyrethroids on the demographic parameters are shown in Table 5. The intrinsic rate of increase (r m ) and finite rate of increase (λ) under both insecticides were significantly reduced compared with the control group. Likewise, the mean generation time (T) was higher at both the F1 and F4 generations than in the control group. The gross reproductive rate (GRR) was significantly increased at the F1 generation.
Insecticide resistance-linked gene expression. In M. persicae, resistance to neonicotinoid and pyrethroid insecticides has been associated with over-expression of the cytochrome P450 CYP6CY3 and esterase E4/FE4 genes, respectively. In this paper, the hormetic effects of sublethal doses of insecticides on the cytochrome P450 CYP6CY3 and esterase E4 genes were measured (Fig. 1). After exposure to imidacloprid for one and four generations, CYP6CY3 expression increased 1.95- and 5.20-fold, respectively. In the case of
Discussion
Many organisms adopt adaptive mechanisms to ensure survival and reproduction in stressful surroundings. In this study, we have shown that short and prolonged sublethal exposure of M. persicae to neonicotinoid and pyrethroid insecticides significantly increased reproduction across generations. We have also shown that hormetic insecticide exposure can significantly increase the expression of detoxifying genes involved in insecticide resistance. All these results suggest that hormesis is involved in the adaptive mechanisms of M. persicae following sublethal exposure to neonicotinoid and pyrethroid insecticides 24 . In our study, the nymphal instar periods (N1, N2, N3) were significantly prolonged after sublethal exposure to acetamiprid, whereas only the third instar showed a prolonged duration in the case of imidacloprid exposure. Female longevity, per-day fecundity and total fecundity were significantly increased after sublethal exposure to the neonicotinoids in both the F1 and F4 generations. Previously, insecticide-induced hormesis in M. persicae has been studied across multiple generations [3][4][5][20][21][22][23] , and has also been reported in other aphid species 25 , leafhopper species and citrus thrips [26][27][28][29] . Similarly, increased population growth and survival of M. persicae after exposure to sublethal concentrations of imidacloprid, azadirachtin and azinphosmethyl have been reported [30][31][32][33] . Moreover, extended development times of A. glycines Matsumura, Brevicoryne brassicae and A. gossypii at low doses of imidacloprid support our findings [34][35][36][37][38][39] . The extended development after low-dose exposure could reflect that exposed aphids require prolonged nutrient acquisition and mass reproduction to cope with a chemical or other stressor.
Application of low doses of pyrethroids has also been shown to affect the developmental duration of insects, as reported by Kerns and Stewart 37 . In the current study, sublethal exposure of M. persicae to the pyrethroids for one and four generations delayed only the development of the nymphal instars. There was no significant difference in reproduction between the control and low-dose exposure groups. A similar lack of stimulatory effects has been shown in A. gossypii exposed to sulfoxaflor and imidacloprid at LC 20 35,38 .
In our experiments, the intrinsic rate of increase (r m ) and finite rate of increase (λ) were both reduced compared with the control group, consistent with the studies of Zeng et al. 5 , Wang et al. 23 , and Tang et al. 4 . Comparison of the demographic parameters of M. persicae showed that the population was affected by the different treatments. For evaluation of insecticide effects, it is recommended to study insect life-history parameters. The increased generation time (T) and gross reproductive rate (GRR) may suggest that sublethal concentrations can, to some extent, suppress or slow the growth of M. persicae; similar outcomes have been reported in Bradysia odoriphaga and Hippodamia variegata 39,40 .
In this paper, the expression of the esterase E4 and P450 CYP6CY3 genes was changed in M. persicae following low-dose exposure to pyrethroids and neonicotinoids. Previous studies have demonstrated that sublethal or hormetic doses of insecticides can alter the expression of detoxifying genes. Ayyanath et al. 3,22 quantified the expression of several genes, including Hsp, FPPS I, OSD, ANT and TOL, following low-dose insecticide exposure in M. persicae. In the present study, esterase E4 expression increased about 2.09-fold after low-dose lambda-cyhalothrin exposure. However, the relative expression level of E4 mRNA was inhibited after exposure to deltamethrin, which might be considered a fitness cost of other positive adaptive mechanisms. It has been proposed that hormesis can drive an adaptive mechanism that promotes organismal phenotypic plasticity to cope with ongoing and deleterious environmental variation. Our results suggest that M. persicae developed different adaptive patterns in response to low-dose exposure to lambda-cyhalothrin and deltamethrin. The constitution and stereochemical structure of pyrethroids play an important role during continuous pyrethroid exposure in insects 41,42 . Additionally, different mechanisms have been documented during deltamethrin exposure in insects, including elevation of carboxylesterase activity through gene over-expression, point mutations within carboxylesterase genes that change their substrate specificity, and point mutations in the sodium channel 42,43 .
P450 CYP6CY3 expression was increased in M. persicae following low-dose imidacloprid and acetamiprid exposure at F4. Many studies have shown that increased expression of CYP6CY3 is associated with increased metabolism, which leads to insecticide resistance 19,44,45 . It was expected that hormetic insecticide exposure would alter the expression of detoxifying genes. In this paper, exposure to imidacloprid and acetamiprid for four generations clearly induced over-expression of P450 CYP6CY3, suggesting that hormetic exposure can significantly increase the expression of detoxifying genes involved in M. persicae resistance to pesticides. Therefore, resistance of M. persicae to neonicotinoids might be hastened by low-dose exposure to this class of insecticides, which should be considered in developing optimized insect pest management strategies.
In conclusion, continuous uneven application and degradation of insecticides in the field mean that insects are frequently exposed to hormetic concentrations of insecticides. Two pyrethroids and two neonicotinoids, commonly used to control aphids, were applied at low doses over four generations to study the demographic parameters of M. persicae as well as their impact on gene expression. In our toxicity bioassay, imidacloprid was more toxic than the other insecticides. Based on the present study, developmental stages were delayed by exposure to hormetic doses, which potentially increased M. persicae reproduction. The potential effects of sublethal doses should also be evaluated on natural enemies over the long term, as well as for the control of aphid-borne viruses and potential application in IPM under field conditions. Previously, it has been concluded that xenobiotics can enhance the production of detoxifying genes and result in resistance. To our knowledge, this is the first study to report the induction of these genes' expression following low-dose exposure to pyrethroids and neonicotinoids over four generations, coinciding with increases in population reproduction.
Materials and Methods
Ethics statement. No permission was required for insect collection, and no species used in the study were endangered or protected. 20 ) or control (water-treated) and placed in petri dishes as described above. The adult apterous aphids were released onto the treated and control petri dishes. After 24 h of exposure, 50 first-instar nymphs (F1) from the treated or control group were randomly selected and individually placed in separate petri dishes to study life parameters 5,18 . Meanwhile, the remaining neonates (F1) were reared for further generations, and the leaf discs inside the petri dishes were replaced every 2-3 days for both the treated and control groups. At the fourth generation (F4), the same method was repeated with the first instars to study life parameters. During the experiments, the petri dishes were kept inside a climatic chamber under the controlled conditions described above.
Demographic Analysis. The raw data on the daily survival of newborn nymphs and adults and on their longevity were used to calculate the following demographic parameters using TWOSEX-MSChart 2015.045 46 . Net reproductive rate: R0 = Σ lx mx, the number of times an individual population will multiply per generation; generation time: T = (Σ x lx mx)/R0, the average time separating the birth of females of one generation from the next; intrinsic rate of increase: rm = ln(R0)/T, the innate capacity of the species to increase in numbers; gross reproductive rate: GRR = Σ mx, the average number of females produced; finite rate of increase: λ = e^rm, the rate at which a population multiplies in one day. Here, x is the pivotal age of an individual in days, and lx and mx are the age-specific survival rate and fecundity at age x, respectively.
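For illustration, a minimal Python sketch of these formulas is given below, using invented lx and mx schedules. TWOSEX-MSChart implements the more elaborate age-stage, two-sex life table, so this is only a simplified approximation of the software's calculations.

```python
import numpy as np

def demographic_parameters(lx, mx):
    """Classical life-table parameters from age-specific survival (lx)
    and fecundity (mx) schedules, with pivotal age x in days."""
    x = np.arange(len(lx))
    lxmx = np.asarray(lx) * np.asarray(mx)
    R0 = lxmx.sum()                # net reproductive rate
    T = (x * lxmx).sum() / R0      # mean generation time (days)
    rm = np.log(R0) / T            # intrinsic rate of increase
    GRR = np.asarray(mx).sum()     # gross reproductive rate
    lam = np.exp(rm)               # finite rate of increase
    return R0, T, rm, GRR, lam

# Toy schedules for illustration only (not the paper's data):
lx = [1.0] * 10 + [0.9] * 10 + [0.5] * 10
mx = [0.0] * 8 + [2.0] * 14 + [0.5] * 8
print(demographic_parameters(lx, mx))
```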
Total RNA Extraction and cDNA Synthesis. A total of 20-30 aphids from each treatment were pooled as one biological replicate for RNA isolation with RNeasy® mini kits (Qiagen, ON, Canada). RNA quantity and quality (A260/280 > 2.0) were determined with a Nanodrop ND-1000 (NanoDrop Technologies, Wilmington, DE) and also checked by gel electrophoresis (1% gel). cDNA was then synthesized from one microgram of total RNA using the Omniscript® Reverse Transcription Kit (Qiagen, ON, Canada) and stored at −20 °C for later analysis.
Gene expression. Quantitative real-time PCR (qRT-PCR) was carried out with TransStart® Top Green qPCR SuperMix (Transgen, Beijing, China) on a CFX Connect™ Real-Time System (BIO-RAD, Singapore). Primers for esterase E4 (GenBank accession no. X74554) and cytochrome P450 CYP6CY3 (GenBank accession no. HM009309) are listed in Table 6. β-actin was used as the internal reference gene. Each 20 µL reaction was performed in a 50 µL tube according to the manufacturer's directions and contained 10 µL of 2× TransStart® Top Green qPCR SuperMix, 0.4 µL of each forward and reverse primer, 8.2 µL ddH2O and 1 µL template. The thermal cycling procedure involved an initial denaturation step at 94 °C for 30 s, followed by 40 cycles of 94 °C for 5 s and 60 °C for 30 s, after which a dissociation step was performed. The qRT-PCR analysis included three independent biological replicates for each treatment. The fold change of the target genes was calculated using the relative quantification method (2 −ΔΔCt ) described by Pfaffl 47 .
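The 2^(-ΔΔCt) fold-change calculation referenced above can be sketched as follows. The Ct values are hypothetical placeholders, and the sketch assumes approximately 100% amplification efficiency for both the target and the reference gene.

```python
def fold_change_ddct(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    """Relative expression by the 2^(-ddCt) method, from mean Ct values
    of the target gene (e.g. CYP6CY3 or E4) and the reference gene
    (beta-actin) in treated and control samples."""
    dct_trt = ct_target_trt - ct_ref_trt  # normalize to reference gene
    dct_ctl = ct_target_ctl - ct_ref_ctl
    ddct = dct_trt - dct_ctl              # express relative to control
    return 2.0 ** (-ddct)

# Hypothetical Ct values for illustration only:
print(fold_change_ddct(22.0, 18.0, 24.4, 18.0))  # ~5.3-fold up-regulation
```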
Statistical analysis. Data were statistically analyzed using one-way analysis of variance (ANOVA) followed by the least significant difference (LSD) test in SPSS software version 22.0. Results were considered significant at P < 0.05. For precise estimation, the bootstrap method with 100,000 replications was applied to the M. persicae demographic parameters, and mean values as well as standard errors were calculated in TWOSEX-MSChart 2015.045 46 . For the relative expression of the P450 CYP6CY3 and esterase E4 genes, Tukey comparisons were used to test mean differences among treatments across generations, with means separated by LSD tests.
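The resampling idea behind the bootstrap can be illustrated with a short sketch. The fecundity values below are invented, and TWOSEX-MSChart applies the same principle to the full set of life-table parameters rather than to a single trait as shown here.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_se(values, stat=np.mean, n_boot=100_000):
    """Bootstrap mean and standard error of a statistic by resampling
    individuals with replacement."""
    values = np.asarray(values)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        sample = rng.choice(values, size=values.size, replace=True)
        boots[i] = stat(sample)
    return boots.mean(), boots.std(ddof=1)

# Hypothetical total fecundity of 20 females (offspring/female):
fecundity = [31, 28, 40, 35, 22, 45, 38, 30, 27, 33,
             36, 29, 41, 34, 26, 39, 32, 37, 25, 30]
print(bootstrap_se(fecundity))
```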
Economics of Rebreeding Nonpregnant Dairy Cows Diagnosed by Transrectal Ultrasonography on Day 25 after Artificial Insemination
Simple Summary
An early and accurate pregnancy diagnosis can be used to improve reproductive performance in dairy cows through synchronization timed artificial insemination (TAI) programs. This is key to shortening the calving interval, which improves profitability in dairy farms. This study presents the feasibility of two TAI programs coupled with early pregnancy diagnosis in dairy cows, 25 days after artificial insemination (AI). Many studies have reported the pregnancy rate obtained with various reproductive programs, but few have examined the financial benefits of doing so. By using this strategy, a profitability of 89.6 USD/cow/year can be generated. The contribution to net value presents a breakdown of the income over feed cost, replacement cost, reproductive-program cost, and calf value. The benefit in favor of the TAI programs for the cows failing to conceive in this study most likely reflects the additional income over feed relative to the cost of the administered hormones.
Abstract
Pregnancy rates of Holstein cows have declined substantially in recent years, prompting intensive TAI programs for nonpregnant cows that shorten the period between an unsuccessful insemination and the next attempt on the same cow. Although many studies have examined the improvement in pregnancy rates following TAI, only a few have examined the economic impact of such programs. In this study, we examine the feasibility of reproductive programs that included early pregnancy diagnosis performed by transrectal ultrasonography 25 days after artificial insemination (AI) and TAI of nonpregnant cows. This resulted in the following two TAI programs: a modified OvSynch program with a second PGF2α treatment at a 24 h interval (GPPG, n = 100) and a modified OvSynch program with an intravaginal progesterone-release device inserted between days 0-7 (PRID + GPPG, n = 100). Cows included in the TAI programs showed an improvement in the cumulative pregnancy rate (67% vs. 53%; 69% vs. 53%) compared to those in which this strategy was not applied (p < 0.05). An economic analysis was performed using a decision-support tool to estimate the net present value (NPV; USD/cow/year). The analysis revealed a difference in NPV of 89.6 USD/cow/year between the programs (rebreeding the nonpregnant cows following the TAI program vs. AI at detected estrus). In summary, rebreeding nonpregnant cows after an early negative pregnancy diagnosis (25 days after AI) using this strategy can improve the cumulative pregnancy rate and the profitability of dairy farms.
Introduction
For better reproductive control in dairy cows, an accurate early pregnancy diagnosis is essential. Early pregnancy detection helps the veterinarian identify open animals and rebreed them as soon as possible, which contributes to an efficient breeding program. After the voluntary waiting period, the number of days a cow remains nonpregnant has been linked to lower profitability [1]. Early detection of cows that are unable to conceive can thus be used to increase the profitability of dairy-herd reproductive programs [2,3].
Different methods can be used for determining pregnancy status in dairy cattle. Return to estrus [4], rectal palpation of the reproductive tract [5,6], reproductive-tract ultrasound examination [2], milk-progesterone testing [7], and tests for pregnancy-associated glycoproteins (PAGs) in blood or milk [8,9] are examples of such approaches. Moreover, Doppler ultrasonography has begun to be used in research to estimate the functionality of the corpus luteum for early pregnancy diagnosis [10][11][12][13][14]. Currently, most veterinarians use transrectal ultrasonography to diagnose early pregnancy in cows. An ideal early pregnancy test would have high accuracy, high sensitivity and specificity, be inexpensive to perform, consist of a simple cow-side test (usable in field conditions), and establish pregnancy status quickly [15].
Depending on the veterinarian's competence, transrectal ultrasonography is a minimally invasive, accurate, and efficient approach for early pregnancy diagnosis that can be performed as early as day 25 after AI. Transrectal ultrasonography can also provide further information on ovarian structures, identify twins, and establish the viability of the fetus [2].
Pregnancy rates in Holstein cows have steadily declined over time, with a current value of around 35% [16][17][18]. To regulate reproduction in lactating dairy cows within commercial dairies, resynchronization systems to prepare nonpregnant cows for subsequent AI must be created and further assessed under these situations [2]. Some studies have reported the pregnancy rate and time to pregnancy when using various reproductive programs [19][20][21], but few have looked at the financial benefits of doing so [21,22]. The net present value of reinseminating nonpregnant Holstein cows after an early negative pregnancy diagnosis, 25 days after AI, has been reported in a few cases. Thus, the goal of this study was to determine the accuracy of ultrasonography in early pregnancy diagnosis (day 25 after AI) and to calculate the efficacy of two TAI programs for nonpregnant cows in terms of cumulative pregnancy-rate improvement and net present value (USD/cow/year).
Animals and Study Design
The research was performed on 300 Holstein cows artificially inseminated after 60 days in milk (DIM) and divided into three groups: one control (C group, n = 100) and two experimental (GPPG group, n = 100; PRID + GPPG group, n = 100). The cows were housed in free-stall barns with concrete floors covered with mattresses and fed a Total Mixed Ration (TMR) twice per day with ad libitum water access, according to the level of milk production and cow size. To keep the animals healthy, standard management practices, including a cooling system coupled with a weather station for hot months, were followed. During the study period, the farm milked about 780 cows three times a day at 0400, 1200, and 1900, with an average yield of 66 lb milk/cow/day. During the study, cows were balanced between pens by DIM and parity. Calving dates, breeding dates, and DIM were obtained from AfiMilk management software (AfiMilk, Kibbutz Afikim, Israel).
Estrus cows were identified from the AfiMilk (AfiMilk, Kibbutz Afikim, Israel) daily estrus report, and each one was examined by an experienced veterinarian. Estrus signs included attempts to mount other cows, chasing herd mates, restlessness, chin resting, sniffing of herd mates' vaginas, bellowing, and vulvar congestion, relaxation, and mucus discharge. The manifestation of standing estrus was considered a sign of true estrus.
The nonpregnant cows from the GPPG group were subjected to a synchronization protocol to induce ovulation for timed AI (TAI). The protocol started 25 days after the first AI (day 0 of the protocol) with 100 µg Gonadorelin (Gonavet Veyx, Veyx-Pharma GmbH, Schwarzenborn, Germany). On days 7 and 8 after GnRH, two doses of 500 µg Cloprostenol (PGF Veyx forte, Veyx-Pharma GmbH, Schwarzenborn, Germany) were administered, followed by 100 µg Gonadorelin (Gonavet Veyx, Veyx-Pharma GmbH, Schwarzenborn, Germany) on day 9 and TAI on day 10 (approximately 16 h later). The nonpregnant cows from the PRID + GPPG group were subjected to a similar synchronization TAI protocol, with the difference that an intravaginal progesterone-release device, PRID delta (Ceva Santé Animale, Loudeac, France), was used on days 0-7 of the protocol. The nonpregnant cows from the C group were not treated and were inseminated at the next spontaneous estrus.
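For clarity, the treatment calendar of this protocol can be laid out programmatically. The helper below is our own illustration, not part of the study's methods; the event days and doses are taken from the text above, while the function and date handling are assumptions for the sketch.

```python
from datetime import date, timedelta

def gppg_schedule(first_ai: date, with_prid: bool = False):
    """Day-by-day schedule of the modified OvSynch (GPPG) program
    started 25 days after the first AI; offsets are protocol days
    relative to day 0 (= first GnRH)."""
    day0 = first_ai + timedelta(days=25)
    events = [(0, "GnRH (100 ug Gonadorelin)"),
              (7, "PGF2a (500 ug Cloprostenol)"),
              (8, "PGF2a (500 ug Cloprostenol)"),
              (9, "GnRH (100 ug Gonadorelin)"),
              (10, "Timed AI (~16 h after GnRH)")]
    if with_prid:  # PRID + GPPG variant: device in on day 0, out on day 7
        events += [(0, "Insert PRID delta"), (7, "Remove PRID delta")]
    return sorted((day0 + timedelta(days=d), what) for d, what in events)

for when, what in gppg_schedule(date(2022, 3, 1), with_prid=True):
    print(when, what)
```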
One experienced veterinarian conducted all ultrasound examinations and hormone injections. The transrectal ultrasonography (iScan 2, Draminski S.A., Olsztyn, Poland) was performed 25 days after the first AI and repeated on day 32 for pregnancy confirmation (the reference test). Ultrasound scanning of the uterus and the ovaries was processed using a 3-7.5 MHz rectal convex probe (Draminski S.A., Olsztyn, Poland) for diagnosis and confirmation of pregnancy. The visualization of anechogenic fluid-filled uterine horn (embryonic vesicle more than 10 mm in diameter) in association with a corpus luteum on the ipsilateral uterine horn was used as a positive indicator of the pregnancy.
Economic Analysis
A 780-cow commercial dairy herd with a production of 26,000 lb milk/cow/year was simulated using the UW-DairyRepro$ decision-support tool [22], with the modifications described by Giordano et al. [23], to assess the economic impact of rebreeding nonpregnant cows with the above TAI programs. The reproductive program simulated for the first AI service was similar to the experiment (heat breeding), whereas the second reproductive-management programs compared in the current study used TAI programs initiated 25 days after the first AI in the nonpregnant cows from the experimental groups vs. AI at detected estrus in the C group. The following herd, economic, and reproductive parameters were included: average body weight (1600 lb), involuntary culling (28%/yr), mortality rate (4%), stillbirth (4.9%), milk price (16 USD/cwt), lactation feed cost (0.08 USD/lb DM), dry-period fixed cost (0.06 USD/lb DM), female calf value (USD 200), male calf value (USD 100), heifer replacement value (USD 1800), salvage value (0.526 USD/lb), adjusted voluntary waiting period (85 d in the C group, 87 and 86 d in the GPPG and PRID + GPPG groups), estrus cycle duration (29 days in all three groups), maximum days in milk for breeding (300 days), interbreeding interval for TAI service (35 days), pregnancy rate at first service (43% vs. 42% and 41%), pregnancy loss (5%), day in gestation of the first pregnancy check (25 days), day in gestation of the second pregnancy check (32 days), day in milk of the first injection for TAI service (112 d in the GPPG group and 111 d in the PRID + GPPG group), and estrus detection rate (53%). The cumulative pregnancy rates were set at 53% in the C group, 67% in the GPPG group, and 69% in the PRID + GPPG group. The reproductive-program cost used for the experimental groups included GnRH at 2.6 USD/dose, PGF at 2.6 USD/dose, PRID delta at 15 USD/unit, labor for hormone injections at 15 USD/h, and AI (including semen unit and labor) at 45 USD/AI. The pregnancy diagnosis cost was set at 100 USD/h. The model estimated net present value (NPV; USD/cow/year) differences for the reproductive programs consisting of rebreeding the nonpregnant cows following the TAI programs vs. AI at detected estrus.
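As a rough illustration of how such program comparisons are built up, a back-of-envelope partial budget might look like the sketch below. This is emphatically not a reproduction of the UW-DairyRepro$ model, which also values milk income over feed, replacement, and calves over time; the per-pregnancy value and the fraction of open cows treated are assumed figures.

```python
def partial_budget(n_cows, cum_preg_rate, base_rate, preg_value,
                   program_cost_per_open_cow, open_after_first=0.58):
    """Toy partial budget: value of the extra pregnancies gained by the
    TAI program minus hormone, labor and AI costs spent on cows found
    open at the day-25 check. preg_value (USD) is an assumed net value
    of one added pregnancy."""
    extra_pregnancies = n_cows * (cum_preg_rate - base_rate)
    treated = n_cows * open_after_first
    return extra_pregnancies * preg_value - treated * program_cost_per_open_cow

# GPPG vs. control for 100 cows, assuming ~275 USD net value per added
# pregnancy and ~58 USD program cost/open cow (2x GnRH + 2x PGF + labor + AI):
print(partial_budget(100, 0.67, 0.53, 275.0, 58.0))
```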
Statistical Analysis
The accuracy of the ultrasound examinations used for early pregnancy diagnosis (25 days after AI) was evaluated according to Broaddus. The pregnancy rate at first AI was defined as the percentage of cows that became pregnant after the first AI out of the total number of cows in the corresponding group. The cumulative pregnancy rate was defined as the percentage of cows that became pregnant after two rounds of AI, but not more than 35 days after the first AI, out of the total number of cows in the corresponding group.
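The standard 2x2 accuracy metrics for the day-25 diagnosis against the day-32 reference test can be computed as below. The cell counts are hypothetical placeholders, not the study's data.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard test metrics, taking the day-32 examination as the
    reference: tp/fn are pregnant cows called pregnant/open on day 25,
    tn/fp are open cows called open/pregnant on day 25."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for a 100-cow group:
print(diagnostic_accuracy(tp=40, fp=1, fn=2, tn=57))
```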
Data were analyzed by least-squares analysis of variance using GLM procedures of SAS (SAS Institute, Cary, NC, USA). The fixed effects in the model included group, parity (primiparous vs. multiparous), and their interaction. A chi-square model was used in the final analysis after the removal of nonsignificant interactions. The differences between the two groups were considered to be statistically significant when p < 0.05.
Results
Pregnancy diagnosis performed 25 days after AI by transrectal ultrasonography showed high accuracy in all three groups (Table 1).
Table 1. Results of the early pregnancy diagnosis, cumulative pregnancy rates, and net present value in the GPPG and PRID + GPPG groups compared with the C group.
Because the pregnancy rate after the first insemination was nearly identical in the experimental and control groups (42% and 41% vs. 43%), approximately 60 percent of the cows failed to conceive. The corpus luteum was found in a large proportion of the nonpregnant cows in all three groups, whereas ovarian follicles and cysts were detected in a small proportion (Table 1). When nonpregnant cows from the experimental groups were subjected to a synchronized TAI program, they had a higher cumulative pregnancy rate in the GPPG and PRID + GPPG groups (67% and 69%) than in the C group (53%, p < 0.05; Table 1).
The simulation used in this experiment, which included the improvement in the cumulative pregnancy rate achieved by rebreeding the nonpregnant cows through the TAI program, showed an NPV 89.6 USD/cow/year greater for the experimental groups than for the C group (Table 2).
Discussion
This study presents the NPV of two TAI programs for nonpregnant cows, which shorten the period between an unsuccessful AI and the following attempt.
The pregnancy detection by transrectal ultrasonography performed 25 days after AI yielded results comparable to those obtained by others [25,26]. This suggests that on day 25 after AI, transrectal ultrasonography can accurately identify nonpregnant cows eligible for TAI programs. Some studies [27,28] found low sensitivity on day 25 after AI, possibly due to difficulties in visualizing the gestational vesicle, ambiguities in interpreting the ultrasonography image [29], or the embryonic loss between days 25 and 45 after AI. Some errors can occur, and they can be costly: a false-positive diagnosis can result in the open cows not being inseminated on time, whereas a false-negative diagnosis can result in abortion when prostaglandin is administered. In our study, a small number of cows from both groups had false-negative pregnancy diagnoses due to very small embryonic vesicle dimensions (<10 mm) in the uterus. These cows were recorded as nonpregnant, but were not treated since the pregnancy diagnosis could not be certain. On day 32 after AI, the pregnancy was confirmed. Since the cost of an abortion is higher, rebreeding the cows in such ambiguous cases is not justified. When this reproductive management is applied, we believe that the accuracy of transrectal ultrasonography used for pregnancy diagnosis should be around 90-100%. Other authors suggested giving prostaglandin to cows found open on day 35 after AI for rectal palpation and 28 days for the ultrasound exam [30].
Early pregnancy detection is critical for reducing the calving interval by allowing the veterinarian to identify open cows and rebreed them. The pregnancy rate at first AI did not differ between groups in our study, and the findings are consistent with previous research [19,[31][32][33][34]. The majority of the authors agree that these low pregnancy rates are most likely the result of a combination of factors, including changes in physiology, nutritional management of the transition period, and the selection of traits that may have adverse effects on fertility [16,[35][36][37]. Furthermore, pregnancy rate failures in lactating dairy cattle can be attributed to the lack of a viable embryo due to oocyte quality and fertilization issues or the inability of the uterus to support blastocyst growth and conceptus elongation [24]. As a result of lower pregnancy rates, the dairy industry has made strides in developing estrus synchronization programs for both before and after pregnancy diagnosis.
An interesting aspect of our study is that corpus luteum was found in a high proportion of nonpregnant cows from both groups. Scully et al. [38] found no differences in corpus luteum echotexture measurements between pregnant and nonpregnant cows from day 18 to day 21. With a 90% fertilization rate, embryonic mortality between pregnancy and day 24 is approximately 40%; 70 to 80% of this loss occurs between day 8 and day 16 [39]. Ricci et al. [40] concluded that at least half of the nonpregnant cows who kept their corpus luteum until day 32 after AI were initially pregnant but lost the pregnancy early. However, our study did not evaluate the embryonic losses, because there are several risk factors associated with this, including environmental stress, disease, nutrition, luteal insufficiency, and ovulation of persistent follicles [41][42][43]. As a result, establishing a link between early pregnancy diagnosis and embryonic loss is extremely difficult. Unfortunately, there is little evidence of when an embryonic loss occurs, making it difficult to link it to the mechanism that causes it [44].
The most common measure when cows are found open at pregnancy diagnosis is to apply the treatment with prostaglandin, which results in luteolysis followed by a new estrus, typically about 3-4 days later [45]. Some authors recommended initiating the TAI program even before pregnancy diagnosis [46]. In our study, the TAI programs were initiated on the first day when the nonpregnancy was diagnosed. This reproductive management program, which included starting TAI programs for nonpregnant cows on the day of pregnancy diagnosis (day 25 after AI), resulted in a pregnancy rate of 43.1% (25/58) for the nonpregnant cows from the GPPG group and 47.5% (28/59) for the nonpregnant cows from the PRID + GPPG group, which is comparable to or better than the results obtained by other studies. Pereira et al. [47] obtained pregnancy rates of 26.4% and 20.1% when the TAI protocol was initiated 31 ± 3 days after AI for nonpregnant cows subjected to pregnancy diagnosis by ultrasonography and transrectal palpation, respectively. Moreira et al. [48] found similar results when using an estrus-synchronization protocol beginning 20 days after AI, with a reported pregnancy rate of 20% on day 45 of gestation. Fricke et al. [46] observed a pregnancy rate of 23%, 34%, and 38% for cows enrolled in the synchronization protocol at 19, 26, and 33 days after AI, respectively. El-Zarkouny et al. [49] reached a conception rate of 59.3%, while Bisinotto et al. [50] noted 51.3%, after a protocol with a gestagen insert but also with a double injection of PGF2α on day 7 of the OvSynch program. At the same time, there is controversy regarding the use of tools for early pregnancy diagnosis in combination with estrus-resynchronization programs. Injecting pregnant cows with GnRH 19 days after the first AI does not improve the calving rate compared to starting the Resynch program 26 or 33 days after the first AI [46]. Similar results were obtained when the Resynch program started on day 21 after timed AI (TAI) was used, to initiate a TAI protocol before pregnancy diagnosis [51]. Moreira et al. [48] halted the aggressive estrus-resynchronization program due to the high embryonic loss from day 20 to 27 in pregnant cows treated with GnRH on day 20 after the TAI compared to untreated pregnant cows. In our opinion, the TAI programs should begin on the day of early pregnancy diagnosis (day 25 after AI service) because this strategy can improve the profitability of dairy farms without risks.
The contribution to net value presents a breakdown of the income over feed cost, replacement cost, reproductive-program cost, and calf value. The benefit in favor of the TAI programs for the cows failing to conceive in this study most likely reflects the additional income over feed relative to the cost of the administered hormones. This benefit, in favor of the GPPG and PRID + GPPG groups, was assessed as a positive net present value for the farm, and its relationship with additional reproductive-management decisions should be considered. In another study, Pereira et al. [47] began resynchronization 31 ± 3 days after AI and obtained an NPV of 3.65 USD/cow when pregnancy diagnosis was performed on the same day by ultrasonography versus 38 ± 3 days after AI by transrectal palpation. However, the majority of studies using management practices aimed at reducing the interbreeding interval in cows have reported economic benefits, particularly for reproductive programs with lower pregnancy rates [21,22,52]. The contribution to the pregnancy rate and net present value could be generated without supplementing the TAI programs with an activity-monitoring system. Fricke et al. [53] assessed the efficacy of TAI with and without an activity-monitoring system at first service. They found that this combination reduces time to first service by 7.5 to 12.4 days while decreasing the conception rate by 8% when compared with TAI alone. When evaluating the net present value of TAI with or without an activity-monitoring system, they discovered that the difference is only 4.00-8.00 USD/cow/year. The use of an activity-monitoring system to inseminate cows based on increased activity reduced the days to first AI by increasing the AI service rate, whereas cows receiving 100% TAI after completing a TAI program had more P/AI [53]. This suggests that, depending on individual herd scenarios, multiple reproductive-management programs may be economically feasible [54]. Thus, in the cattle industry, a variety of strategies based on rebreeding nonpregnant cattle can be used to improve net present values. Because herds differ in reproductive performance [55], management decisions should be based on an economic analysis of observed reproductive outcomes specific to that farm [53].
Conclusions
In this study, we used transrectal ultrasonography for pregnancy detection as an accurate method of identifying the nonpregnant cows eligible for the synchronization TAI programs, 25 days after AI. It is possible to improve the cumulative pregnancy rate and the net present value by rebreeding the nonpregnant cows as soon as possible. Thus, rebreeding nonpregnant cows starting with day 25 after AI can reduce the time in which a cow becomes pregnant and increase the cumulative pregnancy rate (67% vs. 53%; 69% vs. 53%) and the profitability of dairy farms by around 89.6 USD/cow/year.
Fatal Attraction: How Bacterial Adhesins Affect Host Signaling and What We Can Learn from Them
The ability of bacterial species to colonize and infect host organisms is critically dependent upon their capacity to adhere to cellular surfaces of the host. Adherence to cell surfaces is known to be essential for the activation and delivery of certain virulence factors, but can also directly affect host cell signaling to aid bacterial spread and survival. In this review we will discuss the recent advances in the field of bacterial adhesion, how we are beginning to unravel the effects adhesins have on host cell signaling, and how these changes aid the bacteria in terms of their survival and evasion of immune responses. Finally, we will highlight how the exploitation of bacterial adhesins may provide new therapeutic avenues for the treatment of a wide range of bacterial infections.
Introduction
Bacteria are continually evolving mechanisms in order to successfully colonize and survive in many different environmental conditions. For some bacteria these adaptations have enabled them to thrive within the human body. Both pathogenic and commensal bacteria display a wide range of surface bound and secreted molecules that are able to aid their colonization of the host. Arguably, one of the most important characteristics of bacterial colonization is adhesion. Adhesion not only allows bacteria to colonize through simply sticking to host cell surfaces and thus generating a stable platform on which to grow, but is also required for the release of toxins and virulence factors that drive infection. How different bacterial populations use the multiple adhesins present on their surface (Table 1) and how they bind to specific cell receptors located in niche environments within the host can also influence the type of disease caused by a particular organism. The formation of biofilms, which are known to increase antibiotic resistance and reduce clearance from the host, is also highly dependent upon bacterial adhesion molecules. Further to this, adhesion of bacteria to host cell surfaces can affect not only bacterial cell signaling but also lead directly to changes in host cell signaling, enabling bacterial spread and evasion of host immune responses.
It is therefore clear that adhesion remains an integral feature throughout the course of bacterial infections. While the topic of bacterial adhesion and to some extent the effect this has on host cell signaling has been reviewed previously [1,2], in this review we aim to summarize the key points related to the different mechanisms of bacterial adhesion and highlight recent advances in the field, with an emphasis on the effects adhesion can have on host cell signaling and finally how these interactions may be exploited in terms of novel therapies for a broad range of bacterial infections, while avoiding off-target effects on the host.
Integrin and Fibronectin Binding Proteins
Integrins represent a highly conserved group of heterodimeric transmembrane glycoproteins that are essential for many cell-cell and cell-matrix interactions. The collagen binding integrins in particular have been shown to be conserved throughout the metazoan tree of life and form an essential component of multi-cellularity in animals [3][4][5]. Due to this wide spread presence throughout the animal kingdom and the fact that integrin signaling facilitates many essential cell signaling cascades, including those involved in cell adhesion and cytoskeletal organization, many bacterial species have evolved adhesion mechanisms that interact either directly or indirectly with host integrin receptors.
Fibronectin binding proteins (FnBPs) make up a diverse group of surface adhesins that bind to the extracellular matrix (ECM) protein fibronectin. As such, they are a subclass of a large family of bacterial adhesins referred to as microbial surface components recognizing adhesive matrix molecules, or, for short, MSCRAMMs [6]. In the case of the Gram-positive bacterium Staphylococcus aureus, this interaction with fibronectin within the ECM facilitates bacterial binding to the host cell surface by exploiting fibronectin's binding to the host cell integrin α5β1 (Figure 1). The binding of S. aureus FnBPA to integrin α5β1 via fibronectin bridging has been shown to facilitate bacterial uptake into host cells [7]. In addition, the streptococcal FnBP SfbI/F1 has also been shown to mediate invasion of epithelial cells [8,9]. Although the binding of FnBPs to fibronectin has been reported to be a strong interaction (~2.5 nN), possibly because a single FnBP can bind up to 9 fibronectin molecules [10,11], the importance of FnBPs during infection, when comparing wild-type and FnBP mutant strains in vivo, has been variable. It has been suggested that this may be due to the typically wide range of diseases caused by these organisms, and the presence of additional virulence factors may in some circumstances provide redundant roles [12]. However, a more recent study has demonstrated that FnBPs are essential for biofilm formation in S. aureus strain LAC, a methicillin-resistant clinical isolate [13]. Bacteria can also adhere to and internalize into host cells by direct interaction with integrins. The Yersinia protein invasin facilitates initial adhesion of the bacterium and binds with high affinity to β1-integrin receptors found on the surface of M cells [16]. However, following initial attachment and invasion, the expression of invasin is reduced and adhesion is maintained by the adhesins YadA and Ail, which mediate serum resistance and promote tight adherence to the ECM proteins fibronectin and collagen (Figure 1) [17,19]. The mechanism of invasin-induced internalization will be discussed below.
Chaperone-Usher Pili: P Pili and Type I Pili
Chaperone-usher (CU) pili are some of the most well-characterized bacterial adhesins. They form long proteinaceous strands made up of several subunits, which extend from the surface of many Gram-negative as well as some Gram-positive bacteria and can be divided into a "tip" and a helically wound "rod"-like domain [20,21]. Because certain pili can also be used for the transfer of DNA during conjugation, those that are used exclusively for adhesion to host cell surfaces are often referred to as fimbriae. The first fimbria to be described was the P-pilus, which is expressed under the control of the pap operon by uropathogenic E. coli (UPEC) and interacts with the α-D-galactopyranosyl-(1-4)-β-D-galactopyranoside moiety of glycolipids present on upper urinary tract cells via the tip adhesin subunit PapG (Figure 1). Variants of PapG can also recognize different but related Gal(α1-4)Gal receptors that are differentially distributed within the host as well as within populations, which is thought to drive tissue and host specificity [38]. The biogenesis of the P-pilus has been widely studied in molecular detail and is the archetype of chaperone-usher pilus formation. Individual unfolded subunits are transported into the periplasm by the general secretory pathway [39], where they first undergo disulphide bond formation by DsbA. The subunits are then further stabilized and transported by the chaperone PapD to the outer membrane usher PapC, which forms and extends the pilus, starting at the tip, via donor strand exchange [21].
Type I pili represent another class of heteropolymeric fimbriae present on the surface of pathogenic E. coli (UPEC and DAEC) and are encoded by the fim operon. Similar to the P-pilus, type I pili are formed through a CU pathway comprising FimC as the periplasmic chaperone and FimD as the outer membrane usher (Figure 1) [40]. The adhesive tip of the fimbria is formed by the FimH subunit, which binds mono- and tri-mannose containing glycoproteins. Structural and biophysical analyses of the type I and P-pili have demonstrated that binding of the tip adhesins to their respective ligands is via a catch bond (a bond whose strength is increased by a force such as shear stress) and that the regulation of binding strength can be controlled by uncoiling of the helically wound rod domain [41]. In addition, recent evidence has implicated FimH as a key factor influencing virulence: alteration of adhesin conformation by point mutations in FimH of Crohn's disease-associated adherent-invasive E. coli results in enhanced intestinal inflammation by an unknown mechanism [22].
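To illustrate the catch-bond behavior mentioned above, the sketch below evaluates the generic two-pathway (catch-slip) model often used to describe FimH-type bonds: a fast, force-inhibited "catch" unbinding pathway plus a slow, force-accelerated "slip" pathway. The rate constants and transition distances are arbitrary illustrative values, not published FimH fits.

```python
import numpy as np

kBT = 4.11  # thermal energy at room temperature, in pN*nm

def catch_slip_lifetime(force_pN, k_c=50.0, x_c=1.0, k_s=0.05, x_s=0.5):
    """Mean bond lifetime under force in the two-pathway model:
    off-rate = k_c*exp(-F*x_c/kBT) + k_s*exp(+F*x_s/kBT).
    k_c, k_s in 1/s; x_c, x_s in nm; all values illustrative."""
    catch = k_c * np.exp(-force_pN * x_c / kBT)  # suppressed by force
    slip = k_s * np.exp(force_pN * x_s / kBT)    # accelerated by force
    return 1.0 / (catch + slip)

for f in (0, 10, 20, 40, 80):  # applied force in pN
    print(f"F={f:3d} pN  lifetime={catch_slip_lifetime(f):8.2f} s")
```

With these parameters the lifetime is non-monotonic, peaking at intermediate force, which is the defining signature of a catch bond under shear.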
Type IV Pili
The type IV pili are another group of polymeric surface organelles and are among the most widespread, occurring in Gram-positive bacteria, Gram-negative bacteria and Archaea; they have been reviewed in depth elsewhere [26]. Unlike CU pili, the precise biogenesis and adhesion properties of type IV pili are still poorly understood, partly due to the large number of different proteins involved in pilus formation [42] and also the high functional diversity exhibited by many type IV pili, including adhesion, aggregation, DNA transfer, electron transfer and motility. Despite this, studies have determined that type IV pilus formation involves the translocation of pre-pilins across the inner membrane, where a pre-pilin peptidase recognizes and cleaves a conserved N-terminal type III signal sequence, thus forming a mature pilin subunit. Upon release from the inner membrane, the pilin subunit is assembled into a fiber in an ATPase-dependent manner along with several accessory proteins (Figure 1) [43]. In Neisseria meningitidis the ATPase PilF catalyzes the extension of the pilin fiber, and PilT is involved in the retraction of the pilus through the bacterial cell wall while the pilus remains bound to the target surface [44]. This interplay between elongation and retraction has been shown to depend on levels of PilT and force-mediated elongation, which can lead to altered interactions between the bacteria and host cells by increasing pilus tension [45]. More recent studies have also highlighted that the number of pili on the surface of N. meningitidis can alter the interaction with, and signaling of, host cells [46,47].
Adhesive Amyloids
Amyloids are insoluble polymeric protein fibril-like structures that share a common cross-β stacking of folded β-sheets. They were first recognized in human diseases such as Alzheimer's, Huntington's and the prion encephalopathies but have since been found to be extremely widespread in nature and display a broad range of functional diversity [24]. Curli are probably the best-described class of functional amyloids and are produced by enteric bacteria such as E. coli, Salmonella, Citrobacter, and Shewanella. Amyloid fibers have also been found in 5%-40% of species isolated from natural biofilms [48]. In E. coli two distinct operons are involved in curli formation, the csgBAC operon and the csgDEFG operon. The csgDEFG operon encodes the soluble transcription regulator CsgD as well as the chaperones CsgE and CsgF, which co-ordinate with CsgG to form a distinct secretion system. The secretion system then transports the curli subunits CsgA and CsgB to the cell surface, where CsgB nucleates CsgA into the highly stable fibril polymer (Figure 1). Recent structural evidence has highlighted that CsgG forms an un-gated, non-selective protein secretion channel and that, along with CsgE, it restricts the conformational space within the channel by forming an encaging complex. This caging generates an entropic free-energy gradient over the channel and allows for protein translocation across the membrane through an entropy-driven, diffusion-based mechanism [49]. The main role of amyloid fiber adhesion for most bacterial species is during biofilm formation, in which amyloids help to increase biofilm stability through interactions with host ECM proteins such as fibronectin and laminin and also enhance resistance to protease degradation. Mtp amyloid fibers from Mycobacterium tuberculosis have been shown to bind to laminin in the ECM and contribute to bacterial adhesion and colonization [27].
Autotransporters
The autotransporters are a diverse family of outer membrane and secreted proteins that are found in many Gram-negative bacteria and form monomeric or trimeric structures. In most cases they facilitate adhesion to host cell surfaces and the ECM, as well as bacterial aggregation and biofilm formation (Figure 1). All autotransporters share conserved structural features, including an N-terminal signal sequence that enables secretion of the protein across the inner membrane via the general secretory pathway, a conserved C-terminal translocation domain that inserts into the outer membrane, and a variable passenger domain that can either be free or anchored to the cell surface and determines the adhesive properties of the protein [50,51]. The first trimeric autotransporter to be described was YadA of Yersinia sp. [52]. YadA from different Yersinia species is thought to adhere to different ECM components [17]. Despite their widespread presence and central role in bacterial pathogenesis, the precise molecular mechanisms of action of many autotransporter proteins are still poorly defined. Recent evidence from the structure of Antigen 43, an autotransporter from uropathogenic E. coli, has highlighted a twisted L-shaped β-helical structure that is proposed to form a molecular "Velcro-like" mechanism of self-association facilitating bacterial clumping [25]. A study evaluating the binding interactions of Burkholderia cenocepacia trimeric autotransporters has revealed that the homophilic and heterophilic interactions formed by the autotransporter BCAM0224 are of low affinity. This weak adhesion may have biological significance, as during colonization of the lung a lower affinity would allow for dynamic interplay between adhesion and movement of the bacteria, thus allowing the pathogen to spread and bind to new sites [53].
Multivalent Adhesion Molecules
The multivalent adhesion molecules (MAMs) are a relatively recently described class of bacterial adhesins and participate in high-affinity binding during the early stages of infection of a wide range of Gram-negative bacteria [30]. MAMs consist of an N-terminal hydrophobic region, followed by either six (MAM6) or seven (MAM7) mammalian cell entry (MCE) domains (Figure 1). While MAM6 and MAM7 molecules are found exclusively in Gram-negative bacteria, single MCE domain-containing proteins are more widely conserved and, in addition to Gram-negative bacteria, are also found in Mycobacteria and some Gram-positive bacteria as well as algae and higher plants. The MCE domain was first described in Mycobacteria, where there are four separate operons encoding MCE proteins [29,54]. The vast majority of these are thought to play a role in lipid metabolism [55,56], but Mce1A has been shown to facilitate M. tuberculosis adhesion and internalization into non-phagocytic host cells [28,29]. Differences in Mce1A between M. tuberculosis and M. leprae have been suggested as a potential mechanism of tissue-specific infection of the two species [57]. As mentioned above, in Gram-negative bacteria the number of MCE domains within MAMs is highly conserved at six or seven, and it has previously been shown that six domains is the minimum number required for efficient binding to host cells [58]. Interestingly, recombinant MAMs with three to five MCE domains in tandem have been found to misfold or be highly unstable, which may explain why this domain configuration is not seen in nature; however, the molecular basis for this observation is still poorly understood. Secondary structure prediction reveals that MAMs are rich in β-strands connected by flexible loop regions, similar to FnBPs. Characterization of the binding interactions of Vibrio parahaemolyticus MAM7 has revealed that the host ligands for MAM7 adhesion are fibronectin and phosphatidic acid (PA) [30]. While many bacterial receptors have been found to bind fibronectin, this is the first bacterial adhesin shown to bind directly to lipid ligands within the host cell membrane. The binding to fibronectin was found to be of moderate affinity, with an equilibrium dissociation constant (KD) of 15 μM, whereas PA binding was much stronger, with a KD of 200 nM. A more recent study of this interaction has demonstrated that PA is essential for adhesion to host cells and that binding is mediated mainly by key basic residues in MCE-1, 2, 3 and 4, whereas fibronectin is dispensable and merely acts to increase the rate of host cell binding [58]. The interaction with fibronectin was found to require at least 5 MCE domains, and only a 30 kDa N-terminal fragment of fibronectin was needed to facilitate binding. Unfortunately, the molecular mechanism by which MAM proteins form protein-protein and protein-lipid interactions simultaneously, and the key residues involved, are still unknown.
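The practical consequence of the two reported affinities can be illustrated with simple one-site binding arithmetic; the ligand concentration chosen below is arbitrary.

```python
def fractional_occupancy(ligand_conc_uM, kd_uM):
    """Simple one-site binding: fraction of adhesin bound at a given
    free ligand concentration, theta = [L] / (Kd + [L])."""
    return ligand_conc_uM / (kd_uM + ligand_conc_uM)

# Compare the two reported MAM7 affinities at an arbitrary 1 uM free ligand:
print("fibronectin (Kd 15 uM):     ", round(fractional_occupancy(1.0, 15.0), 3))
print("phosphatidic acid (Kd 0.2 uM):", round(fractional_occupancy(1.0, 0.2), 3))
```

At equal ligand availability the 75-fold difference in KD translates into near-saturated PA binding (~0.83 occupancy) versus only ~0.06 for fibronectin, consistent with PA being the essential ligand and fibronectin a kinetic enhancer.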
Effect of Bacterial Adhesion on Host Cell Signaling
The ability to attach to host cell surfaces is evidently a key first step in colonization as this can reduce the ability of clearance from the host through shear stress, however, attachment alone is not enough to establish and maintain an infection. Bacteria have evolved mechanisms of manipulating the surrounding host environment and immune response to aid their spread and survival through alteration of host cell signaling. Whilst this ability in the later stages of infection can be attributed to a myriad of secreted effectors, depending on the bacteria and niche environment, there is accumulating evidence that at the initial stages of the infection many species are able to manipulate host cell signaling directly through the process of adhesion.
As mentioned previously, the integrin family of host cell surface receptors is a key target for adhesion of many bacterial species and normally regulates cell-cell and cell-ECM contacts through a wide range of intracellular signaling pathways. This central role of integrins in host cell structure and tissue integrity can be altered in different ways by a variety of bacteria, depending on the type of bacteria and the infection caused. The binding of S. aureus FnBPs to host cell β1 integrins via a fibronectin linkage leads to integrin clustering and recruitment of focal adhesion-like protein complexes, which include cell signaling molecules such as vinculin, paxillin, zyxin, tensin, FAK and c-Src. This results in downstream signaling and a re-organization of the actin cytoskeleton, facilitating invasion of host cells [3]. As well as effects upon the cytoskeleton, β1-integrin binding by the Yersinia enterocolitica invasin protein has been shown to be an early trigger for inflammasome activation and interleukin-18 (IL-18) production in intestinal epithelial cells (the main target cell for this pathogen), which suggests that in these circumstances β1-integrin may have evolved a second function as a pathogen recognition receptor. This initial invasin-triggered inflammation is later counteracted by the type III secretion system effector proteins YopE and YopH [59]. The type IV pilus adhesin CagL of Helicobacter pylori has recently been shown to induce gastrin production in gastric epithelial cells through adhesion to β5-integrin/integrin-linked kinase complexes and downstream signaling through the epidermal growth factor receptor (EGFR), Rapidly Accelerated Fibrosarcoma (Raf) kinase, mitogen-activated protein kinase kinase (MEK), extracellular signal-regulated kinase (ERK) pathway, thus increasing the acidity of the stomach, which can lead to gastric ulcer formation and gastric adenocarcinoma [31]. A second adhesin of H. pylori, the blood group antigen-binding adhesin BabA, which binds human Lewis (b) surface epitopes, has been shown to induce IL-8 production through adhesion-mediated activation of the type IV secretion system [32]. A separate study also found BabA adhesion to cause DNA double-strand breaks through an unknown mechanism, again highlighting this pathogen as a strong inducer of gastric inflammation and a carcinogen [32,60].
Although integrin binding is a common route for many pathogens to alter actin cytoskeletal organization, recent studies have highlighted alternative cell surface molecules that may also produce downstream effects on the cytoskeleton. Phosphatidic acids make up between 1% and 4% of a cell's phospholipid content, are key precursors for other phospholipids, regulate membrane curvature and can affect a broad range of signaling molecules [61][62][63][64][65]. Clustering of the V. parahaemolyticus adhesin MAM7 at the host cell surface upon binding to phosphatidic acid has recently been shown to mediate activation of the small GTPase RhoA. RhoA activation leads to actin rearrangements, resulting in the redistribution of tight junction proteins and disruption of epithelial integrity. This disruption of the epithelial barrier allows V. parahaemolyticus to translocate across polarized epithelial layers [66].
Bacterial adhesins can also elicit immune responses in host tissue; for example, the CsgA curlin subunit of Enterobacteriaceae binds to and activates Toll-like receptor 2 signaling in host cells, leading to increased inflammation [67].
The Potential of Adhesion Inhibition as Novel Infection Intervention
The widespread rise of antibiotic resistance in many clinically significant pathogens is a serious threat to global health, and new methods to combat infections need to be developed urgently. Ideally, new therapies will target virulence factors associated with bacterial colonization rather than immediate survival, thus allowing infection attenuation and natural clearance. This targeting strategy may apply less selective pressure on the bacterium and would conceivably reduce the number of antibiotic-resistant strains emerging. As this review has highlighted, adhesion plays an early and integral part in bacterial colonization and survival within the host and as such has been a target for many anti-infection studies, especially against the background of antibiotic-resistant strains. The idea of anti-adhesion therapy is not new and has been reviewed previously [68,69], with the first deliberate attempt being to block FimH adhesion to mannose-containing host cell receptors using mannoside derivatives [70]. However, despite their obvious appeal, anti-adhesion therapies are still not in mainstream use for the treatment of bacterial infections. One reason for this is that bacteria possess multiple adhesion molecules that are expressed in a time- and tissue-specific manner during the course of an infection, and this redundancy presents a real challenge for anyone developing an anti-adhesion therapy. A possible way to counteract this would be to use a cocktail of inhibitors that target multiple adhesion molecules and/or to use these inhibitors alongside traditional antibiotic therapy. Another challenge in the field of anti-adhesion therapy is the design of high affinity inhibitors that can effectively outcompete and remove adherent bacteria from the cell surface while avoiding interference with endogenous host signaling pathways. This will require a deeper understanding of the features within bacterial adhesins required for surface attachment and activation of signaling pathways, which will inform work on uncoupling these two functions and on designing inhibitors that specifically outcompete bacterial pathogens while avoiding off-target effects. Further structural insight into specific adhesin-host interactions, along with the design of multivalent display systems, will undoubtedly be needed for the development of new anti-adhesion therapies. However, recent studies are beginning to demonstrate the feasibility of anti-adhesion therapy. Uropathogenic E. coli O25b:H4-ST131, a multi-drug resistant strain that causes recurrent urinary tract infections with limited treatment options, has been shown to be susceptible to the small-molecule FimH inhibitor 4′-(α-D-mannopyranosyloxy)-N,3′-dimethylbiphenyl-3-carboxamide, which reduces colonization of the bladder in murine models of urinary tract infections (UTI) even upon treatment of established infections [71]. While FimH antagonists may be limited to the treatment of E. coli infection, anti-adhesion therapy targeting the adhesion of bacterial MAMs to host cells may lead to therapies with broader efficacy. Recombinant MAM7 from V. parahaemolyticus coupled to polymer beads has been shown to inhibit bacterial adhesion in a wide range of Gram-negative infections, including antibiotic-resistant strains isolated from the wounds of injured military personnel [72,73].
Summary
Recent advances have further highlighted the prospect of targeting bacterial adhesion as a viable method to treat a broad range of bacterial infections, and with the rise of multidrug-resistant bacteria presenting an ever-increasing problem, the need for the development of novel therapies is of the utmost importance. Although the molecular mechanisms of many bacterial adhesins are known, new adhesin classes have been found in recent years for which more work is still needed to define their molecular interactions. We note that interactions between adhesins and carbohydrate-based host cell ligands in particular, while abundantly represented in nature, are still not well understood in terms of their effect on host cellular signaling. With new advances in the application of chemical biology approaches to the study of bacterial adhesion, it has become increasingly clear that in many cases the function of bacterial adhesins extends beyond physical attachment and has a direct impact on early signaling events during host-pathogen interactions, and may thus facilitate bacterial colonization and spread. This information needs to be further utilized to develop more efficient therapies that target bacterial adhesion while avoiding off-target effects on the host.
|
2018-04-03T01:56:40.584Z
|
2015-01-23T00:00:00.000
|
{
"year": 2015,
"sha1": "c9e680bbd4e3188fbeec3b306170f380ba415446",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/16/2/2626/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd6e1cdd47feda60d5ed40c82cdcc23bba07d3ef",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
218085345
|
pes2o/s2orc
|
v3-fos-license
|
Prevalence and socio-demographic correlates of psychological health problems in Chinese adolescents during the outbreak of COVID-19
Psychological health problems, especially emotional disorders, are common among adolescents. The epidemiology of emotional disorders is greatly influenced by stressful events. This study sought to assess the prevalence rate and socio-demographic correlates of depressive and anxiety symptoms among Chinese adolescents affected by the outbreak of COVID-19. We conducted a cross-sectional study among Chinese students aged 12-18 years during the COVID-19 epidemic period, using an online survey for rapid assessment. A total of 8079 participants were involved in the study. The online survey was used to collect demographic data, assess students' awareness of COVID-19, and assess depressive and anxiety symptoms with the Patient Health Questionnaire (PHQ-9) and the Generalized Anxiety Disorder (GAD-7) questionnaire, respectively. The prevalence of depressive symptoms, anxiety symptoms, and a combination of depressive and anxiety symptoms was 43.7%, 37.4%, and 31.3%, respectively, among Chinese high school students during the COVID-19 outbreak. Multivariable logistic regression analysis revealed that female gender was associated with a higher risk of depressive and anxiety symptoms. In terms of grade, senior high school was a risk factor for depressive and anxiety symptoms; the higher the grade, the greater the prevalence of depressive and anxiety symptoms. Our findings show there is a high prevalence of psychological health problems among adolescents, which are negatively associated with the level of awareness of COVID-19. These findings suggest that the government needs to pay more attention to psychological health among adolescents while combating COVID-19. Electronic supplementary material The online version of this article (10.1007/s00787-020-01541-4) contains supplementary material, which is available to authorized users.
Introduction
The 2019 novel coronavirus disease (COVID-19) first broke out in Wuhan, Hubei Province, China, on 31 December 2019, and it was later declared an international public health emergency by the World Health Organization (WHO) [1]. The novel coronavirus disease has spread to 201 countries/territories outside of China and infected 634,835 patients globally [2] (81,470 in China [3]) as of March 29, 2020. It is worrying that so many people and countries have been affected so quickly. The outbreak of COVID-19 has caused mental health problems among the public and health care workers in China [4,5]; for instance, it has caused public panic and mental health stress [4]. The increasing numbers of confirmed cases and of outbreak-affected provinces and countries have led to public fears of becoming infected. In particular, adolescents are a vulnerable group presenting with increasingly complex issues [6].
Mental health is considered to be the most essential condition for a good quality of life. Adolescents with good mental health can carry their happiness and self-confidence into adulthood, providing the ability to cope with adversity [7]. Mental health disorders account for 16% of the global burden of disease and injury among people aged 10-19 years [8]. It is estimated that 10-20% of children and adolescents throughout the world are troubled by mental health problems [9]. Globally, depression is the fourth leading cause of disease and disability among adolescents aged 15-19 years, and the 15th for those aged 10-14 years. Meanwhile, anxiety is the ninth leading cause of disease and disability for adolescents aged 15-19 years and the sixth for those aged 10-14 years [10]. Studies have reported high detection rates of mental problems in Chinese children and adolescents, ranging from 10.7 to 27.6% [11][12][13][14][15]. Various emotional or behavioral problems affect at least 30 million Chinese children and adolescents under 17 years of age [16]. Further studies showed that the incidence of behavioral and emotional problems was 17.6% among Chinese school children and adolescents aged 6-16 years [17]. The mental health problems of adolescents include conduct disorders, emotional disorders, self-harm, eating disorders, and hyperkinetic disorders [10,[17][18][19]. Additionally, there is increasing evidence that the prevalence of adolescent emotional disorders is rising [20]. For example, the 12-month prevalence of major depressive episodes in adolescents increased from 8.7% in 2005 to 11.3% in 2014 in the United States [21].
Stressful events are potent adverse environmental factors that can predispose individuals to psychiatric disorders, in particular depression [22][23][24]. In addition, studies have shown that during an epidemic outbreak, the public experiences negative emotional responses, such as anxiety and depressive symptoms [25,26]. Current studies have shown that COVID-19 has caused moderate-to-severe symptoms of anxiety and depression in about one-third of Chinese adults [27]. The National Health Commission has released guidelines to promote psychological crisis intervention for patients, people under medical observation, medical workers, and civilians during the COVID-19 outbreak [28]. However, the occurrence and distribution of depressive and anxiety symptoms in adolescents remain unclear. Therefore, it is necessary to rapidly assess depressive and anxiety symptoms related to emergencies among civilians, especially adolescents [29].
The objective of the current study was to assess the prevalence of two specific mental symptoms, anxiety and depression, and their socio-demographic correlates among adolescents in the Chinese population during the COVID-19 outbreak.
Design and subjects
We conducted this cross-sectional study using an online survey to assess mental health problems from March 8 to March 15, 2020. Junior and senior high school students in China aged 12-18 years were invited to participate in the online survey through the Wenjuanxing platform (https://www.wjx.cn/app/survey.aspx). In total, 8140 participants took part in the survey. After removing the data of participants with incomplete questionnaires, 8079 participants from 21 provinces and autonomous regions were included in the analysis. These regions can represent the overall conditions of China. The infection rate of COVID-19 did not differ significantly among regions other than Hubei, so we divided the participants into those from Hubei province and those from other regions. The province of Hubei has a population of 59,270,000 and includes the city of Wuhan; Hubei province incurred the highest rates of infection and death in China.
Approval for the study was obtained from the Ethics Committee of Beijing HuiLongGuan Hospital. All the participants provided online informed consent to participate in the study.
Assessment tools and procedure
A data collection sheet was designed to collect basic socio-demographic information and students' awareness of COVID-19 (COVID-19 knowledge, prevention and control measures, and projections of the COVID-19 trend), and two specific mental symptoms, depressive and anxiety symptoms, were assessed through the online survey. The questions about the awareness of COVID-19 asked participants to select responses from a self-made questionnaire. For the first question, respondents were asked about their familiarity with information about the prevention and control of COVID-19, with responses ranging from 1 ("very unfamiliar") to 10 ("very familiar"). In the second question, respondents were asked whether they had taken all the optional prevention and control measures against COVID-19 to avoid infection, with responses ranging from 1 ("very consistent") to 10 ("very inconsistent"). In the third question, respondents were asked about their attitudes towards the projections of the COVID-19 trend, ranging from 1 ("very pessimistic") to 10 ("very optimistic").
Depressive symptoms were assessed by the Patient Health Questionnaire (PHQ-9) [30][31][32][33], which consists of nine items. The PHQ-9 is a simple, highly effective self-assessment tool for depression. Participants are asked to report the presence of nine problems, including depressed mood and diminished interest, over the last 2 weeks on a 4-point scale ranging from "nearly every day" (3 points) to "not at all" (0 points) [31,33]. The scores for symptom severity were 5-9 for mild, 10-14 for moderate, 15-19 for moderately severe, and 20-27 for severe. The PHQ-9 has good internal consistency, with a Cronbach's alpha coefficient between 0.80 and 0.90 [30][31][32][33]. Reliability and validity in the general population, as well as in patients with mental disorders, have been demonstrated [34,35]. The PHQ-9 has been widely used to assess depressive symptoms in adolescents [36,37].
Anxiety symptoms were assessed by the Chinese version of the Generalized Anxiety Disorder scale (GAD-7) [38,39], which measures seven symptoms. Participants are asked how often they were bothered by each symptom during the last 2 weeks. The response options are "not at all," "several days," "more than half the days," and "nearly every day," scored as 0, 1, 2, and 3, respectively. The scores for symptom severity were 5-9 for mild, 10-14 for moderate, and 15-21 for severe [38,39]. Good test-retest reliability and validity of the GAD-7 have been confirmed in Chinese people [40]. The Cronbach's alpha is between 0.90 and 0.92 [38,41]. The scale has been used in many studies to assess anxiety symptoms in adolescents [36,37,42].
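Because both instruments are scored by summing item responses (0-3 per item) and banding the total, the cutoffs quoted above translate directly into a simple classification rule. The sketch below merely encodes the published bands described in the text; it is an illustration of the scoring logic, not code used by the study.

```python
# Severity banding for PHQ-9 (total 0-27) and GAD-7 (total 0-21),
# using the cutoffs quoted in the text. Illustrative sketch only.

def phq9_severity(total: int) -> str:
    assert 0 <= total <= 27, "PHQ-9 totals range from 0 to 27"
    if total <= 4:
        return "none/minimal"
    if total <= 9:
        return "mild"
    if total <= 14:
        return "moderate"
    if total <= 19:
        return "moderately severe"
    return "severe"

def gad7_severity(total: int) -> str:
    assert 0 <= total <= 21, "GAD-7 totals range from 0 to 21"
    if total <= 4:
        return "none/minimal"
    if total <= 9:
        return "mild"
    if total <= 14:
        return "moderate"
    return "severe"

# Example: sum nine 0-3 item responses, then band the total.
items = [1, 2, 1, 0, 3, 1, 2, 0, 1]   # hypothetical answers, total = 11
print(phq9_severity(sum(items)))       # -> "moderate"
```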
Statistical analysis
The dataset was analyzed using SPSS version 24.0 (IBM SPSS, IBM Corp., Armonk, NY, USA). For demographic data, chi-squared tests were used to analyze categorical variables. The scores for COVID-19 knowledge, prevention and control measures, and projections of the COVID-19 trend fit a normal distribution, so we used an independent-samples t test to compare the groups with and without depressive symptoms; the same method was used to compare the groups with and without anxiety symptoms. Logistic regression was used to analyze the predictors of depressive and anxiety symptoms, with the presence versus absence of depressive symptoms and of anxiety symptoms as dichotomous dependent variables. The level of significance was set at 0.05 (two-sided).
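For readers who wish to reproduce this kind of analysis, a minimal logistic regression on a dichotomized symptom indicator might look like the sketch below. The data frame and column names are hypothetical placeholders, and the study itself used SPSS rather than Python; this is only an outline of the modeling step.

```python
# Hypothetical sketch of the multivariable logistic regression step.
# Column names (depressive, gender, grade, region) are placeholders;
# the study used SPSS 24.0, and this synthetic data is for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "depressive": rng.binomial(1, 0.44, 500),      # 1 = PHQ-9 >= 5
    "gender": rng.choice(["male", "female"], 500),
    "grade": rng.choice(["junior", "senior"], 500),
    "region": rng.choice(["urban", "rural"], 500),
})

model = smf.logit("depressive ~ C(gender) + C(grade) + C(region)", data=df).fit()
odds_ratios = np.exp(model.params)   # exponentiated coefficients = ORs
ci = np.exp(model.conf_int())        # 95% CIs on the OR scale
print(pd.concat([odds_ratios, ci], axis=1))
```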
Results
A total of 8140 junior and senior high school students (12-18 years old, median = 16) were invited to participate in the online survey; 8079 fulfilled the study inclusion criteria and completed the assessments, giving a response rate of 99.3%. Table 1 shows the socio-demographic characteristics and their associations with depressive and anxiety symptoms. Our results showed differences in depressive and anxiety symptoms among students from different regions. Univariate analysis found that the proportion of depressive symptoms among students in cities was lower than that in rural areas (37.7% versus 47.5%), as was the proportion of anxiety symptoms (32.5% versus 40.4%). The proportion of male students with depressive and anxiety symptoms was lower than that of female students (41.7% versus 45.5%; 36.2% versus 38.4%). Depressive and anxiety symptoms differed between grades: with increasing grade (from junior grade one to three and from senior grade one to three), the proportion of students with depressive and anxiety symptoms increased. Table 2 shows the proportion of students with different levels of depressive and anxiety symptoms. Mild and moderate depressive and anxiety symptoms were most common. The rate of mild depression was 26.4%, while that of moderate depression was 10.1%; the corresponding rates for mild and moderate anxiety are given in Table 2. Table 3 presents the relationship between COVID-19 cognition and depressive and anxiety symptoms. The scores for COVID-19 knowledge, prevention and control measures, and projections of the COVID-19 trend were higher among students without depressive and anxiety symptoms than among students with depressive and anxiety symptoms. Table 4 presents the results of multivariable logistic regression analysis. In the multivariable model, female gender was associated with a higher risk of depressive and anxiety symptoms (OR for depression = 1.15, 95% CI 1.05-1.26; OR for anxiety = 1.10, 95% CI 1.001-1.21). With regard to provinces, we found that Hubei province was a risk factor for depressive and anxiety symptoms.
Discussion
In this large-scale, cross-sectional epidemiological study, the prevalence of depressive and anxiety symptoms in middle and high school students in China was 43.7% and 37.4%, respectively. In addition, the prevalence of comorbid depressive and anxiety symptoms among the students was 31.3%. The prevalence of depressive symptoms was higher than the figures found in Sweden (8.8%) and Japan (14.9%) in the absence of epidemics [43]. In China, adolescents had a higher incidence of depressive symptoms during COVID-19 than adults [27]. The prevalence of depressive symptoms is significantly influenced by sociocultural and economic contexts [44,45]; therefore, it needs to be assessed in different countries and regions. According to a pre-COVID-19 meta-analysis, the general prevalence of depressive symptoms among Chinese children and adolescents was 15.4% [46]. The reported prevalence of anxiety disorder varied widely in previous studies. The lowest rate reported was 2.6% in American 11-year-olds [47], and the highest was 41.2% in Japanese 7-9-year-olds [48]. One study found that the prevalence of anxiety disorders among Chinese children and adolescents was 6.06% [49]. Studies have shown that the incidence of anxiety symptoms among Chinese high school students ranges from 13.7 to 24.5% [50,51]. Our current results were similar to the levels of public anxiety reported at the peak phase of the H1N1 pandemic [52]. Clearly, the prevalence of depressive and anxiety symptoms in adolescents was higher than in the general population in China in the early and peak periods of COVID-19. Thus, although the infection rate of COVID-19 in China was leveling off during our survey [53] (the infection rate was 11.4 × 10⁻⁴ in Hubei province and 0.03 × 10⁻⁴ to 0.2 × 10⁻⁴ in other provinces [54]), the rate of depressive and anxiety symptoms among adolescents was still high. This is a warning that we should not ignore the psychological health problems of young people just because the epidemic has eased. In follow-up work, we should continue to monitor changes in depressive and anxiety symptoms in these adolescents. Both genetic factors and external environmental factors (stressful life events in particular) are considered to be involved in the onset of depression [22][23][24]. In the face of stressful events, anxiety and depressive symptoms can develop in anyone; as a sensitive group, adolescents are particularly worthy of attention.
As COVID-19 spreads widely outside China, our findings provide important guidance for the development of psychological support strategies in China and other affected areas. At present, the epidemic has been well controlled in China, but it is still spreading outside China [2]. Thus, health care systems and the public must be well prepared for medical treatment and psychological issues [55]. Our findings have clinical and policy implications. First, health authorities need to identify high-risk groups according to socio-demographic information to carry out early psychological intervention. Our socio-demographic data show that female students suffered a greater psychological impact, with higher levels of stress, anxiety, and depressive symptoms, during the COVID-19 outbreak. This finding is consistent with previous epidemiological studies that found that women were at a higher risk of depression [56]. A similar study among students found that female students were more likely to be anxious [57]. Furthermore, we found that the proportion of anxiety and depressive symptoms among students living in rural areas was significantly higher than in urban areas, which is closely related to their poorer economic situation. This is also consistent with previous studies, some of which found that the rate of emotional disorder is nearly twice as high among the poor as among the wealthy [44,58]. Additionally, senior grade three students had higher levels of anxiety and depression than senior grade one students and the highest rates among all the students. As senior grade three students face the most important test of their lives (the college entrance examination), the COVID-19 outbreak disrupts their normal pace of learning, leading to increased pressure; it can be inferred that academic pressure places additional strain on students [59,60]. Junior grade three students, who face high school entrance examinations, have the same problem: in China, students in junior grade three must have good grades to get into a good high school, and they also had more anxiety and depressive symptoms. Moreover, our findings suggest that the level of knowledge of and the prevention and control measures for COVID-19 may have protective psychological effects in the early stages of the epidemic. It can be seen that strengthening the dissemination of COVID-19 knowledge and the precautionary measures adopted to prevent the spread of COVID-19 can reduce the anxiety and depression levels of the public. This is consistent with previous studies finding that wearing a mask and practicing hand hygiene reduced the levels of anxiety and depression during the epidemic [27]. However, press/media coverage can also adversely affect anxiety and depressive symptoms; false information and false reports about COVID-19 can aggravate anxiety and depressive symptoms in the general public [61]. The latest and most accurate information, such as the number of people who have recovered and the progress of medicines and vaccines, can reduce anxiety levels [27]. Therefore, the government and health authorities should provide accurate information on the epidemic situation, refute rumors in time, and reduce the impact of rumors on the public's emotional state. Strengthening prevention and control measures can not only block the spread of disease but also provide a sense of security, thus bringing potential psychological benefits.
Therefore, governments and health authorities should ensure that infrastructure is in place to produce and provide adequate quantities of masks, hand sanitizer, and other personal hygiene products during the COVID-19 epidemic. A positive and optimistic attitude towards the development of the COVID-19 epidemic was also a protective factor against depressive and anxiety symptoms. The epidemiology of infection rates and deaths likely affects depressive and anxiety symptoms: during the H1N1 pandemic, public anxiety was at its worst at the height of the epidemic and declined as the epidemic eased [25,26].
Due to the outbreak of COVID-19, major cities in China shut down schools at all levels indefinitely. Although education authorities have developed online portals and web-based applications to provide lectures and other teaching activities, the uncertainty and potential negative effects on academic development will have adverse effects on students' psychological health. In addition, students are required to report their daily health status and limit their travel, effectively isolating them at home, which can lead to anxiety and depression. Because students are currently taking online classes at home and are receptive to smartphone apps [62], health authorities can consider providing online or smartphone-based psychological interventions. Online platforms can also provide psychological support for students who stay at home most of the time during the epidemic and can reduce the risk of virus transmission due to face-to-face contact. In addition, when conducting online teaching, teachers should also pay attention to the assessment of students' anxiety and depressive symptoms, communicating with their parents in a timely manner so as to implement effective intervention.
This study has two limitations. First, the sample was a non-probability sample of voluntary participants, which could have led to underestimation of the prevalence of anxiety and depression. In some areas with a severe epidemic situation, anxiety and depression rates may be higher; moreover, because of the influence of anxiety and depression, affected students may have been unwilling to participate in the questionnaire survey [63], so the responding population may be biased. Second, because the online questionnaire was a self-report evaluation, the indicated levels of anxiety and depression may not always be consistent with the evaluation of mental health professionals.
In conclusion, our results show there is a high prevalence of psychological health problems among adolescents, which is negatively associated with the level of knowledge about, and the prevention and control measures for, COVID-19. These findings suggest that the government needs to pay more attention to psychological health among adolescents while combating COVID-19. Fortunately, the Chinese government has provided psychological health services through various channels, including hotlines, online consultation, and outpatient consultation [28], but more attention should be paid to depression and anxiety, especially among adolescents.
|
2020-05-03T13:53:19.026Z
|
2020-05-03T00:00:00.000
|
{
"year": 2020,
"sha1": "6f5b610debdc77bd37f06ce4280b8704426299af",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00787-020-01541-4.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "6f5b610debdc77bd37f06ce4280b8704426299af",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
79969559
|
pes2o/s2orc
|
v3-fos-license
|
Postoperative Septal Abscesses According to the Techniques of the Septoplasty
Background and Objectives: Recently, the swinging door and grafting techniques have been heavily used for straightening and holding the caudal septum. However, reconstructive septoplasties require more extensive dissection of septal structures. Extensive anatomical dissection and complicated procedures may affect the probability of postoperative bleeding and infection. Materials and Methods: We retrospectively reviewed the records of 141 consecutive patients who underwent septal surgeries from February 2013 to December 2015. The patients were classified into two groups according to surgical technique: those who underwent submucous resection with or without endoscopy were classified as the "resection" group, while those who underwent the swinging door or batten graft technique were classified as the "reconstruction" group. The resection and reconstruction groups were matched using the propensity score. The incidence of postoperative septal abscesses (PSAs) was analyzed between the two groups. Results: The two groups were matched 1:1 (36 patients each) using the propensity score. Of the 72 patients, PSAs developed in 5 patients (6.9%). One patient was in the resection group (2.8%), while the other four patients were in the reconstruction group (11.1%). However, the incidence of PSAs was not significantly higher in the reconstruction group according to Fisher's exact test (p=0.164). Conclusion: Reconstructive septoplasty resulted in more septal abscesses than resection, but the difference was not significant.
INTRODUCTION
Septoplasty is one of the most common surgeries in otolaryngology. Submucous resection (SMR) is still a prevalent technique for resecting a deflected segment of the septum. With the introduction of endoscopy, more precise SMR is possible under superior vision. 1) However, not all septal deviations can be corrected with this resection technique. A reconstruction technique must be used when correcting the framework of the cartilaginous septum, where approximately 1 cm of the dorsal and 1 cm of the caudal septum, termed the "L-strut," must be preserved. Resecting the L-strut area of the septum may sacrifice the stability of the nose. Recently, swinging door and grafting techniques have been prevalently used to straighten and hold the caudal septum. 2)3) However, these reconstructive septoplasties require more extensive dissection of septal structures. Bilateral mucosal elevation is a necessary procedure in reconstructive septoplasty, but it was avoided in the past for fear of reducing the survival of the cartilaginous septum.
Septal infections as a postoperative complication are not commonly reported, and they usually start from a preceding septal hematoma. 4) Bleeding entrapped between the mucosal flap and septal cartilage can soon become infected and hinder blood supply to the cartilage, causing nasal obstruction, swelling of the septum, pain, and fever. If a septal abscess is not properly treated, septal perforation, cavernous sinus thrombophlebitis, and saddle nose deformity can occur. 4)5) Extensive anatomical dissection and complicated procedures may affect the chance of postoperative bleeding and the infection rate. Therefore, this study aims to compare the incidence of septal infection between resection and reconstructive septoplasty techniques.
MATERIALS AND METHODS
Retrospectively, 148 consecutive patients were recruited who underwent septal surgeries from February 2013 through December 2015 at the Eulji Medical Center, Daejeon, Korea, by a single surgeon (M. S. Choi). This study was reviewed and approved by the Institutional Review Board of the Eulji Medical Center (EMC 2015-10-002). In total, 141 of the 148 patients who were followed up for more than 3 months were included in the study. They were classified into two groups by surgical technique: SMR with or without endoscopy was classified as the "resection" group, and the swinging door or batten graft technique was classified as the "reconstruction" group. Medical records were reviewed retrospectively, and the incidence of postoperative septal abscesses (PSAs) was analyzed between the groups. In addition, the clinical records of the patients who developed PSAs were analyzed.
Propensity scoring and match
In order to control for confounding factors such as age, sex, comorbidity (diabetes, hypertension), allergy, revision surgery, combined rhinoplasty, and operating time, logistic regression between the two groups was performed to obtain propensity scores. The two groups were matched at a one-to-one ratio by similar propensity scores. The resection and reconstruction groups each comprised 36 patients.
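As a rough illustration of this step (the analysis was run in SPSS, and no code accompanies the paper), propensity scores can be estimated with a logistic regression on the listed covariates and then matched greedily one-to-one by nearest score. The data frame and column names below are hypothetical placeholders, not the study's data.

```python
# Hypothetical sketch of 1:1 propensity-score matching on the listed
# covariates. Synthetic data; the study itself used SPSS.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 141
df = pd.DataFrame({
    "reconstruction": rng.integers(0, 2, n),  # 1 = reconstruction group
    "age": rng.integers(17, 79, n),
    "male": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "allergy": rng.integers(0, 2, n),
    "revision": rng.integers(0, 2, n),
    "rhinoplasty": rng.integers(0, 2, n),
    "op_time_min": rng.integers(30, 181, n),
})

covariates = ["age", "male", "diabetes", "hypertension",
              "allergy", "revision", "rhinoplasty", "op_time_min"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["reconstruction"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# Greedy nearest-neighbour 1:1 matching on the propensity score,
# without replacement.
treated = df[df["reconstruction"] == 1]
controls = df[df["reconstruction"] == 0].copy()
pairs = []
for idx, row in treated.iterrows():
    if controls.empty:
        break
    j = (controls["ps"] - row["ps"]).abs().idxmin()  # closest unused control
    pairs.append((idx, j))
    controls = controls.drop(j)
print(f"{len(pairs)} matched pairs")
```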
Statistical analysis
The statistical analysis was performed using SPSS 12.0 (SPSS, Inc., Chicago, IL, USA). The incidence of PSAs was compared between the two groups using Fisher's exact test. The propensity score match was validated using a paired t-test. A value of p < 0.05 was accepted as statistically significant.
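For the group comparison itself, Fisher's exact test on the 2×2 table of abscess counts is a one-liner in scipy. The counts below are taken from the Results (1 of 36 resection patients versus 4 of 36 reconstruction patients developed PSAs); note that the p-value computed from this naive table need not match the published 0.164 exactly, since the paper does not spell out the precise test configuration.

```python
# Fisher's exact test on the matched-group abscess counts reported in
# the Results (1/36 resection vs. 4/36 reconstruction). The published
# p-value (0.164) may reflect a different test configuration.
from scipy.stats import fisher_exact

table = [[1, 35],   # resection:      abscess, no abscess
         [4, 32]]   # reconstruction: abscess, no abscess
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.3f}")
```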
Brief surgical techniques
Prior to all septal surgeries, both nostrils were disinfected with betadine-soaked cotton, and both nasal cavities were irrigated with a diluted betadine solution (approximately 100-200 cc). Immediately after surgery and daily thereafter, 3 g of cephazedone (Kukje, Inc., Seongnam, Gyeonggido, Korea) was injected intravenously until the packings were removed (nasal cavities were usually packed for 2 days after surgery). There were no differences in medication between the septal surgeries. A drain site was made in the unilateral mucoperichondrial flap by a stab incision placed posterior to the L-strut area when there were no tears in either mucosal flap during elevation. If severe mucosal damage occurred, a thin silastic sheet was applied to the damaged side and secured with vicryl 4-0 (Ethicon, Inc., Somerville, NJ, USA) without tension, and the cavity was then packed lightly with Vaseline-soaked gauze or Merocel sponges (Medtronic-Xomed, Inc., Jacksonville, FL, USA). When the septal surgeries were accompanied by open rhinoplasty, marginal and columellar incisions were made and a skin envelope was elevated.
SMR
A hemitransfixion incision or a Killian incision was made on one side (usually the concave side) of the nasal septum. After elevation of the unilateral mucoperichondrial flap, resection of the deflected segment of cartilage and bone was performed, preserving the L-strut area (Fig. 1). The Killian incision site was left open, and the hemitransfixion incision site was sutured with ethilon 5-0 (Ethicon, Inc., Somerville, NJ, USA).
Endoscopic septoplasty
Generally, a Killian incision was made in the concave septal mucosa under endoscopy, and a horizontal incision adjacent to the lesion was occasionally made when addressing only septal spurs. Elevation of the unilateral mucosal flap was performed with care not to tear the mucosa. The resection procedure was the same as in SMR except for the use of an endoscope (Fig. 1). The incision site was left open.
Swinging door technique
A hemitransfixion incision was made on the concave side of the nasal septum. Through this incision, bilateral mucosal flaps were elevated. To allow it to swing superiorly, the cartilaginous septum was first separated from the bony septum, preserving at least 1 cm from the dorsum, and then separated inferiorly from the maxillary crest and the anterior nasal spine (ANS) (Fig. 1). Precise excision of the vertical excess was performed to allow the deviated septum to be realigned to the midline. Finally, the newly positioned septum was secured tightly to the periosteum of the ANS, or to the ANS itself held by a sharp punch, with polydioxanone (PDS) 4-0 sutures (Ethicon). In some cases, a cartilaginous septum whose excess portion had not been excised was flipped over the ANS, which acted as a doorstop. The incision site was sutured with ethilon 5-0. Finally, two-point quilting sutures were applied to the septum with vicryl 4-0.
Caudal batten graft
A hemitransfixion incision was made on the concave side of the nasal septum. Through this incision, bilateral mucosal flaps were elevated. A rectangular piece of cartilage or bone was harvested, preserving the L-strut. The cartilaginous septum was completely separated from the maxillary crest and the ANS and was tightly secured in the midline, as in the swinging door technique. A bone graft from the perpendicular plate of the ethmoid or cartilage from the septum was placed on the concave caudal septum to hold the septum straight and was sutured with PDS 4-0 or 5-0 (Fig. 1). The incision site was sutured with ethilon 5-0. Finally, two-point quilting sutures were applied to the septum with vicryl 4-0.
RESULTS
A total of 141 eligible patients were enrolled in this study and followed up for more than 3 months. The resection group, adopting SMR with or without endoscopy, included 83 males and 13 females, with a mean age of 37.9 years (range: 17 to 78 years). The reconstruction group, adopting the swinging door or caudal batten graft technique, included 43 males and 2 females, with a mean age of 36.5 years (range: 17 to 63 years). Of the 141 patients, 36 patients in the resection group were matched 1:1 with 36 in the reconstruction group by similar propensity scores. After matching, the two groups showed no significant differences in sex, age, hypertension, diabetes, allergy, revision surgery, combined rhinoplasty, or operative time (Table 1). The propensity scores of the newly matched groups were not significantly different by paired t-test (p = 0.235).
Of the 72 patients, PSAs developed in 5 patients (6.9%): one in the resection group (2.8%) and four in the reconstruction group (11.1%) (Fig. 2). However, the incidence of PSAs was not significantly higher in the reconstruction group by Fisher's exact test (p = 0.164).
Information regarding the eight patients who developed PSAs is summarized in Table 2. Bacterial isolates were identified in half of the eight patients with PSAs (50%). Methicillin-resistant Staphylococcus aureus (MRSA) was isolated in one patient (case 1). Serratia marcescens, Pseudomonas aeruginosa, and Staphylococcus aureus were isolated from the wound cultures of three patients (Table 2). The time to diagnosis of the septal infections was longer in the reconstruction group (approximately 24.8 days) than in the resection group (approximately 6 days). In addition, the fluid entrapped in the septum was purulent when the infection was diagnosed later (>10 days) but turbid when the infection was diagnosed earlier (<10 days) (Table 2). Granulation tissue on the mucosa of the septum was identified in four patients (50%) (Fig. 3). When the area around the granulation tissue was incised, purulent discharge drained in all four patients.
DISCUSSION
PSAs have been considered uncommon. Furthermore, there are few reports in the literature on the incidence of PSAs, and they report differing incidences. Yoder and Weimer reported only 5 postoperative infections (0.48%) in 1040 patients undergoing either septoplasty or septorhinoplasty using an endonasal approach. 6) Makitie et al. reported 12 postoperative septal abscesses (12%) in 100 patients undergoing septoplasty under local anesthesia involving both resection of the septal deformities and reconstruction of the septum. 7) Our study revealed eight PSAs (5.7%) in 141 septal surgeries. However, the incidence of septal infections differed according to the surgical technique of the septoplasty. The incidence of PSAs was low (2.8%) in the resection group and high (11.1%) in the reconstruction group, which is similar to the findings of Makitie et al. However, there are some reports indicating very low rates of septal infection even with reconstructive septoplasty, such as the batten graft, cutting and suture technique, and extracorporeal septoplasty. 8)9) To the best of our knowledge, there is no English-language literature comparing PSAs according to the surgical techniques of septoplasty.
The authors performed logistic regression analyses and propensity score matching to minimize confounding factors such as age, sex, operation time, combined surgeries, allergy, revision surgery, diabetes mellitus, and hypertension.
Although there was no significant difference, the frequency of PSAs in the reconstruction group was higher than in the resection group. The presumed reasons are as follows. The first is extensive anatomical dissection: reconstructive septoplasty requires elevation of bilateral mucosal flaps and disarticulation from the maxillary crest and ANS, which is not essential for SMR. The location of incisions also differs between reconstructive septoplasty and resection. PSAs mainly occur due to a preceding postoperative hematoma. When disarticulating the septum from the maxillary crest, significant bleeding can sometimes occur due to injury of the greater palatine artery, requiring electrocauterization. In addition, bleeding is common around the ANS, and an incision site close to the skin is an essential site to dissect extensively for reconstructive septoplasty.
The second is the amount of dead space left after surgery. The amount of dead space is also closely related to the degree of anatomical dissection. A quilting suture to the septum does not guarantee complete coaptation of the mucosa to the septum. Dead spaces can exist between the newly realigned septum and the maxillary crest. In SMR, however, the dead space lies only between the unilateral mucosal flap and the cartilaginous septum. In our study, septal infections were mostly found on both sides across the septum after reconstructive septoplasty but unilaterally after SMR (Table 3). Furthermore, a small amount of blood or pus pooled in the dead spaces after reconstructive septoplasty may be difficult to detect in the early period. A caudal septum grafted with cartilage or bone normally appears mildly swollen, which can make it difficult to differentiate the normal postoperative shape of the caudal septum from one swollen due to septal infection. However, in SMR cases, septal infections can be detected easily because of the distinct imbalance of septal mucosal swelling compared with the normal side when inspected with an anterior rhinoscope. In our study, it took approximately 25 days after reconstructive septoplasty and approximately 6 days after SMR to detect and diagnose septal infections.
The incidence of postoperative septal abscesses or hematomas can be reduced if the surgical site drains well. In our 8 cases of PSAs, all septal infections developed in the caudal septum. Commonly, an incidental mucosal tear can occur when elevating the mid-portion of a septal mucosa with severe curvature or a spur. Sometimes, an intentional stab incision can be made in the intact mid-portion mucosa for drainage. However, the mucosa of the caudal portion of the septum is usually thick enough not to tear during elevation, and it is not a usual site for an intentional stab incision. The authors have recently placed a stab incision and inserted a thin silastic drain at the inferior portion of the caudal septum before the end of surgery, removing the drain 5 to 7 days after surgery. We performed additional surgery to batten the weakened L-strut with auricular cartilage in patients 3, 4, and 5 to strengthen the septal support after controlling the infections with incision and drainage and infusion of susceptible antibiotics. The remaining five patients with infections were treated the same way without additional surgery. All eight patients with infections were followed up for over 12 months and were confirmed to have no nasal obstruction and no deformity of the nose shape. Without proper treatment, septal infections can progress to abscesses and cause necrosis of septal cartilage, yielding perforation of the septum and saddle nose deformity. In those cases, supportive or reconstructive surgery using auricular or rib cartilage is necessary after controlling the infection. 10) In our study, MRSA was isolated in one patient (case 1). Abuzeid et al. also reported one case of MRSA after septorhinoplasty with culture of a postoperative septal abscess, and they recommended applying intranasal mupirocin ointment 5 days before septorhinoplasty for high-risk groups (e.g., hospital care workers, immunocompromised people, and the elderly). 11) S. aureus is normal flora of the skin and a commonly isolated species after septorhinoplasty. S. aureus can be isolated from the anterior nares of approximately 60% of the total population. 11) Other researchers have reported that S. aureus is the most common pathogen isolated in PSAs. 7)12) In our study, S. aureus was isolated in only one patient (12.5%). There is a report that diabetes mellitus can increase colonization of S. aureus. 13) Serratia marcescens (belonging to Enterobacteriaceae) was cultured in one patient in our study (Table 2). Serratia marcescens is considered a hospital-acquired pathogen, along with MRSA.
One report using blood cultures showed that bacteremia can occur transiently: 0% before septorhinoplasty but 15-16.9% from the end of surgery until packings were removed. 14) Another study reported that bacteremia was more common after open septorhinoplasty (13.3%) than after septoplasty (3.3%). 15) The use of prophylactic antibiotics to reduce infection in nasal surgeries has been controversial. Andrews et al., in a randomized study of 164 patients undergoing complex septorhinoplasty, reported that infection rates between the prophylactic (7%) and postoperative (11%) antibiotic groups were not significantly different. 16) However, they recommended the use of prophylactic antibiotics for patients undergoing complex septorhinoplasty. 16) One patient (case 8) complained of facial pain and headache when followed up 9 days after SMR. The patient had discontinued antiplatelet drugs for 5 days before surgery and resumed them after surgery for prevention of cardiovascular accidents. When the original incision site of the septum was opened, an old hematoma with a mild foul odor was drained, and the patient's symptoms disappeared. Recently, because of increased average life spans, antiplatelet drugs and anticoagulant agents have been widely prescribed to prevent or treat ischemic vascular diseases. 17) Thus, attention should be paid to patients with a hemorrhagic tendency after septoplasty.
Granulation tissue found on the septum after septoplasty can be an important hallmark of septal infection (Fig. 3). Granulation tissue was found in the four of our cases that were diagnosed later (>10 days after surgery) with an infection (Table 2). It bled easily, and purulent discharge drained in all four cases when the area around the granulation tissue was incised.
CONCLUSION
Reconstructive septoplasty showed more septal abscesses than resection, but there was no significant difference.
Fig. 2 .
Fig. 2. Comparison of postoperative septal abscess rates between the resection and reconstruction groups matched by propensity score.
Fig. 3 .
Fig. 3. Granulation tissue that bled easily was seen on the mucosa of the septum on the right side. Purulent discharge drained when the area around the granulation tissue on the septum was incised.
Table 2 .
Description of patients who developed postoperative septal infections
|
2019-03-17T13:10:36.530Z
|
2017-11-01T00:00:00.000
|
{
"year": 2017,
"sha1": "126e2c3a9ba7ea6bab0f0f74f1911b7566f1a7ee",
"oa_license": "CCBYNC",
"oa_url": "http://synapse.koreamed.org/Synapse/Data/PDFData/0131JR/jr-24-74.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "126e2c3a9ba7ea6bab0f0f74f1911b7566f1a7ee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
227039235
|
pes2o/s2orc
|
v3-fos-license
|
The Impact of BPI Expression on Escherichia coli F18 Infection in Porcine Kidney Cells
Simple Summary Escherichia coli frequently causes bacterial diarrhea in piglets. Vaccine development and improved feeding and animal management strategies have reduced the incidence of bacterial diarrhea in piglets to some extent. However, current breeding strategies also have the potential to improve piglet resistance to diarrhea at a genetic level. This study sought to advance the current understanding of the functional and regulatory mechanisms whereby the candidate gene bactericidal/permeability-increasing protein (BPI) regulates piglet diarrhea at the cellular level. Abstract The efficacy and regulatory activity of bactericidal/permeability-increasing protein (BPI) as a mediator of Escherichia coli (E. coli) F18 resistance remains to be defined. In the present study, we evaluated lipopolysaccharide (LPS)-induced changes in BPI gene expression in porcine kidney (PK15) cells as well as the response to E. coli F18 exposure. We additionally generated PK15 cells that overexpressed BPI to assess the impact of this gene on Toll-like receptor 4 (TLR4) signaling and glycosphingolipid biosynthesis-related genes. Through these analyses, we found that BPI expression rose significantly following LPS exposure and in response to E. coli F18ac stimulation (p < 0.01). Colony count assays and qPCR analyses revealed that E. coli F18 adherence to PK15 cells was markedly suppressed following BPI overexpression (p < 0.01). BPI overexpression had no significant effect on the mRNA-level expression of genes associated with glycosphingolipid biosynthesis or TLR4 signaling. BPI overexpression suppressed the LPS-induced, TLR4 signaling pathway-related expression of proinflammatory cytokines (IFN-α, IFN-β, MIP-1α, MIP-1β and IL-6). Overall, our study serves as an overview of the association between BPI and resistance to E. coli F18 at the cellular level, offering a framework for future investigations of the mechanisms whereby piglets are able to resist E. coli F18 infection.
Introduction
Bactericidal/permeability-increasing protein (BPI) is expressed at high levels in a range of animal cell and tissue types, with particularly pronounced expression being evident in neutrophils [1,2]. BPI exerts a range of antibacterial functions that enable it to protect against certain diseases in humans by neutralizing lipopolysaccharide (LPS) and killing Gram-negative bacteria [3]. Fan et al. [4] previously [...]

[...] kits were purchased from Tiangen Biotech Co., Ltd. (Beijing, China), while reverse transcription and real-time fluorescence quantitative kits (AceQ Universal SYBR qPCR Master Mix) were obtained from Vazyme Biotech Co., Ltd. (Nanjing, China).
BPI Overexpression in PK15 Cells
A BPI overexpression lentiviral vector (pGLV5-BPI) and a corresponding negative control (pGLV5-NC) were prepared by GenePharma (Suzhou, China). Prior to lentiviral transduction, PK15 cells were plated at 5.0 × 10⁵ cells/well in 12-well plates in DMEM containing 10% FBS and were grown at 37 °C in a 5% CO₂ incubator until 80% confluent, at which time four replicate samples were each transduced with the pGLV5-BPI or pGLV5-NC lentiviral vectors. Cells were incubated for 48 h following transduction, at which time positive cells were identified via fluorescence microscopy. Puromycin (10 µg/mL every 24 h) was then used to select for pGLV5-BPI-positive cells, and qPCR was used to confirm successful BPI overexpression in these cells.
LPS and E. coli F18 Stimulation
Cells of the blank, pGLV5-NC-positive and pGLV5-BPI-positive groups were plated in 12-well plates (5.0 × 10⁵ per well) until 80% confluent and were then induced with 0.1 µg/mL LPS for 0, 2, 4, 6, 8, 12, 24 and 36 h, with three replicates per group. Total cellular RNA was extracted to detect changes in BPI gene expression, and cell culture supernatants were collected for ELISA analysis.
Standard porcine E. coli strains carrying F18ab and F18ac fimbriae were inoculated into Luria-Bertani (LB) medium for 12 h at 37 °C with constant agitation. Bacteria were then collected via centrifugation at 3000 rpm for 10 min, washed three times with PBS and diluted to 1.0 × 10⁹ colony-forming units (CFU)/mL in cell culture medium.
Colony Counting
Cells of the pGLV5-BPI and pGLV5-NC groups were inoculated into the wells of 12-well cell culture plates at a density of 5.0 × 10⁵ cells/well and cultured until the cells reached approximately 80% confluence. Diluents were made from the precipitation residues of the two E. coli strains, and 1 mL of diluent was added to each well, with three replicates per group. The plates were incubated in a 5% CO₂ incubator at 37 °C for 2 h. After the culture medium was discarded, the cells were washed three times with PBS and immediately treated for 20 min with 0.5% Triton X-100 (prepared with ultrapure water). After the culture was serially diluted tenfold, LB agar plates were coated with the culture and incubated overnight at 37 °C. Finally, the bacterial count was determined by counting the number of colonies on the plate coated with the 1000× bacterial dilution using ImageJ software. The final number of adherent bacteria (CFU/mL) was equal to the number of colonies on the plate × 10³.
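The back-calculation at the end of this protocol is simply the colony count multiplied by the dilution factor. A minimal sketch of that arithmetic, with illustrative numbers rather than the study's actual counts:

```python
# Back-calculating adherent bacteria (CFU/mL) from a plate count,
# following the paper's formula: colonies on the plate x dilution factor.
# The example count below is illustrative, not the study's data.
def cfu_per_ml(colonies_on_plate: int, dilution_factor: float) -> float:
    return colonies_on_plate * dilution_factor

# e.g. 87 colonies on the plate coated with the 1000x dilution:
print(cfu_per_ml(87, 1e3))  # -> 87000.0 CFU/mL
```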
Fluorescence Quantitative Polymerase Chain Reaction
Cells of the pGLV5-NC-positive and pGLV5-BPI-positive groups were seeded at 5.0 × 10⁵ cells/well in 12-well plates until 80% confluent, at which time 1 mL of either of the two experimental E. coli strains was added to each well. Cells were then incubated for 1 h at 37 °C, after which supernatants were discarded and cells were washed three times with PBS. Total DNA was then isolated using a DNA extraction kit. After extraction, this DNA was used as the amplification template; qPCR primers were designed based on the PILIN gene of E. coli F18ab and F18ac and the porcine β-ACTIN gene, which were detected by fluorescence quantitative polymerase chain reaction (PCR) [20]. All analyses were conducted in triplicate.
Primer Design
qPCR primers for BPI, TLR4, MyD88, CD14, TNF-α, IL-1β, FUT1, FUT2 and PILIN were designed with Primer Premier 5.0 based upon sequences in GenBank. GAPDH and β-ACTIN served as reference controls. All primer synthesis was conducted by Sangon Biotechnology (Shanghai, China), and the corresponding sequences are shown in Table 1.
RNA Extraction and Preparation
TRIzol was used to extract total RNA based upon provided protocols, after which formaldehyde denaturing gel electrophoresis was conducted to gauge RNA integrity, and a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) was employed to assess RNA concentration and purity.
Isolated RNA was then reverse transcribed to produce cDNA in reactions containing 2 µL of 5× qRT SuperMix II, 500 ng of total RNA and RNase-free H₂O up to 10 µL. Thermocycler settings were as follows: 25 °C for 10 min, 50 °C for 30 min and 85 °C for 5 min. After preparation, cDNA was stored at 4 °C.
qPCR
All qPCR reactions were conducted in a 20-µL volume composed of 2 µL of cDNA, 0.4 µL of each primer (10 µmol/L), 10 µL of 2× AceQ Universal SYBR qPCR Master Mix and 7.2 µL of ddH₂O. Thermocycler settings were as follows: 95 °C for 5 min; 40 cycles of 95 °C for 5 s and 60 °C for 30 s. Melting curves were then used to confirm amplified product specificity. Three independent experimental replicates were conducted for all analyses.
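The paper does not state which relative-quantification scheme was applied to the resulting Ct values, but a common choice for SYBR assays normalized to reference genes such as GAPDH or β-ACTIN is the 2^(-ΔΔCt) method of Livak and Schmittgen. The sketch below assumes that method and uses made-up Ct values purely for illustration.

```python
# Relative expression by the 2^(-ddCt) method, assumed here since the
# paper does not name its quantification scheme. Ct values are made up.
def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                # compare to control condition
    return 2.0 ** (-dd_ct)

# Hypothetical example: BPI vs. GAPDH, LPS-treated vs. untreated PK15 cells.
print(relative_expression(24.1, 18.0, 27.6, 18.2))  # fold change ~ 9.8
```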
Cytokine ELISAs
We obtained culture supernatants of pGLV5-NC-positive and pGLV5-BPI-positive groups at appropriate time points after LPS stimulation and measured the levels of proinflammatory cytokines (IFN-α, IFN-β, IL-6, MIP-1α and MIP-1β) via ELISA based on the provided kit instructions.
LPS and E. coli F18 Induce BPI Upregulation in PK15 Cells
We began by treating PK15 cells with 0.1 µg/mL LPS for 0, 2, 4, 6, 8, 12, 24 and 36 h in order to evaluate the impact of such treatment on BPI expression, revealing that this gene was rapidly upregulated following stimulation for 4 h (Figure 1a). When cells were instead stimulated with E. coli F18ab or F18ac (1.0 × 10⁹ CFU/mL), we found that the F18ac strain markedly enhanced BPI expression (p < 0.001), whereas F18ab strain stimulation had no impact on the expression of this gene (p > 0.05) (Figure 1b).
Preparation of BPI-Overexpressing PK15 Cells
Next, we confirmed that we were able to successfully transduce PK15 cells with the pGLV5-BPI and pGLV5-NC lentiviral vectors, as confirmed based on the presence of detectable green fluorescent protein within these cells (Figure 2a). Subsequent qPCR analyses confirmed that cells transduced with the pGLV5-BPI plasmid exhibited significant BPI overexpression (5556-fold higher than in control cells; Figure 2b). These findings thus indicated that we had successfully prepared BPI-overexpressing PK15 cells, which were then used for a series of experiments.
The Impact of BPI Overexpression on E. coli F18 Adhesion to PK15 Cells
A colony counting assay revealed that the overexpression of BPI was sufficient to markedly suppress the adhesion of E. coli F18ab (Figure 3a) and E. coli F18ac (Figure 3b) to PK15 cells (p < 0.01 and p < 0.001, respectively). In line with this, qPCR analyses of PILIN gene confirmed that BPI overexpression significantly impaired E. coli F18ab (Figure 3c) and E. coli F18ac (Figure 3d) adhesion to these PK15 cells (p < 0.01 and p < 0.001, respectively). As such, these findings indicate that BPI overexpression can interfere with E. coli F18 adherence to PK15 cells.
The Impact of BPI Overexpression on TLR4 and Glycosphingolipid Biosynthesis-Globo Series Pathway-Related Gene Expression
Next, we assessed the expression of the glycosphingolipid biosynthesis-globo series pathway genes FUT1 and FUT2 in control or BPI-overexpressing PK15 cells, in which we also evaluated the expression of TLR4 signaling pathway-related genes (TLR4, TNF-α, IL-1β, CD14 and MyD88). We found that BPI overexpression did not impact TLR4, CD14, TNF-α, IL-1β, MyD88, FUT1 or FUT2 expression (p > 0.05). We also found that MyD88, FUT1 and FUT2 were highly expressed in PK15 cells (Figure 4).
The Impact of BPI Overexpression on the Upregulation of TLR4-Related Proinflammatory Cytokines
Lastly, we evaluated the cytokine production of BPI-overexpressing or control PK15 cells in response to LPS treatment (0.1 µg/mL for 0–36 h). ELISAs revealed that supernatant levels of proinflammatory cytokines such as IFN-α, IFN-β, IL-6, MIP-1α and MIP-1β initially rose and then declined over time. Furthermore, we found that BPI overexpression significantly reduced the LPS-induced upregulation of the TLR4-related proinflammatory cytokines IFN-α, IFN-β, IL-6, MIP-1α and MIP-1β in these cells (Figure 5).
Discussion
E. coli F18 can infect porcine cells and is characterized in part by the presence of high levels of cell wall-associated LPS [22]. Prior genetic analyses have demonstrated that the ability of piglets to resist E. coli F18 infection is tied to both innate immunity and the expression of the E. coli F18 receptor by intestinal epithelial cells [23][24][25]. PK15 cells have commonly been utilized as a model cell line for analyses of pathogenic E. coli adhesion and associated immune responses [26,27]. As such, we stimulated PK15 cells with LPS or with porcine E. coli F18, revealing that both of these treatments resulted in significant BPI upregulation. Glycosphingolipid biosynthesis-globo series pathway genes (FUT1, FUT2) are involved in the formation of the E. coli F18 receptor, and their expression levels are closely related to resistance to E. coli F18 in piglets [28,29]. TLRs recognize different microbial components, sense microbial populations in the intestinal tract, and initiate proinflammatory signaling pathways to resist the invasion of pathogenic microorganisms, playing an important role in immune regulation during resistance to E. coli F18 infection [30,31]. To further assess the mechanistic role of BPI in the context of E. coli F18 infection, we next generated BPI-overexpressing PK15 cells. While such overexpression did not alter the mRNA-level expression of FUT1, FUT2 or TLR4 signaling pathway-related genes (TLR4, CD14, TNF-α, IL-1β, IFN-α and MyD88), it did markedly impair E. coli F18 adhesion to these cells in vitro. We also found that BPI overexpression suppressed LPS-induced IFN-γ, IFN-α, IFN-β, IL-6, MIP-1α and MIP-1β expression. Balakrishnan et al. [32] have previously shown that LPS can increase BPI protein levels within cells. At baseline, proinflammatory cytokine production is minimal, whereas it is rapidly induced at high levels in response to LPS stimulation. BPI can bind to the conserved lipid A/inner core of LPS, thereby inhibiting its ability to interact with TLR4 and activate proinflammatory signaling [33,34]. We therefore speculate that BPI inhibits the LPS-mediated induction of the host cellular immune response by forming a complex with LPS, thereby enabling cells to resist E. coli F18 infection.
We confirmed the link between BPI expression and E. coli F18 infection at the cellular level in porcine PK15 cells in vitro, underscoring the role of BPI as an inhibitor of inflammatory responses at least in part owing to its ability to neutralize LPS, thereby enabling these cells to better resist infection by this ETEC strain. Our data further demonstrate that BPI can markedly reduce E. coli adherence to PK15 cells in vitro. Future GST pull-down, co-immunoprecipitation (Co-IP) and gene knockout experiments will be necessary in order to fully understand the mechanisms whereby BPI influences E. coli F18 receptor molecules in this experimental context. It is also important to note that IFN-α/β are key antiviral cytokines that can induce an antiviral state in both infected and uninfected adjacent cells [35,36] by upregulating a range of IFN-stimulated genes with diverse antiviral activities [37,38]. As we found that BPI regulates IFN-α/β production by PK15 cells, this suggests that it may additionally serve as a potential regulator of antiviral immunity in these porcine cells. However, future experimental work will be needed to test this possibility.
Conclusions
In conclusion, we found that the overexpression of porcine BPI significantly reduced the adhesion of E. coli F18 to porcine kidney cells in vitro, although it had no impact on the expression of TLR4 or glycosphingolipid biological signaling pathway-related genes in these cells. In addition, BPI overexpression was sufficient to markedly suppress the LPS-induced upregulation of TLR4 signaling-related proinflammatory cytokines including IFN-α, IFN-β, MIP-1α, MIP-1β and IL-6.
Conflicts of Interest:
The author declares that there is no potential conflict of interest.
A dynamical transition and metastability in a size-dependent zero-range process
We study a zero-range process with system-size dependent jump rates, which is known to exhibit a discontinuous condensation transition. Metastable homogeneous phases and condensed phases coexist in extended phase regions around the transition, which have been fully characterized in the context of the equivalence and non-equivalence of ensembles. In this communication we report rigorous results on the large deviation properties and the free energy landscape which determine the metastable dynamics of the system. Within the condensed phase region we identify a new dynamic transition line which separates two distinct mechanisms of motion of the condensate, and provide a complete discussion of all relevant timescales. Our results are directly related to recent interest in metastable dynamics of condensing particle systems. Our approach applies to more general condensing particle systems, which exhibit the dynamical transition as a finite-size effect.
INTRODUCTION.
The understanding of metastable dynamics associated to phase transitions in complex many-body systems is a classical problem in statistical mechanics. It is rather well understood on a heuristic level: metastable states are characterized as local minima of the free energy landscape, with transitions between them occurring along a path of least action, corresponding to the classical Arrhenius law of reaction kinetics [1]. Since the classical work by Freidlin and Wentzell on random perturbations of dynamical systems [2], there have been various rigorous approaches in the context of stochastic particle and spin systems, summarized in [3, Chapter 4] and [4]. A mathematically rigorous treatment of metastability remains an intriguing question and is currently a very active field in applied probability and statistical mechanics [5][6][7][8][9]. Most recently, potential theoretic methods [10] have been combined with a martingale approach to establish a general theory of metastability for continuous-time Markov chains [8,11,12]. The dynamics of condensation in driven diffusive systems has recently become an area of major research interest in this context. There have also been recent results on the inclusion process [13,14] and systems exhibiting explosive condensation [15,16]; however, the zero-range process remains one of the most studied systems.
Zero-range processes (ZRPs) are stochastic lattice gases with conservative dynamics introduced in [17], and the condensation transition in a particular class of these models was established in [18][19][20][21]. Many variants of this class have been studied in recent years, including a non-Markovian version with slinky condensate motion [22,23]; see also [24,25] for recent reviews of the literature. If the particle density ρ exceeds a critical value ρ_c, the system phase separates into a homogeneous fluid phase at density ρ_c and a condensate, which concentrates on a single lattice site and contains all the excess mass. The dynamics and associated timescales of this transition have been described heuristically in [26]. For large but finite systems, due to ergodicity, the location of the condensate changes on a slow timescale and converges to a random walk on the lattice in the limit of diverging density [27]. Recent extensions of these rigorous results include a non-equilibrium version of the dynamics [9,12], and a thermodynamic scaling limit with a fixed supercritical density ρ > ρ_c [28].
While the motion of the condensate is the only metastable phenomenon in the above results, a slight generalization studied in [29][30][31] exhibits metastable fluid states at supercritical densities, which are a finite-size effect and do not persist in the thermodynamic limit. The model we study here was first introduced in [32] motivated by experiments in granular media, and is a zero-range process where the jump rates scale with the system size. This leads to an effective long-range interaction, and it is well known that these can give rise to metastable states that are persistent in the thermodynamic limit [1]. The condensation transition in this ZRP is discontinuous with metastable fluid and condensed states above and below the transition density, respectively, and the model has a rich phase diagram.
As a first main contribution of this work, we identify a new dynamic transition within the condensed phase region which separates two distinct mechanisms of motion of the condensate. Secondly, we provide a complete discussion of all relevant timescales using a comprehensive approach in the context of large deviation theory, which proves to be a powerful tool for the characterization and study of phase transitions in nonequilibrium systems [1,5]. All results we report here are based on rigorous work which is presented in more detail in [33], and are applicable in a more general context. To our knowledge, this constitutes the first example of a condensing particle system that exhibits extended regions in phase space with coexisting metastable states, and is an important step towards extending recent rigorous results on the condensate dynamics in such systems.
DEFINITIONS AND NOTATION.
We consider a zero-range process on a one-dimensional lattice of L sites with periodic boundary conditions. Particle configurations are denoted by vectors of the form η = (η_1, ..., η_L), where η_x is the number of particles on site x, which can take any value in {0, 1, 2, ...}. In the ZRP, particles jump off a site x with a rate that depends only on the number of particles on the departure site, and then move to another site y according to a random walk probability p(x, y), which we take to be of finite range and translation invariant.
The rate at which a particle exits a site is denoted by g_L(η_x), where the system-size dependence of the jump rates is indicated by the subscript. We consider the simple size-dependent jump rates introduced in [32], which depend on two parameters c > 1 and a > 0.
It is well known that ZRPs exhibit stationary distributions which factorize over lattice sites; see for example [25,32,34]. It is convenient to introduce a prior distribution (or reference measure) which is stationary, and which will be used to characterize the canonical and grand-canonical distributions after proper renormalization. The prior distribution is also size-dependent and given by
$$P_L(\eta) = \prod_{x=1}^{L} \frac{w_L(\eta_x)\, e^{-\eta_x}}{Z_L}, \qquad w_L(n) = \prod_{k=1}^{n} \frac{1}{g_L(k)},$$
where the empty product is taken to be unity and the normalisation factor is $Z_L = \sum_{n \ge 0} w_L(n)\, e^{-n}$. The above weights w_L are stationary for the ZRP, and the additional factor e^{−η_x} is a convenient choice so that they can be normalised, which allows the interpretation of free energies as large deviation rate functions (cf. [1]).
Since the dynamics are irreducible and conserve the total particle number, on a fixed lattice the system started from any initial condition with a fixed number N of particles is ergodic. In the long-time limit the distribution converges to the corresponding canonical distribution P_{L,N} := P_L( · | Σ_x η_x = N), which is a conditional version of the reference measure.
RESULTS
The large-scale behavior and the condensation transition can be characterized as usual by the canonical free energy, defined as
$$f(\rho) = -\lim_{L \to \infty} \frac{1}{L} \log P_L\Big( \sum_{x=1}^{L} \eta_x = \lfloor \rho L \rfloor \Big).$$
Note that this is the large deviation rate function for the total number of particles under the reference measure P_L (cf. also [1]), and is also the relative entropy of the canonical measures with respect to the reference distribution. Explicit computations can be done using the grand-canonical and a restricted grand-canonical ensemble, which we outline in the appendix. In the following we simply report the main results. There exists a transition density ρ_trans, characterized in (8), below which the system is typically in a fluid state with all particles distributed homogeneously. For ρ > ρ_trans, the system is in a condensed state and phase separates into a single condensate site containing of order (ρ − ρ_c)L particles and a fluid background at density ρ_c. As usual, this is characterized by the free energy decomposing into a contribution f_fluid from the fluid and a contribution f_cond from the condensed phase.

(Fig. 1 caption: Phase diagram with the following phase regions. (I) For ρ < ρ_c + a there is a unique fluid state and particles are distributed homogeneously. (II) For ρ_c + a < ρ < ρ_trans an additional metastable condensed state exists. (III) For ρ > ρ_trans the condensed state becomes stable, and the fluid state remains metastable for all densities. The new transition density ρ_dyn (12) characterizes a change in the mechanism for condensate motion, which is explained in Fig. 3. Typical stationary configurations for fluid and condensed states are shown on the right.)
As derived in (20) in the appendix, f_fluid(ρ) is the relative entropy of a geometric distribution with density ρ with respect to the reference measure. This geometric distribution can be interpreted as the fluid phase, as discussed in the appendix. The critical background density ρ_c in the condensed phase is given in (18). The condensate contribution f_cond(m) is determined by the reference probability of a single site containing mL + o(L) particles, since the condensate has no associated entropy. Note that even though fluid and condensate coexist in the condensed state, there is no free energy contribution from the interface, since the stationary distributions factorize and the combinatorial factor of L possible positions for the condensate location contributes only on a subexponential scale. This lack of surface tension also implies that the condensed phase consists of a single site, in contrast to other systems with non-product stationary distributions [35,36].
The behaviour of the free energy is dominated by typical stationary configurations, which are illustrated in Fig. 1 along with the phase diagram of the model. Since the background density ρ_c is strictly smaller than the transition density ρ_trans, the phase transition is discontinuous, in contrast to condensation in ZRPs without size-dependent rates; this was already reported in [32].
Metastable states.
The phase diagram in Fig. 1 also contains information about metastable states. They can be identified as local minima of the large deviation rate function I_ρ(m) for the maximum occupation number M_L(η) = max_x η_x, as shown in Fig. 2. This rate function characterizes the exponential rate of decay of the canonical probability to observe a maximum of size mL + o(L), i.e.
$$P_{L,N}\big( M_L(\eta) = M \big) = e^{-L\, I_\rho(m) + o(L)}, \qquad N/L \to \rho,\; M/L \to m.$$
In order to calculate I_ρ(m), we first find the joint large deviations of the maximum and the density under the prior distribution, described by a rate function I(ρ, m). Precisely, we can show that the limit
$$I(\rho, m) = -\lim_{L \to \infty} \frac{1}{L} \log P_L\Big( \sum_x \eta_x = N,\; M_L(\eta) = M \Big)$$
exists for ρ > 0 and m ∈ [0, ρ]. The limit is independent of the details of the sequences N/L and M/L so long as m ∈ (0, ρ]; if M/L → 0, we require that M is not too small. We find that for each ρ > 0 and m ∈ [0, ρ] the joint rate function for the density and maximum satisfies a two-case identity, (6), in which the iteration in the second case closes after finitely many steps for each ρ < ∞ and m ∈ [0, ρ]. The first term, f_cond(m), is the contribution of the maximum to the rate function, and the second term is the contribution of the bulk of the system. The infimum in the second line of (6) arises because a large deviation outside the range m < a or m > (ρ − ρ_c), which is always atypical and never locally stable, may be realised by configurations with more than one macroscopically occupied site. Given I(ρ, m), the canonical free energy and the large deviations of the maximum under the canonical distributions are straightforward to compute,
$$f(\rho) = \inf_{m \in [0, \rho]} I(\rho, m) \qquad \text{and} \qquad I_\rho(m) = I(\rho, m) - f(\rho),$$
where again N/L → ρ and M/L → m. Note that f(ρ) is simply a contraction over the most likely value of M_L and gives the normalization of the rate function I_ρ.
Below ρ_c + a there is a unique minimum of I_ρ(m) at m = 0, which corresponds to the fluid phase. Above ρ_c + a there is another local minimum at m = ρ − ρ_c, which corresponds to the condensed state. The fluid state exists for all densities ρ and parameter values a ≥ 0, c ≥ 1, and is stable for ρ < ρ_trans and metastable above (cf. Fig. 2). The transition density ρ_trans is then characterized by both local minima of I_ρ(m) being of equal depth, i.e. I_{ρ_trans}(0) = I_{ρ_trans}(ρ_trans − ρ_c) = 0.
With the above results, this is equivalent to the explicit characterization of ρ_trans given in (8).

The dynamic transition.

Above ρ_trans a typical stationary configuration is phase separated, with the condensate on a single site. Analogous to previous results [27], due to translation invariance and ergodicity on large finite systems, the condensate changes location due to fluctuations. For large densities ρ > ρ_dyn the typical mechanism for this relocation is to stay phase separated and grow a second condensate (see Fig. 3 IIIb), the same mechanism as identified in other supercritical ZRPs; see for example [27,28]. This mechanism exhibits an interesting spatial dependence on the underlying random walk probabilities p(x, y), which leads to a non-uniform motion of the condensate. For densities ρ_trans < ρ < ρ_dyn the typical mechanism is to dissolve the condensate and enter an intermediate metastable fluid state (see Fig. 3 IIIa). Since the system relaxes to a translation-invariant metastable fluid state before the condensate re-forms, the condensate re-forms at a site chosen uniformly at random, independent of the geometry of the lattice. This is very different from mechanism (IIIb), where the intermediate state is a saddle point with two condensates of equal height (cf. Fig. 4). In both cases the lifetime of the intermediate states is negligible compared to the timescale on which the condensate moves, so on this timescale the transition happens instantaneously. To derive this transition, with the same approach as above we can calculate the canonical large deviations of the largest and second-largest occupation numbers, described by a rate function I^(2)_ρ(m_1, m_2), where N/L → ρ, M_1/L → m_1 and M_2/L → m_2. This rate function essentially gives rise to a free energy landscape for the maximum and the second most occupied site.
In order for the condensate to move, the system must pass through a state in which the maximum and the second most occupied site differ in occupation by at most a single particle. In order to reach the diagonal m_1 = m_2 from a condensed state with m_1 > 0 and m_2 = 0, there are two relevant paths, as shown in Fig. 4. The first is along the axis m_2 = 0 towards the metastable fluid state with m_1 = m_2 = 0, following the black line (mechanism IIIa); the second is along the red line with m_1 + m_2 = ρ − ρ_c, growing a second condensate and reaching the diagonal at the local minimum of the blue curve, which is a saddle point in the full landscape (mechanism IIIb). The associated heights of the saddle points, Δ^cond_1(ρ) and Δ^cond_2(ρ), are given in (11). Plugging (4) to (7) into (11), it is easy to see that Δ^cond_1(ρ) increases from 0 for ρ ≥ ρ_c + a (see Fig. 2, right). Since Δ^cond_2(ρ) is constant, this implies that there is a dynamic transition at a density ρ_dyn characterized by (12).

(Fig. 4 caption fragment: the diagonal curve I^(2)_ρ(x, x) is shown in blue. The path along the x-axis, I^(2)_ρ(x, 0), is shown as a full black line and is chosen in mechanism (IIIa). The dashed red line shows the path to the diagonal by growing a second condensate with constant bulk density ρ_c, chosen in mechanism (IIIb). Δ^cond_1, Δ^cond_2 denote the respective exponential costs for the paths (11).)

In this formalism we can also include the depth Δ^fluid(ρ) of the fluid minimum, which provides another characterization of ρ_trans via Δ^fluid(ρ) = Δ^cond_1(ρ), as illustrated in Fig. 2 on the right. After a straightforward computation this leads to ρ_dyn = ρ_trans + a, and therefore ρ_dyn > ρ_c + 2a.
Note that the saddle point at m_1 = ρ − ρ_c − a, m_2 = a corresponding to Δ^cond_2 only exists if ρ > ρ_c + 2a, when the system can sustain two macroscopically occupied sites. So while mechanism IIIa exists for all densities ρ > ρ_c + a, and therefore for all ρ > ρ_trans, mechanism IIIb exists only for ρ > ρ_c + 2a (see Fig. 2, right). This threshold is larger than ρ_trans for a large enough, but always lies below ρ_dyn, where mechanism IIIb becomes typical.
Although a complete rigorous description of the metastable motion of the condensate in this system is still an open problem, the exponential timescales associated with the corresponding activation times are directly related to the saddle point heights (as predicted by the Arrhenius law). Also, the dynamics are expected to concentrate in the thermodynamic limit on the least-action path (see [2,5]). The expected lifetime of the fluid state, the expected lifetime of the condensed state, and the expected time to observe condensate motion are defined via the expectations E^fl_ρ, E^cd_ρ with respect to the dynamics at system size L and density ρ, started from a configuration in the fluid and condensed states, respectively. These lifetimes grow exponentially with the system size, with rates given by the corresponding saddle point heights. This behaviour and the dynamic transition are confirmed in simulations shown in Fig. 5 for symmetric nearest-neighbour dynamics in one dimension. In general, using the techniques of [8], the limiting motion of the condensate can be proved rigorously for reversible dynamics. For non-reversible ergodic dynamics the results are still expected to hold but are harder to prove, and additional restrictions may apply. First results on non-reversible condensate motion have just recently been achieved in [9,12].

(Fig. 5 caption fragment: Right: ρ = 2.4 > ρ_dyn; the relocation time grows with exponential factor Δ^cond_2 (11).)
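Because the displayed definition of the size-dependent jump rates did not survive extraction above, the following continuous-time Monte Carlo sketch keeps g_L as an explicit stub. The Gillespie-style site selection and symmetric nearest-neighbour moves match the simulated dynamics described in the text, but the stub rate function is a hypothetical placeholder, not the model of [32].

```python
import random

def simulate_zrp(L=64, N=160, t_max=1e3, g=None, seed=0):
    """Continuous-time (Gillespie) simulation of a 1D zero-range process
    with periodic boundaries and symmetric nearest-neighbour jumps."""
    rng = random.Random(seed)
    if g is None:
        # Hypothetical placeholder rates -- NOT the size-dependent rates
        # of the model studied here, whose definition is given in [32].
        g = lambda n: float(n > 0)
    eta = [N // L] * L
    for x in range(N % L):                    # distribute the remainder
        eta[x] += 1
    t = 0.0
    while t < t_max:
        rates = [g(n) for n in eta]           # exit rate of each site
        total = sum(rates)
        if total == 0.0:
            break
        t += rng.expovariate(total)           # exponential waiting time
        r, acc = rng.uniform(0.0, total), 0.0
        for x, rx in enumerate(rates):        # site chosen prop. to rate
            acc += rx
            if acc >= r:
                break
        y = (x + rng.choice((-1, 1))) % L     # symmetric n.n. target site
        eta[x] -= 1
        eta[y] += 1
    return eta

print(max(simulate_zrp()))  # track the maximum occupation number M_L
```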
APPENDIX.

The grand-canonical distributions P_{L,µ} are obtained from the reference measure by exponential tilting, with the mean particle density fixed by the conjugate parameter µ ∈ R, called the chemical potential. Since the reference measure factorizes over lattice sites and the marginals on each site are identical, the grand-canonical distributions factorize as well. They are well defined for all µ ∈ (−∞, 1). For fixed L, as µ → 1, the normalisation Z_L(µ) and the average particle density ⟨η_1⟩_{P_{L,µ}} diverge.
The grand-canonical pressure p(µ) is given by the pointwise limit of the scaled cumulant generating function of the particle number. The density can be computed as R(µ) = ∂_µ p(µ) and, as discussed in previous work [25,32], the critical density is given by ρ_c = R(µ_c). Although p(µ) does not exist above µ_c = 1, it can be extended analytically up to 1 + log c. It turns out that this extended pressure is exactly the one associated to the grand-canonical distributions conditioned on no site containing more than aL particles. These restricted grand-canonical distributions can be interpreted as metastable fluid states (see [33] for details), and their pressure is given by
$$p_{\mathrm{fluid}}(\mu) = \frac{e^{\mu}\,(c - e^{-1})}{c - e^{\mu - 1}}, \qquad \mu < 1 + \log c.$$
The free energy of the fluid phase is then given by the Legendre-Fenchel transform of the pressure, which is explicitly given in (4). There are further interesting questions related to the equivalence of canonical and grand-canonical ensembles, which are discussed rigorously in [33].
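For concreteness, the transform referred to above takes the standard large-deviation form (a restatement under the usual sign conventions, using the restricted pressure defined above; its explicit evaluation is the paper's equation (4)):

```latex
f_{\mathrm{fluid}}(\rho)
  \;=\; \sup_{\mu < 1 + \log c} \bigl\{ \mu\rho - p_{\mathrm{fluid}}(\mu) \bigr\}
  \;=\; \sup_{\mu < 1 + \log c} \Bigl\{ \mu\rho - \frac{e^{\mu}\,(c - e^{-1})}{c - e^{\mu - 1}} \Bigr\}.
```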
Disparities between IgG4-related kidney disease and extrarenal IgG4-related disease in a case–control study based on 450 patients
We aimed to compare the demographic, clinical and laboratory characteristics of IgG4-related kidney disease (IgG4-RKD+) and extrarenal IgG4-related disease (IgG4-RKD−) in a large Chinese cohort, as well as to describe the radiological and pathological features of IgG4-RKD+. We retrospectively analyzed the medical records of 470 IgG4-related disease (IgG4-RD) patients at Peking University People's Hospital from January 2004 to January 2020 and compared the demographic, clinical, laboratory, radiological and pathological characteristics of IgG4-RKD+ and IgG4-RKD−. Twenty IgG4-RD patients with a definite etiology of renal impairment, such as diabetes or hypertension, were excluded. Among the remaining 450 IgG4-RD patients, 53 were diagnosed with IgG4-RKD+. IgG4-RKD+ patients had an older age at onset and at diagnosis, and their male-to-female ratio was significantly higher. In the IgG4-RKD+ group, the most commonly involved organs were the salivary glands, lymph nodes and pancreas. Renal function was impaired in approximately 40% of IgG4-RKD+ patients. The most common imaging finding was multiple, often bilateral, hypodense lesions. Male sex, more than three organs involved, and low serum C3 level were risk factors for IgG4-RKD+ in IgG4-RD patients. These findings indicate potential differences in the pathogenesis of these two phenotypes.
All patients underwent radiology examinations consisting of computed tomography (CT) or magnetic resonance imaging (MRI), and some patients also received 18F-fluorodeoxyglucose PET-CT.
We fixed all tissue biopsy samples in formalin and embedded them in paraffin, then stained them with hematoxylin and eosin (H&E) and performed immunohistochemistry (IHC). IHC was performed using the avidin-biotin complex-peroxidase method with a monoclonal antibody to human IgG4 (Zymed, Carlsbad, CA; dilution 1:50 or 1:100 depending on the staining laboratory) on sections from paraffin-embedded tissue.
Pathological findings. In total, 6 patients with IgG4-RKD+ underwent renal biopsy; all of the 6 (100%) patients had TIN. TIN in 1 case (16.6%) was associated with glomerular disease. Membranous nephropathy was the cause of glomerular disease in this case.
All the patients with TIN had a lymphoplasmacytic (LPC) infiltrate with fibrosis. The LPC infiltration was diffuse in 3 patients and patchy in the other 3. In all 6 patients with TIN, IgG4 staining demonstrated > 10 IgG4+ plasma cells per high-power field (in the most concentrated area), and all of these patients fulfilled the Raissian criteria for IgG4-TIN 14.

Radiological findings. We used contrast-enhanced CT to identify radiological abnormalities in IgG4-RKD+ patients, except in those with renal dysfunction. Fifty of the 53 IgG4-RKD+ patients exhibited characteristic findings on kidney radiology. Among them, 14 patients presented with more than one kind of lesion. The most common finding was multiple, often bilateral, hypodense lesions, known as small cortical hypodense nodules, in 31 (58.5%) IgG4-RKD+ patients (Fig. 4A), followed by thickening of the renal pelvic wall in 18 (34.0%) patients (Fig. 4B), and ureteric obstruction and hydronephrosis related to RPF in 9 (17.0%) patients, who also had other kidney lesions specific to IgG4-RKD+ (Fig. 4C). In addition, diffuse patchy involvement, tumor-like less-enhanced masses and rim-like lesions were observed in 8 (15.1%), 2 (3.8%) and 1 (1.9%) patients, respectively.
Discussion
In this study, we compared the demographic, clinical, and laboratory characteristics of 53 IgG4-RKD+ patients and 397 IgG4-RKD− patients, as well as describing the radiological and pathological findings in patients with IgG4-RKD+. To the best of our knowledge, this is the largest case-control study of the IgG4-RKD+ and IgG4-RKD− phenotypes. IgG4-RD is manifested by typical clinical features, including tumor-like lesions, dense infiltration with IgG4-positive plasma cells, and extensive fibrosis of multiple organs. Disease manifestations are highly variable, and the identification of different IgG4-RD subgroups is crucial, given the significant disparities in the characteristics of IgG4-RD across different organs 6,15. In our study, the frequency of kidney involvement in IgG4-RD patients was 11.8%, which is lower than that reported in Japan (23.7%) and Mexico (24.6%) 6,16, but similar to that of a UK study 11. Heterogeneity in the definition of IgG4-RKD+ and in ethnicity across studies may explain this diversity. The organs most commonly involved in IgG4-RKD+ patients were the salivary glands, lymph nodes and pancreas. In addition, multi-organ involvement was common in IgG4-RKD+ patients. Therefore, a general checkup is necessary to obtain a comprehensive view, especially for patients with IgG4-RKD+.
Male sex, involvement of three or more organs, and low serum C3 level were risk factors for IgG4-RKD+ in IgG4-RD patients. IgG4-RD is a multi-organ immune-mediated condition that can affect almost any organ system in the body, and a larger number of involved organs may indicate higher disease activity. In most cases, IgG4-RKD+ is diagnosed in the context of known extrarenal IgG4-RD or active IgG4-RD; kidney involvement becomes evident with progressive renal decline or with the detection of characteristic radiological features during the evaluation of extrarenal IgG4-RD 14,17, which may explain the association between involvement of three or more organs and IgG4-RKD+. Renal involvement may appear as an intrinsic kidney disease (IgG4-RKD+) or as a consequence of ureteric obstruction from retroperitoneal fibrosis (IgG4-RPF). IgG4-RPF is often concentrated in the periaortic region, where the ureters can become entrapped, leading to hydronephrosis and renal injury. Therefore, it is necessary to distinguish between kidney lesions from IgG4-RKD+ and those from IgG4-RPF. Similar to previous studies 18, the male predominance in IgG4-RKD+ may be explained by the observation that female patients are more likely to present with superficial organ involvement, while male patients more often present with internal organ involvement. There are several interpretations for the association of low serum C3 levels with IgG4-RKD+. IgG4-TIN was first described as "idiopathic hypocomplementemic tubulointerstitial nephritis" with extensive tubulointerstitial deposits 19. Only about 16-34% of IgG4-RD patients have low serum complement levels, even though hypocomplementemia is a recognized feature of IgG4-RD; nevertheless, more than 50% of patients with active IgG4-TIN have low complement concentrations 14,17, so hypocomplementemia is considered an indicator of renal involvement 20. This study therefore adds insight into hypocomplementemia in IgG4-RKD+ patients: low serum C3 level was found to be a risk factor for the development of IgG4-RKD+. Hypocomplementaemia is not a characteristic feature of most IgG4-RD patients, and its presence often suggests the existence of IgG4-RKD+, thus warranting scrutiny.
In previously reported studies, kidney function in IgG4-RKD+ patients varies from normal to renal failure, and the development of renal dysfunction also varies from relatively acute to slowly progressive 13,14,16,17,21. In our cohort, renal function was impaired, manifesting as reduced eGFR, elevated blood urea nitrogen, elevated serum creatinine and abnormal specific renal tubular function tests. This could be attributed to IgG4-related TIN in the patients, or to glomerular disease. IgG4-related TIN occurs in the vast majority of IgG4-RKD+ patients, and membranous nephropathy (MGN) has been reported less frequently. Consistent with the rates in previous studies, IgG4-related TIN occurred in all 6 IgG4-RKD+ patients who had received renal biopsy. Urinalysis in IgG4-related TIN typically shows mild to moderate proteinuria, occasionally with the presence of white blood cells 17, which also accords with our results. For IgG4-RD patients, it is necessary to carry out routine urine tests and renal function tests (of both glomerular and tubular function) in order to detect glomerular and renal tubular lesions in a timely manner.
Main abnormalities on renal imaging were revealed in a total of 42 (79.2%) IgG4-RKD+ patients: multiple low-density nodules, hydronephrosis and thickening of the renal pelvic wall. Similar to previous studies, there were also some other imaging manifestations in our cohort, including diffuse patchy involvement of the bilateral kidneys and rim-like lesions of the kidney 11,22. CT was the most common mode of renal imaging; PET-CT is also increasingly used. PET-CT can contribute to excluding malignancy with little radiation damage, and it is helpful for discovering involvement of otherwise silent lesions; however, its cost should also be taken into account.
One of the limitations of this study is its retrospective nature, meaning that some affected organs may have been overlooked, although most patients underwent general examinations, including FDG-PET. Moreover, only a small number of IgG4-RKD+ cases were diagnosed based on renal biopsy: compared to biopsy of superficial tissue, deep kidney biopsy carries a higher risk of iatrogenic trauma, so some patients did not accept it. In addition, although our study has the largest sample size to date, the number of patients is still small, and the results should be interpreted cautiously.
Conclusion
In summary, we have specified the demographic, clinical, and laboratory differences between IgG4-RKD+ and IgG4-RKD− patients. IgG4-RKD+ patients had an older age at onset and at diagnosis, and their male-to-female ratio was significantly higher. The most commonly involved organs in IgG4-RKD+ patients were the salivary glands, lymph nodes and pancreas. Male sex, involvement of three or more organs, and low serum C3 level were risk factors for IgG4-RKD+ in IgG4-RD patients.
Vocal cord granuloma after transoral thyroidectomy using oral endotracheal intubation: two case reports
Background Transoral thyroidectomy can be performed using nasal or oral intubation. Recently, we encountered two cases of vocal cord granuloma that were suspected to result from intraoperative compression by the oral endotracheal tube. Cases presentation Two women underwent transoral endoscopic thyroidectomy with oral endotracheal tubes fixed at the mouth angle. Their initial postoperative recovery was uneventful, but they developed hoarseness 2 months after the surgery. Subsequent strobolaryngoscopy revealed vocal cord granulomas at the side of contact of the endotracheal tube. One patient received medication and voice therapy, and her granuloma shrank significantly one month later. The other patient underwent granuloma resection. Thereafter, the symptoms improved in both the patients. Conclusions Oral intubation with tube placement at the mouth angle might result in the formation of vocal cord granulomas. Therefore, we suggest positioning the tube at the midline to avoid excessive irritation on one side of the vocal cord.
Background
Transoral thyroidectomy via the vestibular approach is a scar-free surgery that provides excellent cosmetic results while yielding surgical outcomes equivalent to those of traditional open surgery [1]. This approach uses three trocars placed in the oral vestibular area to perform thyroidectomy. However, the endoscopic instruments sometimes collide with the camera owing to the limited space in the oral cavity. To avoid this overcrowding, nasal intubation, instead of oral intubation, was used in the initial design of the surgical procedure described by Anuwong [2].
Nevertheless, nasal intubation might cause epistaxis, especially in patients with a narrow nasal cavity or deviated nasal septum, which might further impede the passage of the endotracheal tube and increase the risk of nasal injury [3]. Endotracheal tube compression might also result in the formation of pressure sores in the nasal ala [4]. Therefore, some surgeons attempted using oral intubation and found it to be a feasible alternative while performing transoral thyroidectomy [5][6][7][8][9].
At our institution, we routinely use oral intubation during transoral endoscopic thyroidectomy to avoid nasal complications and ensure patient comfort (Fig. 1). However, we recently encountered two cases of vocal cord granuloma that probably resulted from intraoperative compression by the oral endotracheal tube during transoral endoscopic thyroidectomy. Herein, we describe the two cases and suggest a modified position for the oral endotracheal tube to avoid such a complication.
Case 1
A 27-year-old woman (height, 162 cm; weight, 51 kg) with a 5-year history of Graves' disease presented with a 2.6-cm left thyroid nodule during her regular follow-up. Fine-needle aspiration cytologic examination suggested the nodule was a papillary carcinoma. She had no known history of gastroesophageal reflux disease or other systemic diseases, and her job did not require excess voice usage.
She underwent transoral endoscopic total thyroidectomy with central neck lymph node dissection. General anesthesia with oral intubation was performed using an electromyogram (EMG) tube (internal diameter = 6.0 mm) (Medtronic, Jacksonville, FL, USA) for intraoperative neuromonitoring (IONM). The endotracheal tube was fixed at the mouth angle (Fig. 1). Transoral endoscopic thyroidectomy was performed according to the procedure described by Anuwong et al. [10]. During the surgery, both recurrent laryngeal nerves were identified visually, and their function was confirmed via IONM. The patient's postoperative course was uneventful except for transient hypoparathyroidism, which resolved 1 week later. Her voice was fine without any hoarseness, and she was discharged on postoperative day 4. The final pathologic examination confirmed the diagnosis of papillary carcinoma with lymph node metastasis.
Her condition was unremarkable during the postoperative outpatient follow-ups at 1 week and 1 month. However, she developed hoarseness, forced voice, and voice fatigue two months after the surgery. Strobolaryngoscopy revealed symmetrical and movable vocal cords but also showed a granuloma over the left vocal process (Fig. 2A). She was prescribed oral prednisolone and a proton pump inhibitor along with voice therapy. Her symptoms improved thereafter, and the granuloma appeared to have shrunk significantly on the 1-month follow-up strobolaryngoscopy (Fig. 2B).
Case 2
A 47-year-old woman (height, 168 cm; weight, 80 kg) presented with a 3.3-cm right thyroid nodule accompanied by a mild compression symptom. Fine-needle aspiration cytologic examination was performed twice, but both attempts failed to yield a diagnosis.
She underwent transoral endoscopic right thyroidectomy under general anesthesia with oral endotracheal intubation (tube internal diameter = 7.0 mm). IONM was also implemented and showed a positive signal from the recurrent laryngeal nerve. After the surgery, the patient regained an intact voice and showed a smooth recovery. She was discharged on postoperative day 3. At the 1-week follow-up, her condition remained unremarkable, and the pathologic examination revealed nodular goiter.
Nevertheless, 2 months after the surgery, the patient started noticing hoarseness with voice fatigue. Strobolaryngoscopy revealed a contact granuloma over the posteromedial aspect of the left vocal cord (Fig. 3). She denied any voice abuse and had no history of gastroesophageal reflux disease. Although she was recommended conservative treatment with medication, she opted for granuloma excision. Her symptoms improved after the surgery.
Discussion and conclusions
To the best of our knowledge, the occurrence of vocal cord granuloma after transoral thyroidectomy has not been reported in the literature. Among 70 consecutive patients who underwent transoral endoscopic thyroidectomy at our institution, two patients (2.9 %) developed vocal cord granuloma. Their initial postoperative recovery was uneventful, but they developed hoarseness 2 months later. Further examination revealed vocal cord granulomas located in the posterior aspect of the left vocal cord in both the patients.
We speculated that granuloma formation was related to intubation trauma. We had routinely fixed the oral endotracheal tube at the patient's left mouth angle during the surgery (Fig. 1). The connecting tube would run in a slightly upward direction to avoid collision with the left lateral trocar and then link to the anesthesia ventilator, which was placed on the patient's left side. In this setting, the posterior part of the left vocal cord would bear the most pressure, thereby increasing the risk of granuloma formation on that side.
Prior to surgery, we routinely performed videolaryngoscopy to check if the electrode on the EMG tube was appropriately in contact with the vocal cord after the patient was intubated and placed in the neck extension position. In both the present cases, the patients showed no lesions on their vocal cord during this laryngoscopic inspection. Moreover, they had no history of gastroesophageal reflux disease, and their jobs did not require excessive voice usage. Therefore, the vocal cord injury was more likely to have occurred during the surgery, not before it.
Other intraoperative factors that might be associated with granuloma formation in our cases included longer operative time (290 and 240 min for case 1 and case 2, respectively) and tracheal irritation caused by surgical manipulation, especially during thyroid dissection and specimen retrieval.
After encountering these two cases, we decided to move the fixation site of the endotracheal tube from the mouth angle to the midline (Fig. 4). This change in position would enable the compression pressure of the endotracheal tube to be distributed on both sides of the vocal cord, and not solely borne on one side, which might lower the risk of granuloma formation. Care was also taken to properly fix the endotracheal tube along its natural curvature without excessive bending. After implementing this modification, we encountered no more cases of vocal cord granuloma among patients undergoing transoral endoscopic thyroidectomy.
Chai et al. also advocated for midline placement of the oral endotracheal tube to improve the movement of endoscopic instruments on the tube side of the lateral working port [11]. They reported that there was no limitation on the range of motion after changing the tube position [11].
Using nasal endotracheal intubation during transoral thyroidectomy may be another way to avoid vocal cord granuloma formation. However, care should be taken to prevent nasal complications. For example, epistaxis may be avoided by softening the endotracheal tube prior to intubation. In patients with a deviated nasal septum, the larger nostril should be used for tube insertion. The endotracheal tube should be properly positioned to avoid pressure sores on the nasal ala.
Some surgeons might wonder whether the midline placement of the endotracheal tube would hinder the movement of the middle trocar, which is used for inserting a 30° endoscope. In fact, the camera port stays at a higher position than that of the endotracheal tube (Fig. 5). Furthermore, the camera port has to be lifted and tilted down during the surgery so that the endoscope can be used to look down from the chin to the neck. Therefore, the chance of collision between the endoscope and the oral endotracheal tube is minimal. If this is still a concern, a surgeon can also use an armored endotracheal tube, which is more flexible and easier to bend to fit the curvature of the nose and forehead. This would also ensure the orotracheal tube is further away from the camera port. Another option is to fix the endotracheal tube in a paramedian position between the central camera port and the lateral working port.
Laryngeal injury due to direct pressure exerted by the endotracheal tube may result in mucosal ulceration and inflammation that lead to granuloma formation [12]. Such granulomas might initially be clinically silent but can become symptomatic weeks later [13]; both our patients developed hoarseness 2 months after the thyroidectomy. The management of vocal cord granuloma is mainly conservative, consisting of removal of the irritant and medication, which is the recommended course of treatment [12,14]. Surgical excision is rarely required and is reserved for severe cases [12]. Our first patient showed a significant improvement in symptoms, and her granuloma shrank after conservative treatment. The second patient opted for surgical excision instead, and she also showed substantial improvement.
In summary, we reported two cases of vocal cord granuloma resulting from compression by the oral endotracheal tube during transoral endoscopic thyroidectomy. Positioning the tube at the midline rather than at the mouth angle might decrease the risk of granuloma formation.

(Fig. 5 caption: Position of the trocars and oral endotracheal tube. The middle camera port stays at a higher position than that of the oral endotracheal tube (arrow). In addition, the camera port is lifted and tilted down during the surgery so that the endoscope can be used to look down from the chin to the neck. Therefore, the chance of collision between the endoscope and the oral endotracheal tube is minimal.)
Continual learning benefits from multiple sleep mechanisms: NREM, REM, and Synaptic Downscaling
Learning new tasks and skills in succession without losing prior learning (i.e., catastrophic forgetting) is a computational challenge for both artificial and biological neural networks, yet artificial systems struggle to achieve parity with their biological analogues. Mammalian brains employ numerous neural operations in support of continual learning during sleep. These are ripe for artificial adaptation. Here, we investigate how modeling three distinct components of mammalian sleep together affects continual learning in artificial neural networks: (1) a veridical memory replay process observed during non-rapid eye movement (NREM) sleep; (2) a generative memory replay process linked to REM sleep; and (3) a synaptic downscaling process which has been proposed to tune signal-to-noise ratios and support neural upkeep. We find benefits from the inclusion of all three sleep components when evaluating performance on a continual learning CIFAR-100 image classification benchmark. Maximum accuracy improved during training and catastrophic forgetting was reduced during later tasks. While some catastrophic forgetting persisted over the course of network training, higher levels of synaptic downscaling led to better retention of early tasks and further facilitated the recovery of early task accuracy during subsequent training. One key takeaway is that there is a trade-off when choosing the level of synaptic downscaling: more aggressive downscaling better protects early tasks, but less downscaling enhances the ability to learn new tasks. Intermediate levels can strike a balance, with the highest overall accuracies during training. Overall, our results both provide insight into how to adapt sleep components to enhance artificial continual learning systems and highlight areas for future neuroscientific sleep research to further such systems.
I. INTRODUCTION
Learning new tasks and skills in succession without overwriting or interfering with prior learning (i.e., "catastrophic forgetting") is a computational challenge for both artificial and biological neural networks. And yet, while the latter overcome this with ease (e.g., a child does not forget what s/he learned in class yesterday by learning new things today), current artificial networks struggle to achieve parity. Several approaches for this kind of "Continual Learning" problem [1] have been developed in the A.I. space, including: dynamic architectures that can grow network capacity, weight regularization-based approaches that mitigate catastrophic forgetting by constraining the update of previous weights [2], [3], and interleaved replay of training examples from previous tasks [4]. In contrast to rote replay (e.g., using a memory buffer to store one-to-one copies of experience), generative replay [4] does not store exact copies of specific examples from previous tasks, but instead trains a network to retain higher-level/compressed representations, from which it can create de novo synthetic training samples.
A general challenge with all strategies for continual learning lies in their ability to scale. Dynamic architectures and replay approaches must ensure that the size of the network (or replay buffer) remains manageable. Generative replay approaches specifically can produce ever-growing generator networks that are themselves susceptible to catastrophic forgetting. Weight regularization does not extend well to more challenging tasks like class-incremental learning (which requires learning from one subset of classification targets to persist in the face of constant retraining on incrementally presented new subsets) [5]. Regularization methods may also not scale to large numbers of tasks, as weight regularization parameters continue to grow.
Although continual learning clearly poses daunting challenges for artificial systems, given the abundance of biological solutions that can be adapted into artificial ones, there is ample opportunity to improve artificial systems by leveraging these as blueprints.
Mammalian brains in particular have evolved a wide array of specialized processes to combat catastrophic forgetting, and several of the operations that occur most saliently during sleep are attractive candidates for artificial implementation. These include: 1) a veridical memory replay process linked to non-rapid eye movement (NREM) sleep; 2) a generative memory replay process linked to REM sleep; and 3) a synaptic downscaling process which has been proposed to modulate signal-to-noise ratios and support neural upkeep [6]. Much is understood about these processes from a neuroscience perspective, and it is worth briefly reviewing the neuroscience behind these processes to shed light on their appealing properties and provide intuition about their operating principles. One of the most widely studied and ubiquitous learning processes visible in the brain during sleep is memory replay. Memory replay has been observed during sleep in animals [7], [8] and in humans [9], and is perhaps most convincingly demonstrated in the hippocampus of rodents engaged in spatial learning (hippocampus-dependent) tasks. These studies leverage the useful properties of hippocampal place cells, neurons that fire vociferously when the rodent is located in that cell's receptive field [10]. Together, a sequence of place cell firings encodes a specific trajectory through space, and trajectories observed during waking behavior can be detected in subsequent bouts of sleep via correlation analyses [8]. In this way, the neural representation of specific information known to the experimenter and encoded in the animal's neural circuitry can be precisely quantified and studied during sleep.
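As a toy illustration of the correlation analyses mentioned above (illustrative only; published replay-detection pipelines use template matching on binned spike trains together with shuffle controls), one can score how strongly a candidate sleep firing sequence matches a waking place-cell template with a rank-order correlation:

```python
import numpy as np
from scipy.stats import spearmanr

# Template: order in which six place cells fired along a track while awake.
wake_order = np.array([0, 1, 2, 3, 4, 5])
# Candidate event: first-spike order of the same cells during sleep.
sleep_order = np.array([0, 2, 1, 3, 4, 5])

rho, p = spearmanr(wake_order, sleep_order)
print(f"rank-order correlation = {rho:.2f} (p = {p:.3f})")

# Shuffle control: compare the score against permuted cell identities.
rng = np.random.default_rng(0)
null = [spearmanr(wake_order, rng.permutation(sleep_order))[0]
        for _ in range(1000)]
print(f"fraction of shuffles >= observed: {np.mean(np.array(null) >= rho):.3f}")
```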
Memory replay during NREM sleep connects novel learning supported by the hippocampus to distant afferents in the cortex [11], and has been proposed to support the consolidation of long-term memory into distributed cortical stores [11], [12]. In NREM sleep, replay is veridical, meaning that place cell trajectories activated during learning are replayed in exactly the same sequences as those observed during wakefulness [13]. Veridical replay facilitates network plasticity in mammalian brains. Likewise, in artificial ones, it could be exploited to update network weights several times over, rather than only during initial exposure to the represented experience/example. This type of process can also be implemented by generating samples at intermediate neural network layers instead (i.e., restricting operations to only higher-level, abstract representation layers, rather than fully recreating synthetic examples for submission to early input layers; this is effective while furthermore being more efficient) [14].
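A minimal sketch of this interleaving idea in PyTorch (the buffer size, mixing ratio, and loss below are placeholder assumptions for illustration, not the configuration used in this work):

```python
import random
import torch
import torch.nn as nn

def train_with_veridical_replay(model, loader, replay_size=512, n_replay=16):
    """Train on a data stream while re-presenting recently seen examples
    verbatim, so each stored experience updates the weights several times."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    recent = []  # FIFO buffer of recent (input, label) pairs
    for fresh_x, fresh_y in loader:
        x, y = fresh_x, fresh_y
        if recent:  # interleave replayed samples into the batch
            xs, ys = zip(*random.sample(recent, min(n_replay, len(recent))))
            x = torch.cat([fresh_x, torch.stack(xs)])
            y = torch.cat([fresh_y, torch.stack(ys)])
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        for xi, yi in zip(fresh_x, fresh_y):  # store only the fresh items
            recent.append((xi.detach(), yi))
            if len(recent) > replay_size:
                recent.pop(0)
    return model
```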
In stark contrast to the hippocampo-cortical outflow seen during NREM sleep, during REM sleep, the flow of information is reversed and hippocampal output to the cortex is suppressed [15], [16]. This frees the cortex to process and reorganize knowledge without interference not only from the environment but also from the hippocampus [17]. In mammals, this is likely achieved mechanistically by REM's distinctive neuromodulatory tone (low acetylcholine and norepinephrine) which favors the spread of neuronal activity beyond that observed during wakefulness [18]. This increased connectivity and opportunity for novel connections provides a possible physiological basis for the finding that sleep generates insight in humans [19], which in turn alludes to another benefit that could be tapped by artificial analogues. Viewed from the perspective of continual learning, REM sleep is a state ideally suited to generate novel possible experiences by manipulating and reorganizing elements experienced over the lifetime. This generative replay likely supports further memory consolidation in mammals, and presents a mechanism for artificial neural networks to revisit representations of experience learned in the distant past and generate novel feature combinations that share the statistical properties of previously experienced examples.
A fundamental sleep process that remains to be explored in artificial neural networks is synaptic downscaling. Synaptic downscaling has been observed in animals to be size-dependent (i.e., scaled by the size of interfacing synaptic boutons [20]), and has been hypothesized to homeostatically regulate the metabolic costs incurred by synaptic connections formed during wakefulness, and recycle unneeded synapses for future use [6], [21]. Superficially a purely metabolic function, this process has been proposed to have the critically important effect of fine-tuning signal-to-noise (SNR) ratios in neural networks. Regulating weight updates in a similar way may be a useful addition to artificial neural networks tasked with continual learning scenarios.
There have been other approaches for continual learning that utilize aspects of sleep, such as implementing oscillatory phase coding of unsupervised spike-timing-dependent plasticity during training [22], modeling hippocampal consolidation into a medial prefrontal cortex generator model [23], and creating detailed thalamo-cortical models of replay and slow-wave sleep for sequence learning [24]. However, the integration of a synaptic downscaling process into an artificial neural network that also implements two types of memory replay (veridical and generative) has not yet been investigated.
To address this gap, in this work we train an artificial neural network on a continual learning task that includes models of all three of these sleep processes: 1) NREM veridical memory replay, 2) REM generative replay, and 3) synaptic downscaling. To address the first, NREM veridical replay is modeled by interleaving processed sensory input examples from the current task during training. For the second, REM generative replay, we incorporate a model of replay [14] that generates statistically matched composites of processed sensory input features experienced in previous tasks to use during continued training. The model learns latent representations of object classes, from which it can further learn by generating and evaluating the composed results. Third and finally, to model the size-dependent downscaling of synaptic boutons that has been observed during biological sleep [20], we incorporate magnitude-based pruning as a first order approximation of the process. Magnitude-based pruning not only provides a simplified downscaling approach with few hyperparameters, but has also been shown to be generally effective for model compression (even compared to more complex sparsity-inducing methods) [25]. Combining all three of these facets of sleep is, to the best of our knowledge, a novel approach to continual learning, and alludes to more gains that may be realized by adopting this biofidelic approach.
In sum, in this study we investigate the joint performance of three sleep-inspired neural processes implemented in a neural network that trains on a challenging CIFAR-100 class-incremental continual learning benchmark task. We then evaluate the network's ability to perform image class prediction from any of the sets of previously trained tasks (i.e., the sum total of its "lifetime" experience). In light of the observed performance gains, our results indicate that not only is synaptic downscaling a useful approach, but that in general adopting this frame of reference, i.e., turning to mammalian sleep and modeling the multiple neural processes that occur there, is a useful perspective to assume when seeking new ways to improve the performance of future artificial neural networks and intelligent systems.
II. METHODS
An initial model of tripartite artificial sleep for network training is implemented by extending and integrating approaches that have been used separately for continual learning and neural network model compression. While there are many types of continual learning benchmark tasks, class-incremental learning, wherein a network needs to classify examples from any previously learned task (as opposed to performing classification for a single, previously learned task), is more challenging than other scenarios (e.g., task-incremental learning [14]) and more akin to human learning, which occurs across a lifetime (i.e., the sum total of experience). Hence we selected this approach here.
A network implementing tripartite artificial sleep (Fig. 1) needs to be able to generate past training examples (replay) as well as perform classification on the current task. In this work, a model of the NREM veridical replay process is implemented by utilizing feature representations in intermediate network layers of the current task's training data for weight optimization. A model of REM generative replay is implemented as generative "hidden" replay, where a generator / auto-encoder generates training input samples and the classifier output is used for REM training labels. The classifier training is performed using the NREM / REM input samples and class labels, while the generator training is performed using the NREM / REM input samples in tandem with classifier training. Synaptic downscaling during the sleep / wake cycle has been observed in animals to be size-dependent, and this is modeled here with a first-order approximation of size-dependent scaling (setting a cutoff threshold for weight zeroing). With the implemented synaptic downscaling, small weights are downscaled completely (zeroed out) and larger weights are not downscaled at all. This is applied once per new task (each a set of ten classes), at the beginning of training on that task.
The two-process (veridical/generative replay) base network architecture that we modified for tripartite artificial sleep has been previously used to investigate continual learning [14], and consists of (1) a set of five pre-trained convolution layers that take raw images as input and output a vector of image features, h, (2) a symmetric variational autoencoder (VAE), which consists of (a) an encoder network that maps h to a vector of stochastic latent variables, z, and (b) a decoder network which generates an estimated reconstructed image feature vector, ĥ, and (3) a softmax classification output layer, which receives input from the last layer of the VAE encoder network (Fig. 1). The five convolution layers have 16, 32, 64, 128, and 256 channels respectively, with 3×3 kernels and a padding of 1. All layers have a stride of 2, except for the first layer, which has no downsampling. The input into the convolution layers is a 32×32 RGB image, and the output, h, is a vector of 1,024 flattened image features. The encoder and decoder VAE networks each consist of two fully-connected layers of 2,000 ReLU units. The stochastic latent variable layer, z, has 100 Gaussian units. The softmax output layer has a unit for each class label to be predicted.
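This description maps directly onto a model definition. The following PyTorch sketch follows the stated layer sizes; normalization, initialization, and the decoder's final projection back to 1,024 features are assumptions of the sketch rather than details given in the text:

```python
import torch
import torch.nn as nn

class TripartiteNet(nn.Module):
    """Sketch of the described base architecture."""
    def __init__(self, n_classes: int = 100, z_dim: int = 100):
        super().__init__()
        chans = [3, 16, 32, 64, 128, 256]
        convs = []
        for i in range(5):
            stride = 1 if i == 0 else 2  # only the first layer keeps resolution
            convs += [nn.Conv2d(chans[i], chans[i + 1], 3, stride=stride, padding=1),
                      nn.ReLU()]
        self.convs = nn.Sequential(*convs, nn.Flatten())  # 32x32 RGB -> 1,024 features h
        self.encoder = nn.Sequential(nn.Linear(1024, 2000), nn.ReLU(),
                                     nn.Linear(2000, 2000), nn.ReLU())
        self.to_mu = nn.Linear(2000, z_dim)       # 100 Gaussian latent units z
        self.to_logvar = nn.Linear(2000, z_dim)
        self.decoder = nn.Sequential(nn.Linear(z_dim, 2000), nn.ReLU(),
                                     nn.Linear(2000, 2000), nn.ReLU(),
                                     nn.Linear(2000, 1024))  # reconstructed features
        self.classifier = nn.Linear(2000, n_classes)  # softmax applied in the loss

    def forward(self, x):
        h = self.convs(x)
        e = self.encoder(h)
        mu, logvar = self.to_mu(e), self.to_logvar(e)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.classifier(e), self.decoder(z), mu, logvar

logits, h_hat, mu, logvar = TripartiteNet()(torch.randn(4, 3, 32, 32))
```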
For the split CIFAR-100 class-incremental continual learning task and replay-based optimization [14], the CIFAR-100 dataset [26] is split into 10 tasks with 10 classes each. For training, the loss function, L, that is optimized during a task is a combination of the classification loss, L_C, and the generator loss, L_G. Prior to training for task N, the stochastic latent variable, z, is sampled to generate image feature samples ĥ_j, j ∈ {1...N−1}, from previous tasks. Corresponding classifier softmax output samples ŷ_j are generated by passing ĥ_j as input to the encoder network. The classification loss is composed of

$$L_C = L_C^{current} + L_C^{replay},$$

where L_C^current is the cross-entropy loss calculated for the current task and L_C^replay is the distillation loss calculated for samples from the previous tasks. The generator loss is composed of

$$L_G = L_G^{current} + L_G^{replay},$$

where L_G^current is calculated based on image features, h, from the current task's images and L_G^replay is calculated based on the generated image feature samples ĥ_j. Optimization is performed for 10,000 iterations per task with the ADAM optimizer (β_1 = 0.9, β_2 = 0.999). The convolutional layers were pre-trained on a classification task with non-overlapping images (CIFAR-10).
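A hedged sketch of how these loss terms might be combined in code is given below; the equal weighting of the current and replay terms, the MSE reconstruction term, and the distillation temperature are assumptions of this illustration, not specifics stated above:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_probs, T: float = 2.0):
    # Soft-target cross entropy against the stored softmax outputs;
    # the temperature T is an assumption of this sketch.
    log_p = F.log_softmax(student_logits / T, dim=1)
    return -(teacher_probs * log_p).sum(dim=1).mean()

def vae_loss(h, h_hat, mu, logvar):
    # Reconstruction term (MSE assumed) plus the standard Gaussian KL term.
    recon = F.mse_loss(h_hat, h)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

def task_loss(cur, rep):
    """cur / rep: dicts of model outputs for current-task and replayed samples."""
    L_C = F.cross_entropy(cur["logits"], cur["labels"]) \
        + distillation_loss(rep["logits"], rep["soft_labels"])
    L_G = vae_loss(cur["h"], cur["h_hat"], cur["mu"], cur["logvar"]) \
        + vae_loss(rep["h"], rep["h_hat"], rep["mu"], rep["logvar"])
    return L_C + L_G
```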
A simplified model of synaptic downscaling introduced here is incorporated into network training by setting a fraction, p, of the smallest weights in each trainable layer to zero for each task prior to weight optimization. To gain insight into the functional importance of the modeled sleep processes, network training variants are evaluated with varying levels of p ∈ {0, 0.25, 0.5, 0.75, 0.9}, as well as with and without the modeled REM generative replay.
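The described zeroing of the smallest fraction p of weights in each trainable layer can be sketched as follows (restricting the operation to weight matrices and leaving biases untouched is an assumption of this sketch):

```python
import torch

def downscale(model: torch.nn.Module, p: float) -> None:
    """Zero the fraction p of smallest-magnitude weights in each trainable
    layer, a first-order approximation of size-dependent synaptic downscaling."""
    with torch.no_grad():
        for name, w in model.named_parameters():
            if w.dim() < 2:  # skip biases; an assumption of this sketch
                continue
            k = int(p * w.numel())
            if k == 0:
                continue
            # Cutoff threshold: the k-th smallest absolute weight in the layer.
            threshold = w.abs().flatten().kthvalue(k).values
            w.mul_((w.abs() > threshold).float())

# Applied once at the beginning of training on each new task, e.g.:
# downscale(net, p=0.75)
```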
III. RESULTS
For the class-incremental continual learning benchmark, as the task number increases, task difficulty increases both because there are more valid classifier outputs and because there have been more iterations of training updates since early task images were provided to the network. An overview of testing accuracy across different components of tripartite artificial sleep (Fig. 2) highlights the degradation in model accuracy when no generative replay is utilized. The average testing accuracy, µ_N(i), is quantified for each training iteration, i, during the current task, N, as

$$\mu_N(i) = \frac{1}{N} \sum_{C=1}^{N} a_N^C(i),$$

where a_N^C(i) is the test accuracy for the current task, N, on the set of classes introduced during task C. Without generative replay, the accuracy on the current task is high, but there is a complete collapse of accuracy on previous tasks (Fig. 3). The average testing accuracy on the previous classes is measured as

$$\mu_{prev}(i) = \frac{1}{N-1} \sum_{C=1}^{N-1} a_N^C(i),$$

and correspondingly the average testing accuracy on the current task is measured as µ_current(i) = a_N^N(i). With generative replay, model accuracy is increased relative to applying only veridical replay; however, during the course of task training, the overall testing accuracy still decreases (Fig. 2). The decrease in overall accuracy during training occurs because even though the accuracy on classes introduced during the current task increases, the accuracy on classes introduced during previous tasks invariably decreases. With synaptic downscaling, one of the most striking observed trends is that for higher downscaling levels on later tasks, there is a prolonged period of higher replayed accuracy (Fig. 3). Overall, analysis of the full trajectory of testing accuracy during training demonstrates that generative replay is necessary to mitigate catastrophic forgetting, and that the level of downscaling affects the progression of maintained accuracy on previous tasks, such that, during later tasks, accuracy on previously learned classes is maintained for longer.
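These summary metrics reduce to simple averages over the vector of per-task accuracies; a minimal NumPy sketch, assuming the accuracies are stored as an array indexed by task:

```python
import numpy as np

def accuracy_metrics(acc):
    """acc[C-1] = a_N^C(i): test accuracy on the classes introduced in task C
    (C = 1..N), evaluated at iteration i of training on the current task N."""
    acc = np.asarray(acc, dtype=float)
    mu_overall = acc.mean()                                # mu_N(i)
    mu_prev = acc[:-1].mean() if acc.size > 1 else np.nan  # mu_prev(i)
    mu_current = acc[-1]                                   # mu_current(i)
    return mu_overall, mu_prev, mu_current

print(accuracy_metrics([0.42, 0.38, 0.71]))  # e.g., during task N = 3
```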
In all of the training variants with generative replay, after the first task there is a rapid increase in overall accuracy (Fig. 4) followed by a steady decline. These dynamics of testing accuracy over the course of training iterations change with the downscaling level (Fig. 4). Without any downscaling or with lower levels of downscaling (p ≤ 0.5), the trajectory looks similar. For later tasks, with increasing p, the accuracy continues to climb for longer, with a later peak in overall accuracy. With p = 0.75, there is a clear increase in overall accuracy at later tasks (Figs. 4-5). The maximum test accuracy during training of each task (the maximum of µ_N(i)) can be used as a metric to summarize overall performance. For early tasks, the maximum accuracy is relatively preserved; however, for later tasks, there is a trend for maximum accuracy to increase up to a downscaling level of p = 0.75 and decrease as downscaling levels increase further. The maximum accuracy levels per evaluated downscaling level can be understood in the context of the trade-off between current and replayed task accuracy. During training without downscaling, there is a rapid decay of replayed accuracy, which occurs before a sufficient increase in current task accuracy, leading to lower maximum accuracy levels. During training with downscaling p = 0.75, there is a prolonged maintenance of replayed accuracy that overlaps with an increase in current task accuracy, leading to a higher maximum accuracy level.
We can evaluate the properties of a continual learning system by further understanding how task performance is affected by the makeup of performance across previous tasks. A system that learns in a way that is balanced between all tasks would have uniform task accuracy across tasks, which we can measure during each training iteration as the KL divergence between the observed accuracy across tasks, normalized into a distribution, and a uniform distribution,

$$KL_N(i) = \sum_{C=1}^{N} p_N^C(i) \, \log\!\left( N \, p_N^C(i) \right), \qquad p_N^C(i) = \frac{a_N^C(i)}{\sum_{C'=1}^{N} a_N^{C'}(i)}.$$

Smaller KL_N(i) values signify a balance of accuracies. Generally, over the course of training epochs for a task, there is a rise in KL_N(i) as the current task accuracy dominates over the previous tasks (Fig. 6). In later tasks, however, there is a minimum KL_N(i) value (signifying balance between tasks) with downscaling at p = 0.75 (Fig. 7). Underscoring the importance of balance between tasks, the training iterations with the minimum KL_N(i) value overlap with the training epochs of the highest overall accuracy. Note that there is another trend of an instantaneous rise and drop of KL_N(i) right after downscaling, which can be attributed to the discontinuity that downscaling introduces. The balance of task accuracies can be further understood by investigating the relationship between recency and task balance (Fig. 8). Without downscaling, task accuracy is driven predominantly by the most recent tasks, with diminishing contributions from earlier tasks. Conversely, with extreme downscaling (p = 0.9), overall accuracy is driven by earlier tasks (at the expense of more recent tasks). At the intermediate value of p = 0.75, the task accuracy between previous tasks is balanced. Overall, we find that the level of downscaling affects the balance of accuracy between previous tasks, where no downscaling overrepresents later tasks, extreme downscaling overrepresents earlier tasks, and intermediate levels of downscaling maintain a more balanced level of accuracy between tasks.
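A minimal sketch of this balance metric, assuming (as the reconstruction above does) that the per-task accuracies are normalized into a probability vector before the divergence is taken:

```python
import numpy as np

def task_balance_kl(acc):
    """KL divergence between the normalized per-task accuracies and a uniform
    distribution; smaller values indicate more balanced performance.
    Assumes all accuracies are positive so the normalization is well defined."""
    p = np.asarray(acc, dtype=float)
    p = p / p.sum()
    return float(np.sum(p * np.log(p * len(p))))  # log(p / (1/N)) = log(N * p)

print(task_balance_kl([0.5, 0.5, 0.5]))  # 0.0: perfectly balanced
print(task_balance_kl([0.1, 0.1, 0.9]))  # > 0: dominated by the last task
```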
To enable a system to have higher overall accuracy, forgetting of previous tasks needs to be limited. We can quantify the forgetting of a set of classes introduced in task C during training as

$$f_C^m(i) = a_{C+m}^C(i),$$

where m is the number of additional tasks that have been introduced after C (Fig. 9). When m = 0, f_C^m signifies the accuracy while the set of classes, C, is first being introduced, where an overall increase in task accuracy is observed. During training on subsequent tasks, a decline in accuracy is observed. A striking trend, however, is that with downscaling there is a pronounced recovery in previous task accuracy, with accuracies on initially trained tasks capable of reaching over 60% (Fig. 10). The level of downscaling affects this recovery, with larger amounts of downscaling resulting in larger recovery of accuracies. The relationship between downscaling and forgetting bolsters what has been seen in the task balance and cumulative accuracy analyses, where downscaling diminishes forgetting of earlier tasks. Even though the most extreme levels of downscaling provide the most protection of performance on earlier tasks, there is still a trade-off, where the protection of earlier tasks competes with the ability to learn newer tasks. Thus high, but not extreme, downscaling may be preferable in practice.

Fig. 9. As the amount of total training iterations after a class is first introduced increases, the task class accuracy diminishes (forgetting occurs); however, there is a period of recovered accuracy, which is higher with more synaptic downscaling. Solid lines are the average across all sets of task classes with bootstrapped 95% confidence intervals. Note that fewer total iterations have been trained for task classes introduced in later tasks, so fewer accuracies are included in the task-average accuracies as the number of iterations trained increases.
To further understand the mechanistic relationship between downscaling and system performance, we investigate how the network's distribution of weights changes over the course of training (Fig. 11). Generally, the overall weight distribution widens over the course of training on all tasks. After downscaling, there is a bi-modal distribution of weights between the preserved and the downscaled weights, which becomes more uni-modal over the course of training and weight updates. For later tasks, the peak in overall accuracy corresponds to the training periods where the weight distribution is more bi-modal. The amount of downscaling utilized also affects the overall distribution of network weights. With more downscaling, there is a broadening of the overall weight distribution with a more pronounced bi-modal structure. The trends in weight distributions under downscaling in artificial networks could be compared to the distributions observed in biology to better constrain future continual learning systems.

Fig. 11. Distribution of positive network weights evolves over the course of training across tasks with synaptic downscaling. Top) Network weights in the first encoding layer with downscaling fraction p = 0.75 broaden over the course of training. Bottom) During early training iterations of the above, the bi-modal structure of network weights after downscaling diminishes. In later tasks, the test accuracy decreases as the network weights approach a less bi-modal distribution.
IV. DISCUSSION
In this work, we investigated the impact of a tripartite artificial sleep model in the training of an artificial neural network performing a continual learning task. This entailed three processes created to capture their respective beneficial properties observed in mammalian sleep: 1) NREM veridical replay, 2) REM generative replay, and 3) synaptic downscaling. We found that the addition of synaptic downscaling complements replay by enhancing continual learning and mitigating catastrophic forgetting. Specifically, during training, the addition of synaptic downscaling was found to enhance performance on earlier tasks, increase the balance of accuracy between previous tasks and recent tasks, and achieve the highest overall accuracy. Furthermore, while early task accuracy diminishes over the course of network training, the inclusion of synaptic downscaling increases the recovery of early task accuracy during subsequent training.
Our findings, specifically that the addition of this third component (synaptic downscaling) improves continual learning, lend themselves to several interpretations. One potential mechanism behind the overall increased performance is that downscaling implements model compression, similar to magnitude-based pruning [27], such that downscaled weights can be preferentially utilized on subsequent tasks. Unlike pruning for model compression applied to a single task, the downscaling investigated here is performed repeatedly by the same amount after each task and interleaved with generative replay, which makes the interplay of downscaling in continual learning hard to generalize from previous pruning-based model compression studies.
Our computational experiments identified a tradeoff with the level of downscaling: the maximum early task accuracy is increasingly protected with more downscaling, but more downscaling can degrade performance on more recent tasks. An observable effect of downscaling on model performance started at levels greater than 50%. At downscaling levels greater than 90%, performance on recently learned tasks began to be severely degraded. For the intermediate downscaling level of 75%, the prolonged protection of earlier tasks coincides with the rise of current task accuracy, yielding the highest overall accuracy during training.
This highest overall accuracy is observed early in the training iterations for an individual task. The decrease in average accuracy as training continues during a task can be attributed to the tradeoff in optimizing performance on current vs. replayed tasks. Thus, for a deployed system to have the highest continual learning accuracy, training would need to be stopped early for the current task. Note, however, that this higher accuracy cannot be achieved simply by training all tasks for fewer iterations, because the current set of classes learned during a task needs to be trained over the full course of iterations to enable future accurate identification. Interestingly, even though performance on a task drops over training, it increases again (is recovered) (see Fig. 9), which is more pronounced with higher levels of downscaling. This suggests that with higher levels of downscaling, network weights are in a configuration that enables recovery of early task accuracy, even when early task accuracy is diminished. When investigating weight distributions during this re-learning / higher-accuracy period, we find that balanced accuracy starts to diminish as the weights become less bi-modal. Overall, the interplay of integrating synaptic downscaling with generative replay shows how high (but not extreme) levels of downscaling can be beneficial for continual learning.
While the overall accuracy increased during training with the modeled tripartite artificial sleep, there are ways that the current approach can be extended in conjunction with other approaches for continual learning. In this work, the synaptic downscaling is used in conjunction with aspects of brain-inspired generative replay [14]; however, there are other generative approaches [4] which could be explored in tandem with synaptic downscaling. Perhaps the work most similar to the inclusion of synaptic downscaling for continual learning is that on pruning-based approaches [28], which prune neurons based on activation level from earlier tasks in order to compress the current task's model and then iteratively train, progressively utilizing more of the network's capacity with additional tasks. Related to pruning-based approaches, which protect a subset of weights completely, are weight regularization approaches, which in effect offer varying levels of "protection" to the weights in a network [2], [3]. With weight regularization approaches, certain weights are protected from subsequent change based on a calculated importance metric, as opposed to the protection of weights based on their magnitude implemented here during synaptic downscaling. Additionally, with many weight regularization approaches, once a synapse has importance attributed to it for a certain task, that importance will never decrease. While this is helpful for continual learning to prevent catastrophic forgetting and has been shown to increase accuracy when combined with generative (brain-inspired) replay [14], protecting changes to a network at the individual weight level may make the network less likely to reconfigure / re-consolidate larger configurations of weights. It is interesting that, here, even with pruning 90% of the weights for each task (and without explicitly calculating an importance), early task performance can be protected quite well. Our results also suggest testable hypotheses in cognitive neuroscience. For example, in subjects who learn different memory tasks on each of several days, our model predicts that synaptic downscaling measurements and interventions may correlate with the retention of early learning. This would of course necessitate tools for the reliable noninvasive measurement of downscaling.
In addition to extensions of existing continual learning approaches, there is an opportunity to guide neuroscience investigations based on the trends observed here and to build continual learning models that capture even more aspects of sleep. In particular, there are several outstanding questions around synaptic downscaling during sleep which could be investigated further. First of all, a better quantification of size-dependent downscaling during sleep in future experimental studies could help parameterize more detailed models of synaptic downscaling. We performed a magnitude-based zeroing of weights as a first-order approximation of size-dependent downscaling that could be extended in further analysis. Furthermore, the investigation of how synaptic downscaling changes between neural regions (e.g., hippocampal, cortical sensory, cortical associational) could be integrated into future models as well. We implemented the same amount of downscaling throughout all encoder, decoder, and classifier output layers, which could be augmented in future models.
Our implementation of NREM veridical replay could also be augmented in the future. For simplicity, we replayed intermediate layer representations of the current task's image inputs during model training. This NREM veridical replay mechanism could instead more closely model hippocampal processes, incorporating an additional veridical generator component.
In conclusion, there is a rich panoply of benefits and possible algorithmic extensions suggested by the inclusion of multiple sleep processes (here, three) in the construction of artificial neural networks and intelligent systems. Much of this will benefit from adopting a perspective that includes not only the overt behavioral neuroscience of wakefulness, but also the roughly one-third of our lives spent processing information during sleep.
Understanding meaningful work in the context of technostress, COVID-19, frustration, and corporate social responsibility
COVID-19 and digitalization represent important sources of many employees’ frustrations. In this article, we address the question of how employees can achieve meaningful work in such a challenging and frustrating context. Specifically, we investigate whether employees’ negative experiences related to technology use—that is, techno-invasion—leads to frustration and in turn reduces employee perceptions of meaningful work. In addition, we examine corporate social responsibility as a potential remedy that could mitigate these negative effects. The results of our four-wave longitudinal study of 198 working professionals collected during the first wave of the COVID-19 pandemic did not find support for a proposed negative direct effect of techno-invasion on meaningful work. However, we did find support that perceived corporate social responsibility moderates the indirect relationship between techno-invasion and meaningful work, mediated by frustration: for low levels of corporate social responsibility, techno-invasion results in higher levels of frustration, in turn reducing meaningful work. High levels of corporate social responsibility buffer this negative indirect effect. Implications for research and practice dealing with digitalization, meaningful work, and corporate social responsibility are discussed.
Introduction
Many philosophers since ancient Greece have contemplated the meaning of life, a debate that is gaining new momentum in a modern society characterized by an abundance of opportunities (Guinness, 2018). As most adults spend the majority of their waking hours at work, contemporary discourse on a meaningful life increasingly emphasizes the importance of meaningful work. It denotes work that is significant, worthwhile, and has positive meaning and purpose for the individual (Lysova et al., 2019; Rosso et al., 2010; Smids et al., 2020).
Research suggests that meaningful work is related to many positive organizationally relevant outcomes, such as work engagement, job satisfaction, commitment, withdrawal intentions, and self-rated job performance (see Allan et al., 2019 for meta-analytic evidence). Given these benefits to employees and their organizations, scholars share a strong interest in understanding the factors that promote meaningful work (Michaelson et al., 2014). Despite many valuable efforts conducted to examine individual, job, organizational, and societal factors of meaningful work, understanding how these factors are interrelated and how organizations can thus promote the meaningful work experience for their employees remains limited (Lysova et al., 2019). Moreover, meaningful work is created through a highly social and contextualized process of conditions and constraints (Wrzesniewski et al., 2003), and the existing literature points to the need to better understand how the broader political, social, and institutional context shapes meaningful work (Bailey et al., 2019).
In recent years, the digitalization context has significantly shaped employees' meaning-making process at work (i.e., the process through which the meaning of work is created or destroyed). Although digitalization has greatly influenced the value system (Nikitenko, 2019) and has brought many important advances to work and everyday life, the impact of current trends and developments in the digitalization of work on employees' meaningful work is still under-researched (Symon and Whiting, 2019). The few studies that have addressed this topic (e.g., Smids et al., 2020; Lent, 2018) have theorized on how digitalization and meaningful work are related; however, they have not empirically tested the proposed assumptions, or delved into employees' specific technology-related experiences, mechanisms, and boundary conditions that enable meaningful work to occur despite digitally constraining phenomena. In addition, the recent COVID-19 pandemic has magnified digitalization trends (see Molino et al., 2020) and brought new challenges, which have greatly influenced employees' perceptions of meaningful work.
We build on Lips-Wiersma and Morris's (2009) holistic model of meaningful work and the organizational frustration model (Fox and Spector, 1999; Spector, 1978) to explain how the context of digitalization and COVID-19 has shaped the meaning-making process about work. The primary tenet of the organizational frustration model is that there is a relationship between the "sources of frustration in organizations, and effects on organizations through the reactions of individuals" (Spector, 1978: 818). In the current study, we propose technostress, defined as "any negative impact on attitudes, thoughts, behaviors, or body psychology caused directly or indirectly by technology" (Weil and Rosen, 1997: 5), focusing specifically on the techno-invasion dimension, as a novel source of frustration in organizations that the COVID-19 context has made even more prevalent. Consistent with the organizational frustration model, we argue that techno-invasion elicits emotional responses of frustration, a key concept around which our research model theoretically revolves, and that experienced frustration influences employees' perceptions of meaningful work. We draw on Lips-Wiersma and Morris's (2009) holistic model of meaningful work, which emphasizes that seeking a balance between the needs of self and others is inherent in the meaning-making process, to propose a novel mechanism for mitigating the negative effects of frustration on meaningful work. Specifically, we argue that perceived corporate social responsibility (CSR) can be considered a source of meaning derived from contributing to others that encourages employees to meet their own needs as well, thereby influencing the relationship between technostress-induced frustration and employees' perceptions of meaningful work.
This study offers several contributions to research on meaningful work and organizational frustration. A recent review has shown that while perceptions of meaningful work also depend on the overall social context, our understanding of how individual, organizational, and societal factors interact to facilitate meaningful work remains limited (Lysova et al., 2019). Therefore, our first contribution is aimed at advancing the growing body of literature on meaningful work by conceptualizing a comprehensive theoretical framework that explains how employees acknowledge meaningful work in circumstances that offer limited opportunities for meaning (e.g., in the context of the COVID-19 crisis). By focusing on individual-level phenomena arising from the digital context, we respond to calls for more research that not only considers the factors that promote or hinder meaningful work, but also advances this line of inquiry into how the broader context shapes meaningful work (Bailey et al., 2019; Lysova et al., 2019; Mitra and Buzzanell, 2017). As digitalization is continuously changing the nature of work and there are no indicators of a trend in the opposite direction, digitalization and its potential downsides, which the COVID-19 pandemic further exacerbated, are also significantly shaping the future of work through their impact on meaningful work. By examining the relationship between techno-invasion, employee frustration, and meaningful work, we respond to the call to explore the negative aspects (i.e., the "dark side") of employees' digitalization experiences (Wood et al., 2019) and extend the existing literature on meaningful work by empirically investigating, for the first time, the influence of novel individual-level negative phenomena (i.e., techno-invasion and frustration) and feelings on meaningful work in a digital and challenging context. Furthermore, by examining the moderating role of CSR in the indirect relationship between techno-invasion and meaningful work, mediated by frustration, we complement current research on meaningful work and organizational frustration by highlighting the critical importance of CSR as a source of meaning that counterbalances the source of frustration and the emotional response, thereby promoting meaningful work. Perceptions of CSR have the potential to mitigate the negative effects of the frustration that techno-invasion causes, thereby facilitating meaningful work in adverse and stressful situations. By demonstrating that CSR promotes meaningful work under specific conditions arising from the digital and crisis context, we address the call to explore the conditions under which CSR can simultaneously lead to win-win outcomes in terms of business value and employee well-being (Aguinis and Glavas, 2019), advancing the discourse on how CSR reduces the feeling of frustration and thereby increases the possibility that employees find their work meaningful.
Second, we aim to contribute to the literature on organizational frustration by exploring how situational events arising from digitalization and the COVID-19 context affect the three elements of the organizational frustration model. In doing so, we integrate the holistic model of meaningful work with the organizational frustration model into a comprehensive framework. This juxtaposition allowed us to theorize about and explore the negative influence of the source of frustration and emotional responses on the perception of meaningful work, as well as the mitigating role of serving others. Building on the idea that situational factors are related to a specific source of frustration (Bessière et al., 2006), we propose a novel source of frustration (i.e., techno-invasion) that elicits a frustration emotional response (i.e., frustration), leading to a novel frustration outcome (i.e., meaningful work), as the Fox-Spector model of organizational frustration predicts. We further propose that CSR positively influences the relationship between the frustration emotional response and the outcome. In contrast to previous research that has examined how meaningful work can mitigate the negative effects of frustration (e.g., Ugwu and Onyishi, 2018), we aim to advance the existing literature on organizational frustration by providing an integrative theoretical framework that explains how specific events and perceptions affect the elements of the organizational frustration model and thereby influence meaningful work. Finally, a potential contextual and empirical contribution can be seen in testing the proposed model using a four-wave longitudinal study conducted during the first wave of the COVID-19 pandemic. Thus, we advance research on the experience of meaningful work under these unique and stressful conditions. The remainder of this article is structured as follows. We first present the theoretical background and hypotheses development section, in which we state the logic behind our research model and conceptualize the hypotheses. An empirical section follows, in which we present the methods and results of our longitudinal analyses. We conclude by discussing the theoretical contributions, practical implications, and limitations with future directions stemming from our findings.
Theoretical background and hypotheses development
Existing research indicates that stressful events can violate an individual's perception of the meaning of work and initiate the meaning-making process (Park, 2010; Park and George, 2013). We argue that stressful events arising from the context shaped by digitalization and COVID-19 can cause a crisis of meaning (i.e., evaluating life as frustratingly empty and lacking meaning). This study is grounded in the organizational frustration model (Fox and Spector, 1999; Spector, 1978) and the holistic model of meaningful work (Lips-Wiersma and Morris, 2009) to explain how the digitalization and COVID-19 context influenced employees' perceptions of meaningful work and what mechanisms cultivate the feeling of meaningful work in stressful conditions. The integrated theoretical framework presenting the research logic behind our model is displayed in Figure 1. The organizational frustration model (Fox and Spector, 1999; Spector, 1978) specifies the relationships among sources of frustration, their effects on employees' emotional reactions, and frustration outcomes. Spector (1978) highlights a number of potential sources of frustration, including the frustrating nature of the work itself and conditions arising from the work context. Technological advances and the frustrating context of the COVID-19 pandemic (see Bessière et al., 2006) over time expose employees to information overload, frequent interruptions, multitasking (Galanti et al., 2021; Tarafdar et al., 2010), fear of the unknown, and increased stress from health risks. Consistent with the organizational frustration model, we argue that in such circumstances, employees are more likely to experience a sense of frustration, a negative emotional response resulting from obstacles or interruptions (Fox and Spector, 1999), as a result of techno-invasion, one of the causative agents of technostress. Namely, the feeling of frustration occurs when there is an inhibiting condition (such as those the COVID-19 pandemic posed) that obstructs realizing a goal (Lazar et al., 2006). We further argue that the frustration resulting from techno-invasion limits employees' ability to express their talent and creativity and to have a sense of achievement. In other words, such frustration may lead to diminished perceptions of meaningful work (i.e., a negative influence on the "expressing full potential" dimension of meaningful work). To mitigate the negative impact of frustration on meaningful work, employees need to make sense of this particular event or occurrence (Park and George, 2013) and set a goal to make their work meaningful.

Figure 1. The integrated model explaining the logic behind our research, based on the juxtaposition of Lips-Wiersma and Morris's holistic model of meaningful work (2009) and the Fox-Spector model of organizational frustration (Fox and Spector, 1999; Spector, 1978). Definitions of core constructs: (1) Meaningful work: work that is experienced as particularly significant and has a more positive meaning for the individual (Rosso et al., 2010), or more broadly, "work that is personally significant and worthwhile" (Lysova et al., 2019: 375). (2) Techno-invasion: the technostress dimension referring to being constantly connected, with technology invading the employee's personal life (Tarafdar et al., 2007). (3) Frustration: the feeling of being upset or annoyed as a result of being unable to change or achieve something (Boyd, 1982). (4) Perceived corporate social responsibility: the degree to which employees perceive their employer's support of CSR-related activities (Choi and Yu, 2014).
To achieve the goal, employees must find proper sources of meaning that motivate their engagement and agency toward the goal (Schnell, 2009). Lips-Wiersma and Morris's holistic model of meaningful work (2009) proposes four such sources of meaning (i.e., "developing and becoming self," "unity with others," "serving others," and "expressing self"). Building on these, we argue that frustrated employees are more likely to achieve their goal of experiencing higher levels of meaningful work if they recognize that their work makes a difference and meets the needs of others. Specifically, consistent with existing studies showing that CSR contributes to meaningful work as a source of meaning (Bauman and Skitka, 2012; Glavas and Kelley, 2014), we argue that CSR perception moderates the relationship between frustration (i.e., the frustration emotional response), caused by techno-invasion (i.e., the frustration source), and meaningful work (i.e., the frustration outcome). CSR extends the notion of work beyond one's workplace and organization, beyond an exclusively profit-oriented perspective, and thus serves as an ideal channel for frustrated employees to find meaning through work (Aguinis and Glavas, 2019).
Techno-invasion and meaningful work
Work fulfills our need for survival, relatedness, self-development, and self-efficacy (Blustein, 2008), and as such occupies a central position in the human search for meaning by serving as a primary source of purpose, belongingness, and identity (Michaelson et al., 2014; Rosso et al., 2010). Thus, meaningful work has become a topic of interest for many scholars and practitioners (Bailey et al., 2019) in various disciplines, including philosophy, ethics, organizational studies, economics, and sociology, leading to the development of various definitions of meaningful work and approaches to its study (Lysova et al., 2019). Early conceptualizations of meaningful work were unidimensional, emphasizing employees' perceptions that their work is worthwhile, important, or valuable (Pratt and Ashforth, 2003). Allan and colleagues' (2019) recent meta-analysis shows that some scholars have maintained this conceptualization while others (e.g., Lips-Wiersma and Wright, 2012; Rosso et al., 2010) have developed multidimensional conceptualizations that bring together aspects of the self (e.g., self-actualization and personal growth) with aspects of orientation toward others (e.g., helping others and contributing to the greater good). Ciulla (2000) argues that meaningful work has an "objective" dimension (i.e., working conditions) and a "subjective" dimension (i.e., employee perceptions). While researchers in business ethics have explored the common element that all work and workplaces should have to facilitate meaningful work (i.e., emphasizing the objective dimension of meaningful work), scholars in organizational studies have focused their attention on examining what makes a certain task or job meaningful to a particular employee in a specific workplace (i.e., emphasizing the subjective dimension of meaningful work; Michaelson et al., 2014). However, some scholars argue that meaningful work is not associated only with specific tasks, but must also be interpreted and constructed in circumstances that may offer impoverished opportunities for meaning (Bailey et al., 2019). Contemporary workplaces increasingly encompass a strong digital dimension, which importantly shapes and constrains employee perceptions, responses, and behaviors, including those in the quest of meaning-making, and ultimately has the potential to impact salient individual and organizational outcomes.
Despite technology's generally positive consequences, digitalization does not always necessarily produce positive outcomes (Wood et al., 2019). Indeed, the autonomy and flexibility that come with digital work may be attractive and could help in the quest of meaning-making, but research shows that freedom and choice come with negative consequences, such as work overload and distress (Butler and Stoyanova Russell, 2018). Indicators reveal that in times of digitalization (Rutkowski and Saunders, 2018; Turel et al., 2011), life satisfaction, happiness, and interpersonal trust are declining while people are working more than ever before (Eurofound & ILO, 2017). The COVID-19 emergency has further exacerbated this (DeFilippis et al., 2020), with digitalization and technology encroaching more upon individuals at work and beyond.
Technostress encompasses the stress employees experience as a result of the potential of information technology, with the techno-invasion dimension referring to being constantly connected and thereby having one's personal life invaded (Tarafdar et al., 2007). The COVID-19 context further facilitates the frustration response that occurs when the encroaching digitalization and its intrusion into individuals' work and lives, that is, techno-invasion (Tarafdar and Stich, 2018), is too severe. Individuals who are overly encroached upon by IT may feel burdened by technology and have difficulty coping with these digital demands. Digitalization has encouraged the so-called "always on" workplace culture, characterized by 24/7 access to information and connectedness. Receiving, checking, and responding to work-related emails, calls, and other messages many times during the day, and often after office hours, has become routine for many employees, and this has been further aggravated during the COVID-19 emergency. Such techno-invasion has been shown to paradoxically decrease work productivity (Turel and Serenko, 2010). Moreover, it increases work-life imbalance (Derks et al., 2015) and leads to various health problems, such as addiction, anxiety, insomnia, and stress (Jenaro et al., 2007), as well as distraction of focus (Rosen et al., 2013).
In line with these arguments and Lips-Wiersma and Morris's (2009) holistic model, we argue that employees experiencing a techno-invasion may perceive and recognize their work as less meaningful in terms of fulfilling their potential for three distinct but interrelated reasons. First, employees who are highly techno-invaded are likely to be scattered across many different work activities, with plenty of task switching and a perception that their work does not constitute a coherent whole (Durward and Blohm, 2017). In this case, employees are more likely to receive distractions (e.g., additional tasks and requests, formal and informal communication) that require their responses and prevent them from focusing on a coherent task (Rosen et al., 2013).
Second, techno-invasion can result in technology invading not only an individual's professional life, but also their personal life. Techno-invasion likely causes individuals to spend additional time dealing with the technology, with additional tasks and issues stemming from it. This can lead to work further invading their lives, resulting in a reduced work-life balance (Raišienė and Jonušauskas, 2013). Indeed, meaningful work is strongly based on how individuals are able to achieve a balance between their work and non-work lives. Munn (2013) empirically demonstrated that work-life balance increases employees' perceptions of meaningful work. In contrast, the study showed that when work-life conflict increases (i.e., when work and family/life interfere with each other and employees feel that they are inadequately fulfilling one or both of their roles), employees tend to find less meaning in their work.
Third, techno-invasion likely generates negative perceptions about work. Individuals who are heavily technologically invaded tend to equate their work with the use of technology, leading to negative perceptions about their work and low job satisfaction (Suh and Lee, 2017). Negative associations that individuals develop about their work are in turn likely to diminish their perceptions of how meaningful their work is (Rothausen and Henderson, 2019). Thus, we propose: H1: Techno-invasion negatively affects meaningful work.
The mediating role of frustration in the relationship between techno-invasion and meaningful work
Technological invasion increases digital workers' information technology overload, leading them to feel overwhelmed and unable to cope with all the demands and invasions that digitalization places on them (Shu et al., 2011). Consistent with the Fox-Spector model of organizational frustration, we argue that this likely leads to feeling frustrated. The model of organizational frustration, which builds on the general model of frustration, specifies various sources of either mild or severe frustration (Britt and Janus, 1940; Spector, 1978; Stäcker, 1977), and such a technological invasion may accordingly act as one of these important frustration sources. Existing research shows that working faster and for longer hours, as well as being in an "always on" work culture, manifested in the constant monitoring of work-related information via digital means (e.g., email and social media), causes anxiety, insomnia, and inefficiency (Derks et al., 2015; Salanova et al., 2010). Individuals may become frustrated owing to digital encroachment, working on many different tasks and being constantly thrown off their work by additional incoming information. As a result, they frequently switch among tasks, losing valuable time and becoming even more frustrated. Furthermore, the fact that techno-invasion throws individuals out of their work-life balance and encroaches not only on their professional lives, but also on their personal lives, likely further contributes to their emotional response of frustration.
In a digitally invasive setting, work has been shown to be fragmented, with lowered perceived significance (i.e., not seeing the positive impact of one's work on others; Nemkova et al., 2019). This fragmentation and its resulting frustration can in turn undermine experienced meaningful work (Nemkova et al., 2019; Sanchez et al., 2015). Meaningful work does not reflect a stable state (Bailey and Madden, 2017). Rather, individuals have many episodic experiences at work that are meaningful or meaningless, which they integrate into a belief system about how meaningful their work is overall. Techno-invasion-induced feelings of frustration might result in employees perceiving their work as meaningless or even worthless (May et al., 2004). Therefore, we propose: H2: Frustration mediates the relationship between techno-invasion and meaningful work.
The moderating role of corporate social responsibility

CSR is broadly defined as "context-specific organizational actions and policies that take into account stakeholders' expectations and the triple bottom line of economic, social, and environmental performance" (Aguinis, 2011: 858). Recent CSR research has highlighted the importance of examining the micro-level perspective of CSR (El Akremi et al., 2018; Jones et al., 2019; Rupp and Mallory, 2015). Micro-level CSR is defined as "the study of the effects and experiences of CSR (however it is defined) on individuals (in any stakeholder group) as examined at the individual level of analysis" (Rupp and Mallory, 2015: 216). Because employees are the ones who plan, advocate, participate in, and witness CSR, scholars have begun to investigate how CSR affects employee attitudes and behaviors (Jones et al., 2017; Rupp and Mallory, 2015). Perceived CSR refers to the degree to which employees perceive their employer's support of CSR-related activities (Choi and Yu, 2014). Because CSR-related activities are defined as a long-term and stable corporate policy in line with stakeholders' values and resulting expectations (Žukauskas et al., 2018), we propose that CSR perceptions remain relatively stable over time.
Recent literature reviews focusing on the micro-level CSR literature have revealed that employee perceptions of CSR are associated with a number of positive consequences, including increased employee engagement, organizational citizenship behaviors, improved employee relations, and job satisfaction (see Rupp and Mallory, 2015). However, our understanding of the relationship between CSR and employee outcomes remains limited; thus, further research is needed to answer the questions of why, how, and when CSR has an effect on employees (Glavas, 2016).
Building on Lips-Wiersma and Morris's (2009) holistic model of meaningful work and existing research suggesting that employees' perceptions of CSR can facilitate meaningful work (Michaelson et al., 2014), we argue that the perception of CSR can serve as a counterbalance to frustration sources and thus reduce the negative impact on meaningful work of the frustration that techno-invasion causes. Rosso and colleagues (2010) argue that one way employees find meaning is by contributing to the common good through CSR. This extends the notion of work beyond one's job and organization, and beyond an exclusively profit-oriented perspective, thus providing an ideal channel for individuals to counterbalance frustration sources and find meaning in their work (Aguinis and Glavas, 2019). Lysova and colleagues' (2019) recent multilevel review has shown that CSR contributes to meaningful work because (a) it signals that organizations have an ethical approach toward their stakeholders, which makes employees perceive and feel a sense of pride in and identification with the organization (e.g., Glavas and Kelley, 2014); and (b) by making employees feel they are part of an effort that helps improve others' well-being, CSR satisfies employees' need for a meaningful existence (e.g., Bauman and Skitka, 2012).
In organizations where CSR is integrated into the organization's strategy, routines, and operations, employees are more likely to experience meaningfulness in work, which arises from their own work role, and at work, which arises from being a part of something bigger (Aguinis and Glavas, 2019; Pratt and Ashforth, 2003). Pratt and Ashforth (2003) further argue that CSR practices, such as promoting the organization's goals, values, and beliefs and changing the nature of the relationships between members, can foster meaningful work. CSR can be particularly beneficial when used as a means for employees to bring meaning and their whole selves to work (Glavas and Kelley, 2014), and can provide an opportunity to re-engage individuals facing work fatigue, boredom, or even career stagnation (Aguinis and Glavas, 2019). Further, we could expect the same to be the case when frustration sources accumulating over time lead employees to an emotional response of frustration.
Therefore, we argue that employees who perceive high levels of CSR believe they are part of something bigger and can make a significant contribution to others, thereby perceiving their work as more meaningful even when they are annoyed and irritated (i.e., frustrated). CSR emphasizes the importance of an employee's actions beyond the specific task, job, and organization, and can therefore help employees come to understand that their potential dissatisfaction and disappointment (i.e., frustration) serves a bigger cause, thereby mitigating frustration's negative effects on meaningful work. We therefore propose: H3: CSR perception moderates the second stage of the indirect relationship between techno-invasion and meaningful work via frustration in such a way that this relationship is less negative for individuals with a higher CSR perception compared with individuals with a lower CSR perception.
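To make the hypothesized second-stage moderated mediation concrete, the sketch below estimates it on simulated stand-in data with ordinary least squares; the variable names, effect sizes, and simple two-regression approach are illustrative assumptions, not the study's actual analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data shaped like the hypothesized model.
rng = np.random.default_rng(0)
n = 198
df = pd.DataFrame({"invasion": rng.normal(size=n), "csr": rng.normal(size=n)})
df["frustration"] = 0.4 * df["invasion"] + rng.normal(size=n)
df["meaningful"] = (-0.3 * df["frustration"]
                    + 0.25 * df["frustration"] * df["csr"] + rng.normal(size=n))

# Stage 1: source of frustration -> emotional response (path a).
m1 = smf.ols("frustration ~ invasion", df).fit()
# Stage 2: response -> outcome, moderated by CSR (paths b1 and b3).
m2 = smf.ols("meaningful ~ frustration * csr + invasion", df).fit()

a = m1.params["invasion"]
b1 = m2.params["frustration"]
b3 = m2.params["frustration:csr"]

# Conditional indirect effects at +/- 1 SD of CSR, and the index of
# moderated mediation.
for lvl in (-1, 1):
    print(f"indirect effect at CSR = {lvl:+d} SD: {a * (b1 + b3 * lvl):.3f}")
print(f"index of moderated mediation (a*b3): {a * b3:.3f}")
```

A negative indirect effect at low CSR that attenuates toward zero at high CSR, together with a positive index of moderated mediation, would be the pattern H3 predicts.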
Sample and procedure
Through an agency specializing in data collection on work-related phenomena, we collected data from 198 working professionals across different industries with a four-wave longitudinal online survey. To match responses across time waves while ensuring participants' anonymity, individual identification numbers were assigned. The data were collected before, during, and after the first wave of the officially declared COVID-19 pandemic in Slovenia. To describe the data collection context, Table 1 briefly summarizes the COVID-19-related situation in Slovenia in 2020 during each wave of data collection. The full sample of participants who started the first wave was 200; however, only 198 responded to all four waves of data collection, with two dropping out after the first wave. We have thus used only complete respondents' data for our analyses.
The sample consisted of 46% of respondents working in public companies, 50% in private companies, and the remainder in joint ventures. Of the respondents, 10.6% were working in micro-companies with up to nine employees, 19.7% in small companies with up to 49 employees, 24.7% in medium-sized companies with up to 249 employees, and 44.9% in large companies with 250 employees or more. Respondents operated mainly in the following industries: education, culture, and sport (13.1%); administration (12.6%); production (12.6%); health (9.6%); and sales (8.1%). Respondents were, on average, 46 years old, had 22 years of work experience, 49.5% were female, and, on average, they had 0.6 children. Among the respondents, 42.4% had a high school diploma, and 55% had at least an undergraduate diploma. In all, 30% performed managerial duties, and respondents worked, on average, 41.8 hours per week.
Measures
All the focal variables were self-reported and all, except CSR, were measured in all four waves. We assumed that CSR perceptions are stable and would not change rapidly over the short term; thus, we measured CSR only in the first wave.
Techno-invasion was assessed with three items from Shu et al.'s (2011) scale that measures the technostress caused by technology invading one's personal life. A five-point Likert scale was used with the anchors "5 = strongly agree" and "1 = strongly disagree." Representative items include: "I have to be in touch with my work even during my vacation due to this technology," and "I feel my personal life is being invaded by this technology" (α t1 = .87, α t2 = .86, α t3 = .86, α t4 = .89, α cumulative = .87).
Frustration was measured with the following item from Peters et al.'s (1980) scale: "Overall, I experienced very little frustration at work" (reverse scored). The responses ranged from "1 = strongly disagree" to "5 = strongly agree." Meaningful work was measured with three items from Lips-Wiersma and Wright's (2012) scale that represent the expressing full potential dimension of meaningful work. A five-point Likert scale was used with the anchors "5 = never" and "1 = very." Representative items include: "I make a difference that matters to others," and "I am excited by the available opportunities for me" (α t1 = .78, α t2 = .86, α t3 = .80, α t4 = .84, α cumulative = .82). CSR perception was measured with the scale Glavas and Kelley (2014) proposed. The scale covers an organization's social and environmental responsibilities. Examples of items include: "Contributing to the well-being of the community is a high priority at my organization," and "My organization achieves its goals while staying focused on its impact on the environment." The responses ranged from "1 = strongly disagree" to "5 = strongly agree" (α = .84).
Gender and age were measured in the first wave and incorporated in the model as individual-level control variables.

Descriptive statistics

Table 2 shows means, standard deviations, correlations, and reliability coefficients for the key study variables. Based on Cronbach's alpha coefficients, all measurement scales were internally consistent; they all exceeded the 0.70 criterion established in the literature (Hair et al., 1998). We first examined the factor structure of the focal variables by conducting a multilevel confirmatory factor analysis (MCFA) using Mplus 8.3 software (Muthén and Muthén, 1998-2012). The expected four-factor solution (techno-invasion, frustration, meaningful work, and perceived CSR) displayed adequate fit with the data (χ2(61) = 128.362, p < .01, Comparative Fit Index (CFI) = .96, Tucker-Lewis Index (TLI) = .94, Root Mean Square Error of Approximation (RMSEA) = .04, Standardized Root Mean Square Residual (SRMR) within = .03, SRMR between = .05). The standardized factor loadings ranged from .64 to .78 for the techno-invasion items, from .52 to .59 for the meaningful work items, and from .56 to .90 for the perceived CSR items.
As we have time-varying variables (techno-invasion, meaningful work, and frustration), we also checked for measurement invariance, to establish that participants interpret the individual questions, as well as the underlying latent factors, in the same way across time points. Multiple CFAs were conducted for the time-varying constructs of our model. We first tested metric invariance: the factor variance and mean were fixed to 1 and 0, respectively, and the constraint on the first item of each factor was released so that the factor loadings and intercepts could be compared across groups (van de Schoot et al., 2012). The chi-square difference test (Δχ2 = 12.53; p = 0.40) indicated invariance between groups; this was reinforced by the CFI difference, which was less than or equal to 0.01 (ΔCFI = 0.00) (Putnick and Bornstein, 2016). Next, we checked for scalar invariance. The factor mean and variance were fixed to 0 and 1, respectively, and all residual variances were permitted to differ across time (van de Schoot et al., 2012). Compared with the metric invariance model (Δχ2 = 29.72; p = 0.01), the scalar model did not hold; we thus have scalar noninvariance. This suggests that the factor intercepts for techno-invasion, frustration, and meaningful work are noninvariant (i.e., observed scores shift between time points), but that this shift is not attributable to change over time in the focal constructs themselves. Following the suggestions of Putnick and Bornstein (2016), we tried to identify the reasons behind this noninvariance by constraining the intercepts to be equal across time points for each factor separately. Techno-invasion had a significant chi-square, suggesting noninvariance; meaningful work was invariant; and frustration was also noninvariant. As scalar invariance was not supported, we did not check further steps, such as residual invariance, correlations, or means (Putnick and Bornstein, 2016; van de Schoot et al., 2012).
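For readers who want to reproduce the nested-model comparison logic described above, the sketch below (Python) implements the chi-square difference test and the ΔCFI heuristic. The analyses themselves were run in Mplus; the degrees-of-freedom values in the example are hypothetical and chosen only to echo the reported Δχ2 = 12.53.

```python
from scipy.stats import chi2

def compare_nested_models(chi2_c, df_c, chi2_f, df_f, cfi_c, cfi_f):
    """Chi-square difference test for nested CFA models plus the
    delta-CFI heuristic: invariance is supported when p > .05 and
    |delta CFI| <= .01 (Putnick and Bornstein, 2016)."""
    d_chi2 = chi2_c - chi2_f          # constrained minus free model
    d_df = df_c - df_f
    p = chi2.sf(d_chi2, d_df)         # upper-tail chi-square probability
    d_cfi = abs(cfi_f - cfi_c)
    return d_chi2, d_df, p, d_cfi, (p > .05 and d_cfi <= .01)

# Hypothetical df values; the metric-invariance step reported above gave
# delta chi-square = 12.53, p = .40, delta CFI = .00.
print(compare_nested_models(chi2_c=140.89, df_c=73,
                            chi2_f=128.36, df_f=61,
                            cfi_c=.96, cfi_f=.96))
```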
Hypotheses testing
For robustness, as another indication of variance partitioning, we also calculated the intraclass correlations (ICCs) of all the constructs that were measured across time using Biemann et al.'s (2012) Excel template. For techno-invasion, ICC(1) was .24 and ICC(2) was .64 (F = 2.81, p < .01). For frustration, ICC(1) was .32 and ICC(2) was .73 (F = 3.69, p < .01). For meaningful work (expressing full potential), ICC(1) was .22 and ICC(2) was .62 (F = 2.62, p < .01). While there are multiple techniques available for analyzing longitudinal data, we decided to apply a multilevel modeling technique to test our hypotheses, as it is deemed superior (Bell et al., 2019; Hanchane and Mostafa, 2012). For example, multilevel modeling can handle uneven time intervals (as in our case, see Table 1) and can model individual-level variables over time for each participant rather than simply averaging them (see Kwok et al., 2008, for a list of all potential benefits). Thus, we used hierarchical linear modeling (random intercepts with fixed slopes) to test our model using multilevel structural equation modeling (SEM) in Mplus 8.3. Such an approach allows simultaneous estimation (while applying full maximum likelihood principles) of all the model's parameters. Following the suggestions of Preacher et al. (2010), as we used multilevel SEM, we did not center the variables prior to the analysis.
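As an illustration of this variance-partitioning step, the following sketch computes ICC(1), ICC(2), and the associated F ratio from a persons-by-waves score matrix using the standard one-way ANOVA formulas (the same quantities Biemann et al.'s (2012) template returns). The data generated here are hypothetical.

```python
import numpy as np

def icc_1_2(scores):
    """ICC(1), ICC(2), and F from a persons x waves array, treating
    persons as the grouping factor in a one-way ANOVA."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape                              # n persons, k waves
    grand_mean = scores.mean()
    person_means = scores.mean(axis=1)
    ms_between = k * ((person_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((scores - person_means[:, None]) ** 2).sum() / (n * (k - 1))
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    icc2 = (ms_between - ms_within) / ms_between
    return icc1, icc2, ms_between / ms_within

# Hypothetical data: 198 respondents x 4 waves with a stable person effect
rng = np.random.default_rng(0)
person_effect = rng.normal(3.0, 0.6, size=(198, 1))
data = person_effect + rng.normal(0.0, 0.8, size=(198, 4))
print(icc_1_2(data))
```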
The moderated-mediation model was estimated in two parts: one model predicting frustration (the mediating variable) and one model predicting meaningful work (the outcome variable).
Discussion
The results of our longitudinal (four-wave) study of professionals exposed to different levels of technology use and techno-invasion before, during, and after the first wave of the COVID-19 emergency supported our proposed moderated-mediation model. Techno-invasion, as one of the causative agents of technostress, did not directly and negatively influence meaningful work, but it did contribute to higher levels of frustration, indicating crucial potential downsides of the (over)use of IT in contemporary organizations. Furthermore, under certain conditions, techno-invasion reduces perceptions of meaningful work through frustration, a short-term negative outcome. As technostress is a common phenomenon in the digital context, this finding is highly relevant for understanding how the digital work environment shapes the experience of meaningful work and thus influences future meaningful work experiences. Moreover, the results show that the higher the level of perceived CSR, the less negative the indirect relationship between techno-invasion and meaningful work, as mediated by frustration.
Theoretical contributions
This study advances the literature on meaningful work and organizational frustration in several ways. First, our study extends previous research on meaningful work by integrating Lips-Wiersma and Morris's (2009) holistic model of meaningful work and the organizational frustration model (Fox and Spector, 1999; Spector, 1978) into a novel, comprehensive theoretical model that explains how the constraints and tensions of a context influence individuals' recognition of meaningful work. Although scholars have established that context influences the degree to which an individual can find meaningful work (Lysova et al., 2019), they have paid scant theoretical and empirical attention to the role of digitalization and crisis contexts in shaping employees' experiences of meaningful work. As digitalization and high-profile events are continuously changing the nature of work, thus shaping the future of work through their impact on meaningful work, our study extends the existing literature on meaningful work by deepening our understanding of how the digitalization and crisis (i.e., the COVID-19 pandemic) context affects employees' perceptions of meaningful work.
Specifically, following Lips-Wiersma and Morris's (2009) holistic model of meaningful work, we argue that stressful events arising from the digitalization context and the COVID-19 pandemic can trigger a crisis of meaning (i.e., evaluating life as frustratingly empty and lacking meaning). Combining this logic with the organizational frustration model (Fox and Spector, 1999; Spector, 1978), we propose that techno-invasion, arising from the digitalization context, induces a frustration emotional response and thereby reduces meaningful work. Although the literature suggests that, owing to different reasons and mechanisms, digitalization can either positively or negatively affect meaningful work (Lent, 2018; Smids et al., 2020), empirical studies rigorously examining these effects remain limited. Our study thus adds to a growing conversation about meaningful work by theoretically and empirically investigating how the digitalization and COVID-19 context affects employees' perceptions of meaningful work and what mechanisms can cultivate feelings of meaningful work under stressful conditions. Consistent with the holistic model of meaningful work, which emphasizes service to others as a source of meaningful work, we propose CSR perception as an important mechanism that mitigates the negative effects of the frustration emotional response that techno-invasion (a frustration source) causes, and thereby facilitates the experience of meaningful work (a frustration outcome) in this specific context (i.e., digitalization and the COVID-19 pandemic). A recent review highlights the need for more empirical research exploring the positive and negative factors that shape experiences of meaningful work (Lysova et al., 2019). Therefore, this study adds to the literature on meaningful work by theoretically and empirically examining a unique set of factors (i.e., techno-invasion, frustration, CSR perceptions) that influence meaningful work. In addition, it is important to emphasize that while we proposed and empirically investigated the negative effects of context (i.e., digitalization in the COVID-19 pandemic) on meaningful work, our comprehensive model is also appropriate for investigating the positive effects of context on the meaning-making process.
By outlining a comprehensive theoretical framework for how context shapes the meaning-making process, we respond to calls to examine how interactions between various individual, organizational, and social factors contribute to the experience of meaningful work (Lysova et al., 2019) and how a broader context shapes that experience (Bailey et al., 2019). Specifically, we provide empirical evidence of how factors arising from the digital context (i.e., technostress and frustration) interact with a potentially mitigating factor related to a source of meaning (i.e., CSR) to influence meaningful work. By examining novel negative consequences of techno-invasion, we also contribute to technostress research, responding to the call to explore the negative aspects (i.e., the "dark side") of employees' digitalization experiences (Wood et al., 2019). However, contrary to expectations, our results suggest that techno-invasion can serve as a source of meaningful work in a crisis context. In line with the existing literature, which mainly focuses on examining the negative effects of techno-invasion on employees' work and non-work experiences (e.g., Tarafdar et al., 2010; Wu et al., 2020), we proposed that the direct relationship between techno-invasion and meaningful work is negative. However, the results suggest that the direct effect of techno-invasion on employees' perceptions of meaningful work in crisis contexts, such as COVID-19, is actually positive. Diller and colleagues (2016) argue that techno-invasion enhances both positive and negative stress responses, depending on particular boundary conditions. For example, Wu and colleagues (2020) found that employee computer self-efficacy and perceived organizational support can significantly mitigate the negative consequences of techno-invasion. Because our data were collected during the COVID-19 pandemic, when many employees performed their work remotely (Molino et al., 2020), many organizations were forced to pay special attention to employee support to maintain the continuous implementation of business processes. In addition, employees who had to work during the first wave of the pandemic possibly received reassurance that their work, although intruding on their personal life, was important and worth doing, serving as a source of meaning, which in turn increased their perception of how meaningful their work actually was. However, our theorization and results also suggest a negative indirect effect of techno-invasion on meaningful work through increased frustration. If techno-invasion leads to frustration, the degree to which employees find their work meaningful will decrease. Moreover, in line with Lips-Wiersma and Morris's (2009) holistic model of meaningful work, we also highlight CSR's crucial importance as a source of meaning, counterbalancing sources of frustration and promoting meaningful work. Thereby, our study responds to the call to investigate conditions under which CSR can lead to win-win outcomes of business value and employee well-being simultaneously (Aguinis and Glavas, 2019). By examining the moderating role of CSR perception in the indirect relationship between techno-invasion and meaningful work, which frustration mediates, our study contributes to the discourse on CSR's positive impact related to meaningful work.
Existing evidence suggests that CSR is particularly beneficial when used as a means for employees to bring both more meaning and their whole selves to work (Glavas and Kelley, 2014), and that it provides an opportunity to re-engage individuals who are in a challenging situation (Aguinis and Glavas, 2019). Consistent with this evidence, our study suggests that CSR perception can mitigate the negative consequences of the frustration that techno-invasion causes, highlighting CSR's critical importance in promoting meaningful work in the digital context.
Second, by exploring how situational events arising from the digitalization and COVID-19 context affect the organizational frustration model's three elements, our study also contributes to the literature on organizational frustration. Building on the idea that situational factors are related to specific frustration sources (Bessière et al., 2006), we proposed a novel source of frustration (i.e., techno-invasion) and explained how it is linked to the frustration emotional response (i.e., frustration) and a novel frustration outcome (i.e., meaningful work), as the Fox-Spector model of organizational frustration predicts. We found empirical evidence that techno-invasion contributed to higher levels of frustration, thereby advancing the existing debate in the literature on organizational frustration and meaningful work, which has mainly examined how meaningful work can mitigate the negative effects of frustration (e.g., Ugwu and Onyishi, 2018). Our study shows that certain events can affect the organizational frustration model's three elements and thus meaningful work. One of the novelties of this study is also to shed light on how CSR perceptions, which are influenced by corporate CSR policies, can mitigate the negative effects of the frustration emotional response, resulting from the frustration source, on the frustration outcome.
Third, our study also makes an empirical contribution by testing the moderated-mediation model, which posits frustration as a mediator of techno-invasion's effects on meaningful work, with CSR perceptions as a moderator of such effects, using a four-wave longitudinal study conducted during the first wave of the COVID-19 pandemic. As a global phenomenon with severe consequences across all aspects of work and life, COVID-19 further exacerbated some negative consequences of digitalization and acted as an important contingency interfering with and constraining workers' experiences of frustration (Bessière et al., 2006). Thus, we advance research on the experience of meaningful work under these unique and stressful conditions.
Practical implications
Our findings have important practical implications for creating work environments that aim to maximize employees' perceptions of meaningful work, particularly with regard to managing appropriate levels of techno-invasion. There is an ongoing public and professional debate about the negative impact of techno-invasion on individuals, which may also negatively affect organizationally relevant outcomes. Our study provides rare empirical evidence that techno-invasion can have both positive and negative impacts on meaningful work. Specifically, our results show that in times of crisis (e.g., the COVID-19 pandemic), when employees are aware of the importance of their work for the survival of the organization, techno-invasion can positively influence their perception of meaningful work. However, our results also show that techno-invasion can lead to frustration, decreasing employees' perception of meaningful work. Therefore, managers and organizations should carefully examine their employees' attitudes toward the use of technology, keep track of their workloads and overloads, and monitor whether employees perceive techno-invasion as a challenge that enables them to express their full potential or as a source of frustration. To avoid employees feeling frustrated and thereby believing their work is less meaningful, organizations should keep an eye on digital intrusion into employees' lives and pay attention to how much work is assigned to them through technological means, as well as when that work is administered. Moreover, frustration occurs when employees feel that inhibiting factors in the working environment, including procedures and rules, prevent them from achieving their goals (Lazar et al., 2006; Spector, 1978). Therefore, to reduce techno-invasion-induced frustration, organizations should apply rules and policies that limit after-hours work (e.g., by instituting a "no after-hours" or a "limited timeframe email" policy), so that employees can achieve their goals in both their professional and personal lives. Organizations should also keep formal expectations of employees' availability at all times and places low (Piszczek, 2017) and encourage them to take time off from work and technology to reduce feelings of techno-invasion, thereby reducing frustration.
Second, even if employees feel that technology is invading their work and lives, our findings suggest that organizations can prevent this from reducing how meaningful employees feel their work is by fostering higher levels of CSR perceptions. Organizations and managers should therefore carefully design and implement CSR practices, policies, and actions, as these can significantly influence employees' CSR perceptions and thus their perception of meaningful work in a crisis context. Organizations can use human resource management practices and systems, such as training and development, to make employees aware of CSR policies (Shen and Benson, 2016). Once employees become aware that their organizations give them the opportunity to contribute positively to the world, they may become re-energized and find meaningfulness in their work (Aguinis and Glavas, 2019) despite techno-invasion-induced feelings of frustration. In addition, our study shows that employees are more likely to experience their work as meaningful in crisis situations if they feel they are part of something bigger. This finding suggests that organizations should pay particular attention to promoting meaningfulness at work, a sense of meaning that comes from being part of the organization rather than from what one does. Pratt and Ashforth (2003) suggest that meaningfulness at work can be fostered through building cultures, ideologies, identities, and communities, as well as through charismatic, visionary, or transformational leadership.
Practically, our study implies that organizations should: (1) review and analyze business processes and core values to identify where values such as solidarity, environmental awareness, or contribution to humanity exist or could potentially exist (Asif et al., 2013) and integrate them into the corporate culture; (2) carefully develop CSR initiatives, adapt them to the particular organizational context, and promote them even in difficult circumstances, as they may influence employees' CSR perceptions and thereby increase meaningful work; and (3) promote CSR initiatives internally as well as externally, integrating them with people management and marketing strategies and (co)creating positive (employee) brand awareness (Bhattacharya et al., 2004; Jamali et al., 2015) to promote meaningfulness at work.
Limitations and future research directions
As is true for any research, our study is not without limitations. While our longitudinal design across four points in time entails important advantages in terms of making causal claims, a possible limitation of our research design is the exclusive use of self-reports. However, according to Fox and Spector (1999), self-reported measures capture critical features of the situation more adequately than more objective, non-intuitive measures. Because our study aimed to understand how employees view, feel, and respond to digitalized work, a self-report methodology made the most sense (Howard, 1994; Spector, 1994). Nonetheless, such research could be complemented by including additional objective measures, perhaps those of CSR, by investigating a multilevel model of organizational CSR initiatives that moderate the basic mediated model at the individual level, or by expanding the model to include potential impacts on business performance.
In terms of the measurement instruments used, for several constructs (specifically for technostress and meaningful work), we captured only a single dimension of otherwise multidimensional constructs, and short and even single-item scales had to be used. Such an approach can be especially useful in longitudinal research in an attempt not to overburden respondents with overly long research instruments, thus enabling them to maintain concentration and focus on the content when responding (Fisher et al., 2016; Lucas and Donnellan, 2012). During the COVID-19 pandemic, when individuals faced a range of professional, personal, and health challenges that demanded their time and energy, it was even more important to be considerate of respondents' time and keep the survey as short as possible. Further, the selected dimensions were chosen because they were theoretically the most relevant for the research model in question. However, future research could further improve the validity and scope of our study by employing multidimensional scales and investigating whether additional dimensions of technostress and meaningful work behave differently.
In addition, we captured techno-invasion, frustration, and meaningful work across time, while relying only on the CSR perceptions the respondents provided at a single point in time. Because our study took place across several months, it is also possible that some respondents changed jobs in this period, which would likely change their perceptions of their organizations' CSR. Since we have a nationally representative quota sample (representative of age, gender, and industry), we can assume, based on our national labor statistics, that only a small number of employees included in the sample changed jobs. However, a viable research avenue would be to examine how CSR perceptions are shaped over time, which would involve longer periods in such longitudinal research, potentially spanning multiple years. This would also produce larger variance in all examined constructs over time. Lastly, and on a related note, our preliminary checks highlighted that a few constructs (especially techno-invasion and frustration) did not change over time or lacked scalar invariance. This means that comparisons between time points should be made cautiously, as the scale meaning of some constructs differs between time points, or that the change over time is less prominent than expected. Although some authors compare groups or time points without establishing scalar invariance (e.g., Dahlstrom and Nygaard, 1995) and multilevel analysis is recommended in such cases, we believe that our results should be interpreted with this limitation in mind.
FDTD method for laser absorption in metals for large scale problems
The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grids. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging the laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions. ©2013 Optical Society of America

OCIS codes: (050.1755) Computational electromagnetic methods; (350.3390) Laser materials processing; (260.3910) Metal optics.

References and links
1. K. S. Yee, "Numerical solution of initial boundary value problems involving Maxwell's equations in isotropic media," IEEE Trans. Antenn. Propag. 14(3), 302–307 (1966).
2. C. M. Dissanayake, M. Premaratne, I. D. Rukhlenko, and G. P. Agrawal, "FDTD modeling of anisotropic nonlinear optical phenomena in silicon waveguides," Opt. Express 18(20), 21427–21448 (2010).
3. K. Kitamura, K. Sakai, and S. Noda, "Finite-difference time-domain (FDTD) analysis on the interaction between a metal block and a radially polarized focused beam," Opt. Express 19(15), 13750–13756 (2011).
4. S. Buil, J. Laverdant, B. Berini, P. Maso, J. P. Hermier, and X. Quélin, "FDTD simulations of localization and enhancements on fractal plasmonics nanostructures," Opt. Express 20(11), 11968–11975 (2012).
5. C. Lundgren, R. Lopez, J. Redwing, and K. Melde, "FDTD modeling of solar energy absorption in silicon branched nanowires," Opt. Express 21(S3), A392–A400 (2013).
6. A. Taflove and S. C. Hagness, Computational Electrodynamics: The Finite-Difference Time-Domain Method, 3rd ed. (Artech House, 2005).
7. H. Ki and J. Mazumder, "Numerical simulation of femtosecond laser interaction with silicon," J. Laser Appl. 17(2), 110–117 (2005).
8. H. Li and H. Ki, "Effect of ionization on femtosecond laser pulse interaction with silicon," J. Appl. Phys. 100(10), 104907 (2006).
9. W. M. Steen and J. Mazumder, Laser Material Processing, 4th ed. (Springer-Verlag, 2010).
10. M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge University, 1999).
Introduction
Since the FDTD algorithm was first developed by Yee in 1966 [1], it has been extensively used for a variety of electromagnetic problems [2–6]. Recently, there have been some efforts in the laser material processing community to use the FDTD method for simulating laser material interaction problems [7,8]. Because accurate prediction of laser absorption in materials is by far the most important factor in better understanding these processes, directly solving the Maxwell equations by the FDTD method seems like an ideal approach.
However, due to its very stringent requirement on wavelength-dependent grid density, this method has been considered inappropriate for simulating light interaction with materials in such applications, because the domain size is extremely large compared to the wavelength of the light. In fact, Ki et al. [7,8] simulated femtosecond laser interaction with silicon targets using the FDTD method, but in order to reduce the total grid number, they employed the body-of-revolution (BOR) FDTD method assuming a radially polarized laser beam. Besides, the domain size was only about 50 μm in their studies. If a full three-dimensional code needs to be applied to a typical laser manufacturing process, however, a conventional FDTD method is still far from a viable option. In a typical laser welding problem, for example, an Nd:YAG or CO2 laser is generally irradiated on a metal plate that is at least several millimeters thick. In the case of an Nd:YAG laser, the typical wavelength is 1.06 μm; assuming that 10 grids per wavelength are required and the plate thickness is 1 mm, the grid number in one dimension is roughly 10,000, which leads to ~10^12 grids in three dimensions. If a CO2 laser beam is used, which has a wavelength of 10.6 μm, the total grid number can be decreased to ~10^9, but even this grid number can be handled only by the most powerful supercomputing systems.
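The grid-count estimates above are straightforward to verify; the short sketch below (Python) assumes the 1 mm plate thickness and the 10-grids-per-wavelength rule stated in the text.

```python
# Back-of-the-envelope grid counts for a 1 mm cube at 10 grids per wavelength
for name, wavelength_um in [("Nd:YAG", 1.06), ("CO2", 10.6)]:
    dx_um = wavelength_um / 10                 # grid spacing in micrometers
    n_1d = 1000.0 / dx_um                      # grids across 1 mm
    print(f"{name}: {n_1d:.0f} grids per mm, ~{n_1d**3:.1e} grids in 3-D")
```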
In this article, we propose an FDTD-based scheme for simulating laser absorption in metals that can be used for relatively large-scale problems, such as laser manufacturing processes. In this method, instead of using the original wavelength of the laser beam, a much enlarged wavelength is used in such a way that the angle-dependent absorption characteristic is unaltered or minimally changed. For validation purposes, we have tested the method with both the original Yee algorithm (or standard FDTD, hereafter) and an FDTD algorithm for dispersive media with the Drude model [6] (or dispersive FDTD, hereafter). We have also proposed a scheme that enables the use of the standard FDTD method for dispersive media by obtaining a new set of refractive index and extinction coefficient values. Numerical tests have been performed for 1.06 μm laser beam interaction with iron (Fe) and tin (Sn) targets. The obtained simulation results are in good agreement with the theoretical predictions.
Changing wavelength for standard FDTD algorithm
Light absorption in a metal can be determined by the metal's complex refractive index and the incident angle, as shown by the reflectance (R) formulas for s- and p-polarized light [9]:

R_s = [(n − cos θ_i)² + κ²] / [(n + cos θ_i)² + κ²]    (1)

R_p = [(n − 1/cos θ_i)² + κ²] / [(n + 1/cos θ_i)² + κ²]    (2)

where subscripts s and p denote s- and p-polarizations, n and κ are the real and imaginary parts of the complex refractive index

ñ = n + iκ    (3)

(i.e., the refractive index and extinction coefficient), and θ_i is the incident angle [9]. Therefore, a material's absorption characteristic can be completely understood if n and κ are known (θ_i is not a material property). Here, as is well known [10], n and κ can be expressed in terms of primitive variables as

n = √{ [ √((c²με)² + (cμσλ/2π)²) + c²με ] / 2 }    (4)

κ = √{ [ √((c²με)² + (cμσλ/2π)²) − c²με ] / 2 }    (5)

where c is the speed of light in free space; ε, μ, and σ are the permittivity, permeability, and electrical conductivity of the material; and λ is the laser wavelength in free space. If we look at Eqs. (4) and (5) carefully, we can see that the wavelength λ can be altered without changing n and κ as long as σλ remains unchanged. In other words, if we want to use a wavelength that is 100 times larger than the actual wavelength, we can decrease the electrical conductivity σ 100 times and the angle-dependent absorption characteristic of the material still remains exactly the same. Note that, in Eqs. (4) and (5) with the n and κ values fixed, there are two equations in two unknowns, c²με and cμσλ/2π, so these unknowns can be completely determined, i.e.,

c²με = n² − κ²,  cμσλ/2π = 2nκ    (6)

Now, we can choose λ, ε, μ, and σ values by using Eq. (6). Note that in this study we let μ_r = 1 (i.e., μ = μ₀).
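To illustrate the scaling rule, the following Python sketch evaluates Eqs. (4) and (5) and verifies that enlarging λ by some factor while dividing σ by the same factor leaves n and κ unchanged. The conductivity value used here is hypothetical, chosen only for illustration.

```python
import numpy as np

C = 2.99792458e8                    # speed of light in free space (m/s)
EPS0 = 8.8541878128e-12             # vacuum permittivity (F/m)
MU0 = 4e-7 * np.pi                  # vacuum permeability (H/m)

def n_kappa(eps, mu, sigma, lam):
    """n and kappa from Eqs. (4)-(5); note that sigma and lam enter
    only through the product sigma * lam."""
    a = C**2 * mu * eps                       # equals n^2 - kappa^2
    b = C * mu * sigma * lam / (2 * np.pi)    # equals 2 * n * kappa
    r = np.hypot(a, b)                        # sqrt(a^2 + b^2)
    return np.sqrt((r + a) / 2), np.sqrt((r - a) / 2)

# Hypothetical conductor at the original wavelength
lam, sigma = 1.06e-6, 1.0e7
print(n_kappa(EPS0, MU0, sigma, lam))
# Enlarge the wavelength 100x and divide sigma by 100: identical n, kappa
print(n_kappa(EPS0, MU0, sigma / 100, 100 * lam))
```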
Changing wavelength for dispersive FDTD algorithm
For many metals with high reflectance values, the extinction coefficient κ is larger than the refractive index n, and therefore Eqs. (4) and (5) become inappropriate and the strategy presented in Section 2 cannot be employed. (Comparing Eqs. (4) and (5), everything is exactly the same except for the sign between the two terms in parentheses, so the latter cannot be larger than the former.) In this case, the standard FDTD method cannot be used to simulate metals, and the dispersive FDTD method needs to be used [6]. In this section, we will present a scheme for using an enlarged wavelength with the dispersive FDTD method.
The parameters for the dispersive FDTD are obtained from the Drude model [6] with the given n and κ values. Equations (4) and (5) can be re-written as

n = √{ (μ_r/2) [ √(ε1² + ε2²) + ε1 ] }    (7)

κ = √{ (μ_r/2) [ √(ε1² + ε2²) − ε1 ] }    (8)

where ε1 and ε2 are the real and imaginary parts of the complex dielectric constant ε_r and μ_r is the relative permeability. Also, the Drude model for the complex dielectric constant can be expressed as

ε_r(ω) = ε_∞ − ω_p² / (ω² + iγ_p ω)    (9)

where ω_p is the plasma frequency, γ_p is the damping constant, and ω is the angular frequency of the laser beam, which is related to the beam wavelength as follows:

ω = 2πc/λ    (10)

Note that our objective is to increase λ (i.e., decrease ω) while maintaining the same n and κ values. From Eqs. (7)-(10), if μ_r = 1 is assumed, we can increase λ as long as ε1 and ε2 are unchanged. In other words, from Eq. (9), when ω is changed, we can select proper values of ε_∞, ω_p, and γ_p that will result in the same ε1 and ε2 values, which will in turn lead to the same n and κ values.
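This refitting step can be written in closed form: with μ_r = 1 and a chosen ε_∞, Eq. (9) gives γ_p = ε2·ω/(ε_∞ − ε1) and ω_p² = (ε_∞ − ε1)(ω² + γ_p²). The sketch below is a minimal illustration of this inversion, assuming the sign convention of Eq. (9) as written here; it is not taken from the authors' code.

```python
import numpy as np

C = 2.99792458e8  # speed of light in free space (m/s)

def drude_parameters(n, kappa, lam, eps_inf=1.0):
    """Drude parameters reproducing eps1 = n^2 - kappa^2 and
    eps2 = 2*n*kappa (mu_r = 1) at any chosen wavelength lam."""
    omega = 2 * np.pi * C / lam                 # Eq. (10)
    eps1, eps2 = n**2 - kappa**2, 2 * n * kappa
    gamma_p = eps2 * omega / (eps_inf - eps1)   # requires eps_inf > eps1
    omega_p = np.sqrt((eps_inf - eps1) * (omega**2 + gamma_p**2))
    return omega_p, gamma_p

def eps_drude(omega, eps_inf, omega_p, gamma_p):
    """Complex dielectric constant from the Drude model, Eq. (9)."""
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma_p * omega)

# Fe at 1.06 um (n = 3.81, kappa = 4.44), refitted at a 20x wavelength
lam = 20 * 1.06e-6
wp, gp = drude_parameters(3.81, 4.44, lam)
eps = eps_drude(2 * np.pi * C / lam, 1.0, wp, gp)
print(np.sqrt(eps))   # principal root recovers ~3.81 + 4.44j
```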
Using the standard FDTD scheme for n ≤ κ
In the previous two sections, we have discussed methods to increase the wavelength for both standard and dispersive FDTD methods without changing the metal's absorption characteristics. The standard FDTD algorithm (Yee's original algorithm), however, can be employed only when n > κ (as is shown by Eqs. (4) and (5)), although it is simpler to implement than the dispersive FDTD algorithm, which can be used for virtually all n and κ values. In this section, we will present a scheme that enables the use of the standard FDTD scheme for n ≤ κ cases. In order to work around the afore-mentioned issue when n ≤ κ, let's take a look at Eqs. (1) and (2). If the incident angle is fixed and the reflectance value is assigned, the two equations become quadratic equations in terms of n and κ. Here, for the sake of simplicity, the incident angle will be fixed at θ_i = 0° (i.e., at normal incidence), where p- and s-polarizations become identical. Figure 1 shows the reflectance value contour lines plotted using Eqs. (1) and (2). In this figure, for a given reflectance value, there exist infinitely many sets of n and κ, shown as a contour line. Furthermore, out of these infinite sets, we can notice that n > κ cases always exist. In other words, although n ≤ κ for the given material, we can always find a different set of n and κ with the same reflectance and n > κ. With a newly obtained set of n and κ satisfying n > κ, we can now use the standard FDTD method instead of the dispersive FDTD method.
In fact, this result is true only when the incident angle is 0° in the whole computational domain. For simple problems like a flat-plane vacuum-material interface, there is only one incident angle in the whole domain, and a single set of n and κ can be used to maintain exactly the same reflectance according to Eqs. (1) and (2). In most problems, however, this is not the case and the incident angle cannot be assumed constant because the structure geometry can be very complicated: the incident angle can assume any value between 0° and 90°, and thus different sets of n and κ values would be required at different locations and times. Therefore, this scheme is virtually impossible to implement.
Then, naturally a question arises as to whether there exists a certain representative incident angle θ_i* that, when used in Eqs. (1) and (2), can resemble the original reflectance characteristic of the material very closely over the entire range of incident angles. To answer this question, let's consider an iron target irradiated by a 1.06 μm laser, where n = 3.81 and κ = 4.44 [9]. In this case, apparently n ≤ κ, and we need to come up with a new set of n and κ that reproduces the reflection characteristic of Fe at 1.06 μm over the entire range of incident angles. Figure 2 shows angle-dependent reflectance values of iron (Fe) under 1.06 μm irradiation. Here, the red curves represent the actual reflectance patterns of Fe constructed for s- and p-polarizations by using Eqs. (1)-(2) and n = 3.81 and κ = 4.44. In this case, the reflectance value at normal incidence is found to be 0.644. Now, let's choose θ_i* = 0° and evaluate Eqs. (1) and (2) assuming R = 0.644. Then, from the obtained quadratic equations we can take infinitely many sets of new n and κ, several of which are listed in Table 1. Using these new n and κ values, we can re-evaluate Eqs. (1) and (2) as a function of incident angle θ_i; the results are shown as blue lines in Fig. 2.
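At normal incidence, Eqs. (1) and (2) both reduce to R = [(n − 1)² + κ²]/[(n + 1)² + κ²], so for any chosen n the matching κ on a constant-R contour can be found in closed form. A minimal sketch:

```python
import numpy as np

def kappa_for_same_r0(n_new, n0, kappa0):
    """Given the original (n0, kappa0), find kappa for a chosen n_new so
    that the normal-incidence reflectance is unchanged (the contour
    lines of Fig. 1)."""
    r0 = ((n0 - 1)**2 + kappa0**2) / ((n0 + 1)**2 + kappa0**2)
    kappa_sq = (r0 * (n_new + 1)**2 - (n_new - 1)**2) / (1 - r0)
    return np.sqrt(kappa_sq)

# Fe at 1.06 um: n = 3.81, kappa = 4.44 (a case with n <= kappa)
for n_new in (4.2, 4.62, 5.0):
    print(n_new, round(float(kappa_for_same_r0(n_new, 3.81, 4.44)), 2))
# n_new = 4.62 yields kappa ~ 4.51, a set with n > kappa as in Table 1
```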
Here, we can notice two important things. First, for s-polarized light, regardless of the n and κ values, the calculated angle-dependent reflectance curves are very close to the actual reflectance curve of Fe under 1.06 μm irradiation (red curve) over the entire incident angle range from 0° to 90°. Secondly, for p-polarized light, as the n value increases from the smallest to the largest, the reflectance curve approaches the actual reflectance curve from above and eventually passes it. Therefore, in between, there exist some n and κ values that very closely approximate the actual reflectance pattern of the material. Here, our strategy is to find a set of n and κ that gives the most accurate approximation to the actual curve and also satisfies n > κ. For example, when n = 4.62 and κ = 4.51, the approximated reflectance curve is reasonably close to the actual one, especially when the incident angle is away from the minimum reflectance point.
The choice of θ_i* = 0° above was totally arbitrary, and our goal is to obtain the best approximation. Therefore, we need to know which θ_i* gives the best result. From the above results, because the error is largest near the minimum reflectance point, we will try to use Brewster's angle θ_B, which is 75.2° in this case. If Eqs. (1) and (2) are evaluated using θ_i* = 75.2°, we now have two different quadratic equations for the two polarizations. In order to estimate the error generated from this approximation, the following definition is used:

Error_{s,p} = ∫₀^{90°} |R*_{s,p}(θ_i) − R_{s,p}(θ_i)| dθ_i / ∫₀^{90°} R_{s,p}(θ_i) dθ_i × 100%    (11)

Here, R_{s,p}(θ_i) is the actual reflectance curve for p- or s-polarization calculated by Eqs. (1) and (2), and R*_{s,p}(θ_i) is the approximated reflectance curve with a new set of n and κ values. Figure 5 presents the relative errors for s- and p-polarizations as a function of the newly chosen n value. Here, we considered two θ_i* values, 0° and 75.2°. As shown clearly, near the original value of n the error is small, and it increases as one moves away from this point. The minimum errors occur at around n = 3.81 (the original value), which is understandable. Also, we can notice that errors are smaller when θ_i* = 0° than when θ_i* = 75.2°. We can see that for s-polarized light the error is very small over almost the entire range of n values, while for p-polarization the errors are much larger but still reasonably small if n is chosen near the original n value. For n = 4.62 and κ = 4.51, the relative errors for s- and p-polarizations are 0.04% and 3.4%, respectively. For p-polarization, the relative errors will be much smaller if the incident angle is not very large. Note that, although the scheme has been explained for a Fe target under 1.06 μm irradiation, it can be used for other metal/wavelength combinations.
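A numerical evaluation of this error metric is straightforward. The sketch below implements Eqs. (1) and (2) together with the angle-integrated relative error as reconstructed in Eq. (11); since the precise normalization used in the original study is an assumption here, the printed values are illustrative rather than a reproduction of Fig. 5.

```python
import numpy as np

def reflectance(theta, n, kappa, pol):
    """Angle-dependent reflectance from Eqs. (1)-(2)."""
    x = np.cos(theta) if pol == "s" else 1.0 / np.cos(theta)
    return ((n - x)**2 + kappa**2) / ((n + x)**2 + kappa**2)

def relative_error(n_new, k_new, n0, k0, pol):
    """Relative error between approximated and actual reflectance curves,
    integrated over the incident angle (uniform grid, so the angular
    step cancels in the ratio)."""
    theta = np.linspace(0.0, np.radians(89.9), 2000)  # avoid 90 deg (p-pol)
    r_actual = reflectance(theta, n0, k0, pol)
    r_approx = reflectance(theta, n_new, k_new, pol)
    return np.abs(r_approx - r_actual).sum() / r_actual.sum()

# Fe at 1.06 um approximated with the new set n = 4.62, kappa = 4.51
for pol in ("s", "p"):
    print(pol, f"{relative_error(4.62, 4.51, 3.81, 4.44, pol):.2%}")
```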
Results and discussion
For validation purposes, we have implemented the presented schemes in our in-house 3-D FDTD codes. We considered Sn targets under 1.06 μm irradiation (n = 4.7 and κ = 1.6 [9]) and Fe targets under 1.06 μm irradiation (n = 3.81 and κ = 4.44). For both materials, four types of simulations were performed: (a) standard FDTD simulation with the given wavelength, (b) dispersive FDTD simulation with the given wavelength, (c) standard FDTD simulation with an increased wavelength, and (d) dispersive FDTD simulation with an increased wavelength. In the case of the Sn target, n is already larger than κ, so a new set of n and κ does not have to be obtained for the standard FDTD simulations ((a) and (c)). On the other hand, for the Fe target, a new set of n and κ needs to be used because n < κ. For all simulations, we calculated the laser beam absorptance as the incident angle changes from 0° to 70° at an interval of 10°. Note that 80° and 90° incident angles were not simulated because these require an extremely large domain size to capture the entire reflection phenomena. Figure 6 shows the schematic diagram of the computational domain with dimensions when the original wavelength is used. When the wavelength is enlarged, we proportionally increased the domain size. In this study, the laser beam propagates in the negative z-direction, and the surface tilting angle θ is increased from 0° to 70°. For all cases, p-, s-, and circularly polarized Gaussian beams were considered. Table 2 and Table 4 present the parameters used for the standard FDTD simulations of Sn and Fe, respectively, and Table 3 and Table 5 present the parameters for the dispersive FDTD simulations of Sn and Fe, respectively. All these parameters were obtained by using the methods presented in Sections 2 to 4. For the Sn target, the wavelength has been increased to 31.8 μm (30 times larger), and we have used a wavelength of 21.2 μm (20 times larger) for the Fe target. These wavelengths were selected arbitrarily, and of course much larger wavelengths can be used as long as the electrical conductivity is large enough to absorb the electromagnetic waves near the material interface. For all simulations, uniform grids with a grid spacing of λ/(10n) were used, and a 12-core high-performance computer was used. Running times varied from 4 to 8 hours depending on the problem size. Figure 7 shows the simulation results showing electric fields for different tilting angles. In this simulation, the dispersive FDTD algorithm was used for a Fe target with an enlarged wavelength, and the beam was s-polarized. In this figure, we can clearly see how the beam reflection changes as the tilting angle increases. Note that the tilting angle is equal to the incident angle, and when it is larger than 50°, the z dimension has to be increased to capture the whole reflection phenomena. For an incident angle of 80°, as mentioned earlier, the required domain becomes extremely large, so that case was omitted. However, the authors believe that the simulation results for larger angles will be qualitatively similar.
In Fig. 8, the calculated reflectance versus incident angle for all simulations is presented. In this study, the reflectance was calculated by using the energy flux difference between the incident waves and reflected waves in terms of the Poynting vector. As shown clearly in the figures, for both Sn and Fe, for all polarizations, and for all simulation methods, the obtained results are in good agreement with the analytical solutions. Considering that uniform grids were used and the surface is not aligned with any of the grid lines, we believe that the simulation results are reasonably accurate. In order to validate the method with a more complex geometry, we considered a three-dimensional problem, where a 1.06 μm Gaussian beam irradiates the upper side of a cylinder made of Fe. Figure 9 shows the schematic diagram of the computational domain with dimensions when the original wavelength is used. The same problem was also solved with a 20 times larger wavelength (21.2 μm), and in this case all the dimensions were increased proportionally. In this study, we have conducted four types of simulations for validation purposes: (a) standard FDTD simulation (new n and κ) with λ = 1.06 μm, (b) dispersive FDTD simulation with λ = 1.06 μm, (c) standard FDTD simulation (new n and κ) with an increased wavelength of 21.2 μm, and (d) dispersive FDTD simulation with an increased wavelength of 21.2 μm. For each case, three different polarizations were considered: p-, s-, and circular polarizations. Apparently, if the electric field of the beam is aligned in the y-direction, even though the cylinder surface is curved, the laser beam is 100% s-polarized. Also, if the electric field is in the x-direction, the laser beam is entirely p-polarized. All simulation parameters are listed in Table 4 and Table 5. Figure 10 shows the simulation results for Case (d). It turned out that all other results are indistinguishably similar to this result, so only one result is shown here. In the first and second rows, the electric fields on the x-y and y-z planes are presented, respectively, where both planes pass through the center of the laser beam. For all three polarization results, the electric fields are astonishingly similar, and the laser beams look smaller when viewed through the x-z plane because of the more complex interference of incident and reflected waves. To validate the results, we have analytically calculated the reflectance values R_s* and R_p* of the Fe cylinder for s- and p-polarizations by averaging the local angle-dependent reflectance over the illuminated cylinder surface, weighted by the Gaussian beam intensity (Eq. (12)), where ℜ and r₀ are the cylinder and beam radii, respectively, and E₀ is the electric field value at the center of the Gaussian beam. Note that the reflectance for circular polarization is mathematically the average of R_s* and R_p*. Table 6 presents the simulated reflectance values and the corresponding errors with respect to the analytical solutions obtained from Eq. (12).
As we can see from the results, all four cases agree with the analytic solutions well, and the maximum relative error is less than 4.6%. In particular, it is clearly shown from the table that the result obtained with an increased wavelength is virtually the same as the one from the corresponding original-wavelength simulation. This means that the relative errors shown in the table (up to 4.6%) are the errors of the FDTD method itself, and not errors caused by the additional procedures proposed in this study. Moreover, we can notice that the dispersive FDTD simulations are slightly more accurate than the standard FDTD ones, which is because newly selected n and κ values were used for the standard FDTD simulations of Fe. For the dispersive and standard FDTD simulations, the maximum errors are 3.27% and 4.59%, respectively, both of which occurred for p-polarization. Also, from the results, we can see that the reflectance for circular polarization is exactly the average of those for the p- and s-polarizations.
One last comment is that, if an enlarged wavelength is used, the actual wave characteristics such as diffraction and interference are also changed, although the beam absorption characteristic is preserved. However, in many problems, such as laser processing problems, the most important things are the laser beam absorptivity and the beam absorption pattern. Besides, in laser manufacturing problems, the focused laser beam diameter is ~500 μm (much larger than the wavelength), so the beam divergence is generally very small compared to the problem size.
Conclusions
In this article, we have presented and validated an FDTD-based method for simulating large-scale laser beam absorption problems by enlarging the laser wavelength while maintaining angle-dependent absorption characteristics. A method to use the standard Yee algorithm for a material with n < κ has also been presented. Using these methods, we believe that various problems where laser beam absorption is critical can be effectively solved.
Table 1. Several selected n and κ values that lead to the same reflectance at normal incidence (θ_i* = 0°) as the actual reflectance value of iron under 1.06 μm irradiation (n = 3.81, κ = 4.44). Bold-faced cases are shown in Fig. 2 as blue lines.
Figure 3(b) shows the contours of a reflectance value of 0.644 for s- and p-polarizations. The original n and κ values of Fe are shown at the intersection of the two curves as a green circle. In finding a new set of n and κ values that satisfies n > κ, even though it is always possible, we notice one problem. Because the two polarization cases now have different quadratic curves, a different set of n and κ values needs to be used for different polarizations. If the light polarization is well defined and fixed in a given problem, this is not a problem at all. However, in most problems, light polarization is arbitrary and/or changes from one place to another, so having to select different n and κ values for different polarizations is impractical. Furthermore, as shown in Fig. 4, the calculated reflectance curves, especially for p-polarization, are much worse except when θ_i* = 75.2°. In this study, we have also tested other θ_i* values (30°, 45°, and 82°), but θ_i* = 0° was found to be the most desirable for the same reasons explained above.
Fig. 3. Lines of constant reflectance (R = 0.644) for s- and p-polarized lights shown as blue dashed lines and red solid lines, respectively.
Fig. 5. Overall relative errors generated when different n and κ values are used to approximate the material's reflectance patterns. Errors are calculated for Fe under 1.06 μm irradiation.
Fig. 7. Electric fields in the computational domain showing the reflection patterns at different tilting angles. Simulations were performed for Fe with an enlarged wavelength using the dispersive FDTD code, assuming the laser beam is s-polarized. The width of the computational domain is 200 μm.
Fig. 10. Simulation results showing a Gaussian beam irradiating a cylinder made of Fe. Here, the dispersive FDTD with an enlarged wavelength of 21.2 μm was used instead of 1.06 μm. The width and height of the figures are both 240 μm.
Interventions to integrate care for people with serious mental illness and substance use disorders: a systematic scoping review protocol
Introduction People with serious mental illness (SMI) and/or substance use disorders (SUDs) have an elevated risk of premature mortality compared with the general population. This has been attributed to higher rates of chronic illness among these individuals, but also to inequities in healthcare access and treatment. Integrated care has the potential to improve the health of people with SMI/SUDs. The aims of this scoping review are to: (1) identify empirical investigations of interventions designed to integrate care for people with SMI/SUDs; (2) describe the underlying theories, models and frameworks of integrated care that informed their development; and (3) determine the degree to which interventions address dimensions of a comprehensive and validated framework of integrated care. Methods and analysis Guidelines for best practice and reporting of scoping reviews will be followed using the framework of Arksey and O’Malley and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses scoping review checklist. An iterative and systematic search of peer-reviewed publications reporting empirical research findings will be conducted. This literature will be identified by searching five databases: Medline (Ovid), PsycINFO, CINAHL, Embase (Ovid) and Scopus. The search will be restricted to articles published between January 2000 and April 2019. Two reviewers will independently screen publications in two successive stages of title and abstract screening, followed by full-text screening of eligible publications. A tabular summary and narrative synthesis will be completed using data extracted from each included study. A framework synthesis will also be conducted, with descriptions of interventions mapped against a theoretical framework of integrated care. Ethics and dissemination This review will identify the extent and nature of empirical investigations evaluating interventions to integrate care for people with SMI/SUDs. Ethical approval was not required. A team of relevant stakeholders, including people with lived experience of mental health conditions, has been established. This team will be engaged throughout the review and will ensure that the findings are widely disseminated. Dissemination will include publication of the review in a peer-reviewed journal. The review protocol has been registered through Open Science Framework and can be accessed at https://osf.io/njkph/
Background

Serious mental illness (SMI; also referred to as severe and enduring mental illness or SEMI) includes a range of conditions, such as major depression, bipolar disorder and schizophrenia. 1 2 These conditions are associated with debilitating symptoms that require ongoing treatment or management. People with SMI have a significantly reduced life expectancy and are at risk of poor health outcomes relative to those in the general population. 3 In New Zealand, men and women using mental health services have more than twice the risk of experiencing premature mortality than the general population. 4 This is similar to the UK, where a recent study of a nationally representative cohort of people with bipolar disorder and schizophrenia found that the rate of all-cause mortality was 1.77 times greater among individuals with bipolar disorder and 2.08 times greater for individuals with schizophrenia. 5 The UK study also found that these disparities in mortality had increased significantly from the year 2000 to 2014. 5
Evidence suggests that people with substance use disorders (SUDs) are also at increased risk of mortality compared with the general population. These disorders reflect the pattern of symptoms that result from prolonged use of illicit or legal drugs, including alcohol and medicines, despite mental or physical problems associated with their use. 6 The reduced life expectancy associated with SUDs is estimated to be 13.8 years, higher than the 6.3-year reduction associated with depression and the 7.2-year reduction associated with schizophrenia. 7 Of particular concern is the high prevalence of co-occurring SMI and SUDs. 8 A systematic review of studies conducted in the UK found the prevalence of co-occurring SMI and SUDs to be between 0.05% and 0.16% in the general population. 9 In contrast, current harmful drug use or dependence among people with SMI was 1.9%-7.0%, and current harmful alcohol use or dependence was 7.0%-15.5%. 9 The lower average life expectancy evident among people with SMI/SUDs is largely attributable to an increased risk of a number of chronic health conditions. 10 Cardiovascular diseases have been identified as the most common cause of death in the SMI population, 11 12 contributing to more than 30% of all deaths among public mental health clients across eight US states between 1997 and 2000. 12 This contrasts with the percentage of deaths due to suicide over the same time period, which did not exceed 15% in any state, during any year examined. 12 Metabolic syndrome has been found to affect as many as one in three people with SMI, 13 and type 2 diabetes occurs at almost twice the rate among people with SMI than in the general population. 14 While the incidence of cancer is no greater in people with SMI than in the general population, these individuals are more likely to have metastases at diagnosis and are less likely to receive specialist cancer treatment, resulting in higher cancer mortality rates. 15 16 Similarly, after adjusting for age and gender, people with SUDs have been identified as at increased risk of diabetes, heart disease, asthma, gastrointestinal disorders, skin infections, malignant neoplasms and acute respiratory disorders. 17 However, risk of these disorders is substantially greater for individuals with comorbid SMI and SUDs, particularly individuals with psychosis. 17 There is growing acknowledgement that people with these comorbid conditions experience the worst health, well-being and social outcomes, and are among the most disadvantaged and vulnerable in society. 9 A number of factors have been found to contribute to the high prevalence of chronic health conditions which, in turn, contribute to reduced life expectancy among people with SMI/SUDs. These include socioeconomic disadvantage, 18 obesity and poor nutrition, 19 reduced physical activity, 20 21 side-effects of antipsychotic medication, 10 elevated consumption of alcohol and illicit drugs 22 and high rates of smoking. 23 24 However, there is increasing evidence that the poor health outcomes among people with SMI/SUDs are also a result of inequities in the provision of healthcare. 25 26 In addition, difficulties with access to healthcare or routine screening among people with SMI/SUDs have been identified. 27 Even when healthcare is accessed, these individuals have been found to receive poorer quality care, as well as higher rates of misdiagnosis and lower rates of specialist interventions that could prevent the progression of a number of diseases, 10 28 compared with people without SMI/SUDs.
Stigma has a pervasive influence on the quality of care that is provided to people with SMI/SUDs, 25 with medical professionals frequently disregarding the physical health concerns of this population and misinterpreting physical symptoms as mental illness. 29 One strategy to address inequities in healthcare access and treatment for people with SMI/SUDs is the integration of healthcare and social services. Integration of care is increasingly recognised as the most appropriate method for delivering care to people with multiple, complex chronic conditions, and has been found to be associated with significant improvements in condition-specific quality of life. 30 However, a consensus on the concept of integrated care is yet to be reached, presenting difficulties for meaningful evaluation of integrated care approaches. 31 32 Some definitions are process oriented, some (although few) are person-centred, and others are health service oriented. 33 In an effort to provide a comprehensive concept of integrated care, Singer et al developed an integrated care framework that emphasises the importance of both care coordination and person-centred care, acknowledging the central role of service users/patients and their families in the management of their own health. 34 They describe integrated care as: 'patient care that is coordinated across professionals, facilities and support systems; continuous over time and between visits; tailored to the patients' needs and preferences; and based on shared responsibility between patient and caregivers for optimising health' (Singer et al, p 113). 34 Because of the varied definitions of integrated care, it is important to understand the underlying theories, models or frameworks of integrated care that are being used to inform empirical research in this area.
In the mental health context, a number of strategies to integrate care have been investigated. 35 36 Examples of intervention strategies include the co-location of mental and physical health services within a single setting, [37][38][39] collaborative care meetings between general practitioners and mental health professionals, 40 and the appointment of case managers to liaise between services and coordinate the overall care of individuals with SMI. 41 42 Interventions for people with co-occurring mental and addictive disorders have also been explored, such as on-site medical consultations, team-based approaches and facilitated referrals to primary care. 26 Despite substantial research in this area, the number and types of integrated care interventions that have been investigated empirically among each population are unknown. It is also unclear which outcomes have been examined in evaluations of interventions aiming to integrate care (eg, whether the goal has been to increase contact with healthcare professionals or to improve the physical health of mental health/addiction service users). Most importantly, the underlying theoretical models on which these interventions have been based are yet to be identified.
These gaps in evidence suggest that a scoping review of the literature could help to identify the characteristics of interventions that have integrated care for people with SMI/SUDs to date. While these individuals represent groups with distinct diagnoses, the symptom burden associated with the diagnoses is highly similar, 43 and they frequently co-occur. 44 Both groups also face barriers to receiving integrated care that could lead to more timely and effective treatment of physical health conditions. 45 Scoping reviews are recommended to examine the extent, range and nature of the evidence relating to a topic, providing an opportunity to clarify concepts, identify knowledge gaps and inform future research, practice and policy-making. 46 47 We intend to identify the types of empirically tested interventions aiming to integrate care for people with SMI/SUDs that have been investigated; the range of outcomes these investigations have endeavoured to modify; the theories, models and frameworks of integrated care that have informed intervention development; and the extent to which interventions have addressed key components of a widely recognised framework for the delivery of integrated care. 34 Given the significance of the inequities in health and mortality for people with SMI/SUDs, an understanding of the degree to which interventions to integrate care for this population are meeting key components of successful integrated care delivery is extremely important.
Objectives
The aims of the proposed scoping review are to: (1) systematically identify and describe empirical investigations of interventions to integrate care for people with SMI/SUDs, (2) describe the theories/models/frameworks of integrated care informing the empirical research and (3) determine the degree to which identified interventions address components of a comprehensive and validated framework of integrated care.
Methods
This scoping review will be conducted according to the methods developed by Arksey and O'Malley, 48 and the subsequent refinements to these methods. 49 50 There are six steps including: (1) defining the research question/s; (2) identifying relevant studies; (3) study selection; (4) charting the data; (5) collating, summarising and reporting the results; and (6) consultation. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines will be followed using the PRISMA extension for scoping reviews checklist. 46 An iterative approach will be taken toward searching the literature, refining the search strategy, reviewing articles for inclusion and extracting relevant data. The review protocol has been registered through Open Science Framework and can be accessed at: https://osf.io/njkph/. Any amendments or deviations from the protocol will be reported in the methods section of the final published review.
Defining the research question
Research questions were formulated by considering the concept (integrated care), target population (people with SMI/SUDs), context (healthcare settings) and outcomes (empirically investigated outcomes) of interest in order to clarify the focus of the review and establish an effective search strategy. This scoping review intends to answer research questions addressing each of the objectives stated above.

Identifying relevant studies
Our search strategy was developed with the goal of undertaking a comprehensive review of the existing evidence base. An experienced subject librarian at the University of Otago has been consulted to assist with the identification of relevant search terms and databases. Search terms have also been reviewed by a team of relevant stakeholders, including people with lived experience of mental health conditions, mental health professionals, other health professionals and researchers from a range of disciplines. In order to identify empirical literature, an initial limited search of a selection of relevant databases has been performed, followed by a review of text words contained in the titles and abstracts, and of index terms used to describe the articles. A second search will be conducted using all identified keywords and index terms and will be undertaken across five databases: Medline (Ovid), PsycINFO, CINAHL, Embase (Ovid) and Scopus. The reference lists of all included articles will be searched for additional studies. The search will be restricted to articles and reports published in English and to articles published between January 2000 and April 2019. The search strategy has been developed in Medline (Ovid) and will be adapted to other databases (see table 1). All searches will include a combination of subject headings, related terms and keywords. Boolean logic and operators (ie, 'and', 'or') will be used to combine and refine search terms and concepts.
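To illustrate the Boolean combination logic described above, the following minimal sketch assembles a database-style query from population and concept term blocks. The terms shown are illustrative placeholders only, not the registered search strategy (which is reported in table 1 and adapted per database):

    # Illustrative only: placeholder terms, not the registered search strategy.
    population_terms = ["serious mental illness", "severe mental illness",
                        "substance use disorder", "schizophrenia",
                        "bipolar disorder", "major depression"]
    concept_terms = ["integrated care", "collaborative care",
                     "care coordination", "shared care"]

    def or_block(terms):
        # Synonyms within one concept block are combined with OR.
        return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

    # AND intersects the two concept blocks to narrow the result set.
    query = or_block(population_terms) + " AND " + or_block(concept_terms)
    print(query)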
Study selection
All records retrieved from the searches will be exported to an EndNote reference database. Following this, duplicate records will be removed (using both the EndNote de-duplication function and a manual scan of records), and the number of unique records will be identified. A two-stage collaborative review process will select studies for inclusion. Screening of studies will be piloted by two reviewers (AR and LR) on the first 5% of citations retrieved from the database search to test eligibility criteria and reviewer agreement. After consensus on each of these citations is reached, the reviewers will independently apply eligibility criteria during the initial title/abstract review. Titles and abstracts will be retained for full-text review if: (1) they refer to an intervention to integrate care; (2) the intervention is for people with mental health conditions, people with substance use problems or health professionals responsible for their care; and (3) the intervention is set in a health-oriented context. The full text of relevant studies will then be obtained and independently assessed for eligibility by two reviewers (AR and LR). After each review stage, the reviewers' agreement will be assessed and a third reviewer (SD) will be consulted in cases of disagreement, until consensus is achieved.
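Reviewer agreement on the pilot sample could be quantified before full screening proceeds. The protocol does not prescribe a particular statistic; the sketch below, using hypothetical decisions, shows one common choice (percent agreement and Cohen's kappa):

    # Hypothetical include (1) / exclude (0) decisions from the 5% pilot screen.
    reviewer_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
    reviewer_b = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]

    n = len(reviewer_a)
    observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n

    # Expected chance agreement from each reviewer's marginal include rates.
    p_a = sum(reviewer_a) / n
    p_b = sum(reviewer_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)

    kappa = (observed - expected) / (1 - expected)
    print(f"observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")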
Eligibility criteria for a full-text article to be included have been developed a priori with the assistance of the stakeholder team. These criteria are identified below in relation to participants, interventions, outcomes, context and study design.
Participants
Populations of interest will include: (1) adults with SMI/SUDs who have received an intervention designed to integrate care or (2) healthcare professionals or associated staff (including unregistered health workers, managers and administrators; hereafter referred to collectively as 'health providers') who were involved in the delivery of an intervention to integrate care for people with SMI/SUDs. In the present investigation, SMI is defined as mental illnesses (schizophrenia, schizoaffective disorder, bipolar disorder, major depression and other psychoses) that produce severe and debilitating symptoms for 12 months or more. 2 51 Following feedback from our stakeholder group, and given the widely recognised challenges associated with integrating care for people with SUDs, the decision was made to also review interventions for this population. SUDs are defined as problems resulting from alcohol or other drug use for 12 months or more. 6 Despite facing many of the same health and mortality burdens as people with SMI, as well as inequities in access to appropriate care, this population is frequently overlooked in the development and evaluation of clinically integrated service delivery approaches. 45
Interventions
Studies and reports describing interventions (ie, activities, programmes or strategies) with the explicit goal of integrating care for people with SMI/SUDs, addressing any of the key components of integrated care defined by Singer et al, 34 will be eligible for inclusion. This includes studies endeavouring to integrate care both within and between organisations and services. Eligible integrated care interventions can be very specific or can be implemented across a broad range of domains (ie, funding, administrative, organisational, service delivery and clinical domains). 52

Outcomes
A broad range of service user and provider outcomes will be included in order to identify which outcomes have been most frequently examined. However, primary outcomes of interest will be service user health behaviours and physical health outcomes, given the potential of integrated care to increase access to treatments designed to improve physical health. Examples of secondary outcomes for consideration include: cost-effectiveness, patterns of healthcare utilisation and perceived satisfaction with an intervention (from service user and/or provider perspectives). Studies investigating process-oriented indicators and evaluation outcomes will be excluded, as the focus of this scoping review is on identifying the specific outcomes integrated care approaches are endeavouring to improve.
Context
Studies and reports published between January 2000 and April 2019 will be eligible for inclusion. This time period was selected to ensure identification of interventions likely to be relevant and applicable to contemporary healthcare contexts. Interventions delivered in any healthcare setting will be eligible, including primary care and community care settings, forensic settings, outpatient clinics, acute care hospitals and long-term care facilities.
Study design
All empirical investigations examining outcomes following the implementation of an integrated care intervention using quantitative, qualitative or mixed methods designs will be eligible for inclusion. Quantitative studies will include randomised and non-randomised controlled trials, as well as studies implementing before-after designs (with or without a control group), and cross-sectional studies. Qualitative investigations of participants' perceptions or experiences of an intervention will also be considered, including (but not limited to) designs such as qualitative description, phenomenology, grounded theory, ethnography and action research. Pilot studies will be included, whereas conceptual articles will be excluded, in addition to those reporting case study and quality improvement designs.
Data extraction
Data will be extracted according to the recommendations of Arksey and O'Malley. 48 A standardised Excel extraction spreadsheet will be used to record: author(s), year of publication, study location, intervention type (and any comparator), underlying theory of integrated care, duration of the intervention, study population, aims of the study, methods, outcomes and key findings. Data extraction will be performed independently by two researchers (AR and KG), and compared by a third researcher (SD). The third researcher will be consulted to resolve any discrepancies in data extraction relating to each study. Possible additions/modifications to the data extraction form may be made after review of the first five references in order to ensure that all relevant information will be captured.
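As a sketch of how the standardised extraction form might be realised in practice, the record below uses the field list from the protocol; the study values are hypothetical:

    import csv

    FIELDS = ["authors", "year", "location", "intervention_type", "comparator",
              "integration_theory", "duration", "population", "aims",
              "methods", "outcomes", "key_findings"]

    # Hypothetical example row; real rows would be completed independently
    # by two researchers and compared by a third.
    record = dict(zip(FIELDS, [
        "Example et al.", 2015, "New Zealand", "co-location of services",
        "usual care", "not reported", "12 months", "adults with SMI",
        "improve access to physical healthcare", "before-after design",
        "healthcare utilisation", "increased primary care contacts",
    ]))

    with open("extraction.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerow(record)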
Collating, summarising and reporting
A two-step approach will be used to summarise the findings of included studies.
Step one will involve a narrative synthesis of the characteristics and findings of the studies (including tabular and/or graphical summaries). Studies will be organised according to intervention type in order to highlight the range of integrated care approaches for people with SMI/SUDs that have been empirically evaluated. The underlying theory of integrated care associated with each included intervention will also be described (where this information is available).
Step two will identify the degree to which interventions for people with SMI/SUDs have addressed dimensions of integrated care as conceptualised by Singer et al. 34 A framework synthesis will be conducted to review the included interventions, with coding and analysis directed by the integrated care framework. Specifically, we are interested in qualitatively analysing the extent to which each intervention description addresses the seven elements of integrated care: (1) coordination within a care team, (2) coordination across care teams, (3) coordination between care teams and community resources, (4) continuous familiarity with patients over time, (5) continuous proactive and responsive action between visits, (6) service user-centred care and (7) shared responsibility. 34 To do so, descriptions of the interventions will be imported into an Excel spreadsheet and analysed by two authors (AR and LR); both researchers have previous experience coding qualitative data. The a priori coding framework will be applied to each intervention description independently by the researchers. Results from these analyses will be summarised in order to highlight dimensions of integrated care that require further investigation and implementation among people with SMI/SUDs.
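A minimal sketch of this a priori coding frame: each intervention description is scored against the seven Singer et al dimensions and under-represented dimensions are tallied. The dimension labels are taken from the protocol; the study names and codes are hypothetical:

    # The seven integrated care dimensions (Singer et al) as the coding frame.
    DIMENSIONS = [
        "coordination within care team",
        "coordination across care teams",
        "coordination between care teams and community resources",
        "continuous familiarity with patients over time",
        "continuous proactive and responsive action between visits",
        "service user-centred care",
        "shared responsibility",
    ]

    # 1 = dimension addressed in the intervention description (hypothetical).
    codings = {
        "Study A (co-location)":     [1, 1, 0, 1, 0, 0, 0],
        "Study B (case management)": [1, 1, 1, 1, 1, 0, 0],
    }

    # Summarise which dimensions are under-represented across interventions.
    for i, dim in enumerate(DIMENSIONS):
        hits = sum(code[i] for code in codings.values())
        print(f"{dim}: addressed in {hits}/{len(codings)} interventions")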
Consultation process
The aim of this review is to identify and describe empirical investigations of interventions to integrate care for people with SMI/SUDs in order to highlight which interventions have been associated with positive outcomes and which dimensions of integrated care have been targeted. This information will have potential to inform both future research activity and clinical practice. In order to ensure that findings of the review are of relevance to mental health and addiction service users, and those who provide care to these individuals, we have engaged a stakeholder team as mentioned above. Stakeholders have been involved in developing the research questions guiding this review and have reviewed the search strategy to identify key terms that are relevant to the population, concept and contexts of interest presented in this protocol. Stakeholders will also be involved in interpreting the review findings and will advise on dissemination.
Patient and public involvement
This scoping review protocol has engaged the expertise of individuals with lived experience of SMI. These individuals have contributed to the development of the research questions, reviewed and made suggestions to the proposed search terms, and will be extensively involved during the interpretation and dissemination phases of this project.
Ethics and dissemination
Although integrated care is increasingly recommended for people with SMI/SUDs, it is unclear what elements of integrated care have been investigated in empirical evaluations of interventions designed to improve outcomes for these populations. To our knowledge, our scoping review will be the first to systematically describe the extent and nature of interventions to integrate care for people with SMI and people with SUDs, including which outcomes these interventions have endeavoured to modify. Therefore, the scoping review findings are expected to be of interest to service users, researchers, clinicians and policymakers. Our dissemination strategy will include publication of the review in an open-access peer-reviewed journal (i.e., available to service users, their families and the general public), and scientific presentations of the findings at conferences and to staff working within a range of mental health and addiction settings. All stakeholders will be involved in interpreting the review findings and ensuring that these are widely disseminated through their respective networks-including to service users. This will be facilitated by a half-day round-table meeting with our stakeholder group. During this meeting, findings of the review will be discussed and opportunities for future areas of research and clinical practice work will be brainstormed. It is hoped that stakeholders' knowledge and interpretations of the review findings will identify clear priorities for changes in the development and delivery of integrated care.
Bilateral ovarian masses with different histopathology in each ovary
Key Clinical Message We document the rare occurrence of multiple primary benign lesions that can occur in bilateral ovarian masses with benign imaging appearances and tumor markers. In addition, this case report contributes important information that may aid physicians in guiding their patients to make optimal clinical decisions together.
Introduction
A female's lifetime risk of developing an ovarian tumor is 6.0-7.0% [1], and these tumors account for up to 30% of all cancers of the female genital system. Surface epithelial tumors are the most common variety and account for approximately 65-75% of all ovarian tumors [2]. The most common epithelial ovarian neoplasms encountered are benign cystadenomas, of which 75% are serous cystadenomas and 25% are mucinous cystadenomas [3]. The occurrence of mixed epithelial tumors is rare, while the occurrence of two different types of ovarian tumors in each of the ovaries is very rare, with only a few cases having been documented. We present a case of a 35-year-old woman with bilateral ovarian masses treated by laparoscopic cystectomy, in whom histopathological examination revealed two different ovarian tumors.
Case Report
A 35-year-old nulligravid woman presented to our gynecology outpatient clinic of the King Fahad Medical City, Saudi Arabia, with gradual distension of the abdomen and discomfort over 1 year. The swelling was accompanied by mild lower abdominal pain, constipation, and poor appetite. There was no history of vomiting or other gastrointestinal symptoms, urinary symptoms, colicky pain, and fainting attacks. She had no previous history of any illnesses, allergies, or operations. She denied the use of any medications. There was no family history of malignancies. Her menarche commenced at the age of 12 years.
Her body weight was 80 kg, her height was 161 cm, and her BMI was 30.86 kg/m². Physical examination demonstrated that there was no jaundice, edema, or lymphadenopathy, and secondary sexual characteristics were evident. Abdominal examination revealed a large, ill-defined pelvic-abdominal cystic mass extending from the pubis up to the umbilicus, with an abdominal girth of 95 cm. There was dullness upon percussion but no tenderness. Upon auscultation, the intestinal sounds were normal. Her external genitalia were normal, with no abnormality detected by speculum examination. Bimanual examination revealed a normal-sized uterus, and a cystic mass approximately 7 cm in diameter was felt bilaterally near the posterior fornix.
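The reported BMI follows directly from the standard formula:

$$\mathrm{BMI} = \frac{\text{weight}}{\text{height}^2} = \frac{80\ \mathrm{kg}}{(1.61\ \mathrm{m})^2} = \frac{80}{2.5921} \approx 30.86\ \mathrm{kg/m^2}$$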
Transabdominal and transvaginal ultrasound were performed, which showed bilateral pelvic multiloculated cystic masses, approximately 13 × 10 cm in the right ovary and 6 × 5 cm in the left ovary, with evidence of solid components and septations. The uterus was normal, and endometrial thickness was 8 mm. CA-125 was 30 IU/mL, and other tumor markers (alpha-fetoprotein, lactate dehydrogenase, carcinoembryonic antigen, beta human chorionic gonadotropin) were within normal ranges. Magnetic resonance imaging (MRI) findings were consistent with bilateral multiloculated cystic ovarian lesions. The cyst on the right side measured 13.6 × 15.4 × 6.8 cm, while that on the left side measured 3.6 × 7.4 × 3.1 cm in the anteroposterior, transverse, and craniocaudal dimensions (Fig. 1). Thus, an ovarian cystadenoma was suspected. No abdominopelvic metastases or lymphadenopathy were reported. After the patient was counseled, she signed informed consent for laparoscopic bilateral ovarian cystectomy. The procedure was performed without complications. Intraoperatively, both ovarian cyst walls were identified and removed using blunt dissection with countertraction, without disruption of the capsule. The specimen was intact, placed in the Endo Catch bag, and sent for histopathology.
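For context, cyst volume can be estimated from the three orthogonal MRI diameters using the standard ellipsoid approximation; this calculation is illustrative and was not reported by the authors:

$$V \approx \frac{\pi}{6}\, d_1 d_2 d_3 = \frac{\pi}{6}(13.6)(15.4)(6.8) \approx 746\ \mathrm{cm^3}$$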
Histopathological examination revealed that the right cyst was approximately 14 cm with surface papillary excrescences, containing straw-colored fluid (Fig. 2). The left ovarian cyst was approximately 6 cm and was multiloculated with thick walls. Microscopic examination of the right ovary showed that the cyst wall was lined by simple columnar lining with papillary proliferations, while the left cyst revealed a thick wall with endocervical-like mucinous cell lining (Fig. 3). The diagnosis was made as right ovarian benign serous cystadenoma and left ovarian benign mucinous cystadenoma.
The postoperative period was uneventful. The patient was discharged on the 2nd postoperative day. She returned to her normal daily activities and was advised to follow up after 4 weeks. Consent for publication of the report was obtained from the patient as well as from the institutional review board (IRB).
Discussion
Ovarian neoplasms may be divided into four main groups: epithelial tumors (65-75%), germ cell tumors (15%), sex-cord-stromal tumors (5-10%), and metastatic tumors (10%) [2]. The epithelial tumors are among the most prevalent, with the single most common benign ovarian neoplasm being the cystic teratoma; others have reported serous cystadenoma as the most common type. Serous or mucinous cystadenomas of the ovary arise from the Mullerian germinal epithelium and usually present after puberty [3]. Approximately 50% of serous tumors are benign, while 15% are borderline and 35% are invasive carcinomas [4]. Mucinous cystadenoma is usually unilateral. This is also the case for benign serous tumors, of which only 20% are bilateral. In this case, we report a rare occurrence of two different types of benign epithelial histopathology, one in each ovary.
The most common complications of benign ovarian cysts are torsion, hemorrhage, and rupture. Small ovarian cysts are usually asymptomatic and may be found incidentally either clinically or on ultrasound. There are many differential diagnoses for ovarian cysts, such as functional cysts, omental cysts, and mesenteric cysts [5]. The most common management options are conservative surgery, ovarian cystectomy, and salpingo-oophorectomy for benign lesions [6]. In young women, one of the main goals is to preserve the reproductive and hormonal functions of the ovaries while preventing recurrence. In this case, we performed treatment with laparoscopic bilateral ovarian cystectomy with the main aim of preserving the patient's hormonal functions.
Microscopic features demonstrate that the serous cystadenoma is lined by a flat (one-cell-layer) ciliated epithelium covering broad fibrous stromal cores, with bland ovoid basal nuclei. On the other hand, microscopic features demonstrate that the mucinous cystadenoma has a layer of columnar cells that are endocervical-like or intestinal-like, with uniform round or oval basal nuclei and clear or amphophilic cytoplasm, lining a fibrous stroma [7].
Most of the bilateral ovarian masses with multiple primary tumors reported in the literature are malignant in nature [8]. In our case, both histopathological types were benign: serous cystadenoma in one ovary and mucinous cystadenoma in the other. Only a few cases of bilateral ovarian masses with multiple primary benign lesions have been reported [9].
Conclusion
We document the rare occurrence of multiple primary benign lesions in bilateral ovarian masses with benign imaging appearances and tumor markers. In addition, this case report contributes important information that may aid physicians in guiding their patients to make optimal clinical decisions together.
Tetracyclines improve experimental lymphatic filariasis pathology by disrupting interleukin-4 receptor–mediated lymphangiogenesis
Lymphatic filariasis is the major global cause of nonhereditary lymphedema. We demonstrate that the filarial nematode Brugia malayi induced lymphatic remodeling and impaired lymphatic drainage following parasitism of limb lymphatics in a mouse model. Lymphatic insufficiency was associated with elevated circulating lymphangiogenic mediators, including vascular endothelial growth factor C. Lymphatic insufficiency was dependent on type 2 adaptive immunity, the interleukin-4 receptor, and recruitment of C-C chemokine receptor-2–positive monocytes and alternatively activated macrophages with a prolymphangiogenic phenotype. Oral treatments with second-generation tetracyclines improved lymphatic function, while other classes of antibiotic had no significant effect. Second-generation tetracyclines directly targeted lymphatic endothelial cell proliferation and modified type 2 prolymphangiogenic macrophage development. Doxycycline treatment impeded monocyte recruitment, inhibited polarization of alternatively activated macrophages, and suppressed T cell adaptive immune responses following infection. Our results determine a mechanism of action for the antimorbidity effects of doxycycline in filariasis and support clinical evaluation of second-generation tetracyclines as affordable, safe therapeutics for lymphedemas of chronic inflammatory origin.
Introduction
Lymphedema (LE) affects 200 million individuals worldwide (1). LE is caused by disruption of normal lymphatic function whereby return drainage of fluid, proteins, fats, and immune cells (lymph) is impaired (2). LE is either hereditary, caused by mutations in genes controlling lymphatic development, or nonhereditary, caused by infection, trauma, or surgical removal of lymphatics to prevent cancer metastasis (2,3). The major cause of secondary LE is lymphatic filariasis (LF), a neglected tropical disease affecting an estimated 67 million people, with a further 890 million at risk (4). Filarial LE causes life-long physical and associative mental disability (5), ranking LF as the fourth highest contributor to global disability-adjusted life-years. Tangible progress has been made in LF elimination via mass drug administration of antifilarial drugs, effectively halving the number of active infections between 2000 and 2013 (4), whereas the number of LE patients remained static at 40 million over the same time period. Current treatment for filarial LE is limited to morbidity management and disability prevention, which involves an array of hygienic measures and implementation of physiotherapy in the household (6). No chemotherapeutic interventions are indicated for filarial LE. However, antibiotics are recommended to treat secondary skin bacterial infections, which can reduce the frequency of periodic inflammatory episodes known as acute dermatolymphangioadenitis (ADLA), a form of cellulitis. In a recent placebo-controlled clinical trial, while both amoxicillin (the standard antibiotic treatment for ADLA) and doxycycline reduced the frequency of ADLA, doxycycline also showed surprising efficacy in reversing LE grade (7).
In this study, we developed a murine hind-limb model of filarial infection, utilizing longitudinal intravital imaging to demonstrate that filarial infective larvae induce rapid lymphatic alterations associated with induction of lymphatic insufficiency. We demonstrate that early filarial lymphatic pathology is primarily host-immune driven, characterizing an interleukin-4 receptor (IL-4R) type 2-dependent axis involving recruitment of inflammatory monocytes and alternatively activated macrophages (AAMΦs) that promote the development of lymphatic disease. We demonstrate that second-generation tetracyclines can target multiple aspects of this pathway to ameliorate lymphatic pathology.
Results
Following hind-limb inoculation, motile B. malayi infective larvae (BmL3) could be visualized within the collecting lymphatic vessels (lymphangions) of the infected hind limb (Figure 1B and Supplemental Video 1; supplemental material available online with this article; https://doi.org/10.1172/JCI140853DS1). Motile BmL3 could be observed within superficial dermal lymphatics from 3 hours to 4 days after infection. Near-infrared (NIR) intravital indocyanine green (ICG) lymphography was undertaken to investigate the impact of B. malayi larval infection on lymphatic structure and function (Figure 1A, Supplemental Figure 1, and Supplemental File 1). Clinical ICG lymphography has characteristic "splash," "stardust," and "diffuse" dermal backflow patterns, and visualization of tortuous collateral lymphatics, associated with onset of LE in patients (17). At 2 weeks after B. malayi infection, we observed the presence of all 3 dermal backflow patterns and tortuous collateral lymphatic development (Figure 1C and Supplemental Figure 2, A and B). By image analysis, we determined that BmL3-infected C57BL/6J mice displayed significant levels of lymphatic remodeling in dorsal, lateral, and ventral aspects of the infected limb (Figure 1, C and D). Remodeling was pronounced at sites proximal to initial invasion of the superficial lymphatics, although by this time point there was no evidence of motile intralymphatic larvae. By epifluorescence imaging, we could detect significant, mean 2-fold dilations of Prox-1+ lymphatic vessels at 2 weeks after infection (Figure 1, E and F). By comparing ICG dermal backflow in infected and uninfected limbs, significant ICG retention was evident in the infected limbs compared with sham controls (Figure 1G). Further, in an Evans blue (EB) dermal retention assay (Supplemental Figure 1), significant EB accumulation in the skin of BmL3-infected limbs was discerned (Figure 1H). Repeat experiments using BALB/c mice demonstrated that all aspects of lymphatic pathology were reproducible in this background strain, although to a generally lower degree of severity (Supplemental Figure 2).

Because inbred mice mount an effective adaptive immune response to control B. malayi infection before chronic adult intralymphatic filarial parasitism can become established (18), we next investigated whether infection-induced lymphatic remodeling and dysfunction resolved after clearance of filarial infection. BmL3-infected mice imaged at 16 weeks after infection retained backflow and tortuous lymphatic patterning, with no significant decline in lymphatic remodeling or levels of lymphatic insufficiency compared with 2 weeks after infection (Figure 1, C, I, and J). At 16 weeks after infection, there was no evidence of active intralymphatic adult parasitism or circulating microfilariae, indicating that lymphatic pathology persists long after initial induction by filarial infection.
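Lymphatic remodeling and dermal backflow are quantified here by image analysis of the NIR ICG lymphographs. As a purely illustrative sketch of one such metric (this is not the authors' published pipeline; the masks are assumed to come from an upstream segmentation step), a backflow burden can be expressed as the fraction of imaged limb area showing extravascular ICG signal:

    import numpy as np

    # Hypothetical binary masks derived from a single NIR ICG lymphograph:
    # limb_mask marks the imaged limb region; backflow_mask marks dermal ICG
    # signal outside the main collecting vessels. Both are simulated here.
    rng = np.random.default_rng(0)
    limb_mask = np.ones((256, 256), dtype=bool)
    backflow_mask = rng.random((256, 256)) < 0.08  # placeholder segmentation

    # Remodeling score: fraction of limb pixels exhibiting dermal backflow.
    score = (backflow_mask & limb_mask).sum() / limb_mask.sum()
    print(f"dermal backflow area fraction: {score:.3f}")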
To explore host molecular mechanisms mediating filarial lymphatic pathology, we compared circulating plasma concentrations of a focused array of angiogenic/lymphangiogenic factors between BmL3- and sham-infected cohorts at 14 days postinfection (dpi). A milieu of lymphangiogenic factors was upregulated in BmL3-infected mice, including vascular endothelial growth factor C (VEGF-C), soluble activin receptor-like kinase 1 (sALK-1), and prolactin (Figure 2, A and B). As VEGF-C is a well-characterized primary lymphangiogenic mediator (19), we investigated the impact of isolated VEGF-C delivery to the hind-limb skin-draining lymph nodes (sdLNs).
We administered a VEGF-C-expressing adenoviral vector (adVEGF-C) to increase local VEGF-C signaling in the same anatomical areas exposed to BmL3 infection. adVEGF-C-treated groups displayed significantly higher levels of both lymphatic remodeling and insufficiency, compared with both naive mice and control mice treated with GFP-expressing adenoviral vector (adGFP), with mid-dose adVEGF-C administration recapitulating magnitudes of lymphatic remodeling and pathology comparable to 14-dpi BmL3-infected mice (Supplemental Figure 3).
Filarial lymphatic pathology is dependent on IL-4R type 2 adaptive immune responses. Previous clinical studies have demonstrated a link between symptomatic LF and enhanced parasite-specific host adaptive immune responses (20, 21). In mice, a polarized type 2 adaptive immune response coordinates effective eosinophil-mediated immunity against larval stage filariae (22). We investigated the role of adaptive immunity by comparing magnitudes of lymphatic remodeling and insufficiency between WT and severe combined immunodeficient (SCID) mice lacking functional B and T lymphocytes. BmL3-infected CB.17 (BALB/c congenic) SCID mice displayed muted levels of lymphatic remodeling that were not significantly different compared with sham controls and significantly lower than corresponding WT BALB/c infections assessed at either 2 or 5 weeks after infection (Figure 3, A and B). Concomitantly, no significant difference in lymphatic insufficiency was observed between sham and infected SCID mice, judged by either ICG or EB dermal backflow at 2 weeks after infection (Figure 3, C and D). We then characterized the localized CD4+ T cell adaptive immune response in sdLNs and major afferent lymphatic collecting vessels proximal to filaria-parasitized and -remodeled lymphatic tissues, utilizing intracellular cytokine flow cytometry (Supplemental Figure 4). Significant expansions of type 2 IL-4- and IL-13-secreting CD4+ T cells were observed in sdLN single-cell suspensions derived from BmL3-infected mice at 14 dpi, and CD4+ secretion levels of the regulatory-type cytokine IL-10 were also increased, while local secretion levels of the type 1 cytokine IFN-γ remained unaltered following infection (Figure 3, E and F). Subsequently, we tested whether ablation of type 2 immune signaling would affect the severity of filarial lymphatic pathology, utilizing IL-4Rα-deficient mice, which are unable to respond to either IL-4 or IL-13. Following BmL3 infection, IL-4Rα-knockout (IL-4Rα-/-) mice, on either the BALB/c or C57BL/6J background, exhibited significantly diminished lymphatic remodeling and lymphatic dysfunction (Figure 4, A-D). Levels of circulating lymphangiogenic mediators were also significantly abrogated in IL-4Rα-/- BmL3-infected mice, notably VEGF-C and angiopoietin 2 (Ang-2) (Figure 4, E and F). These data indicate a functional role for IL-4R-dependent type 2 adaptive immune responses in the induction of early filarial lymphatic pathology.

Prolymphangiogenic inflammatory monocytes and AAMΦs are mediators of filarial lymphatic dysfunction. We investigated the contribution of local cellular inflammatory responses in mediating filarial lymphatic pathology. By immunophenotyping the sdLNs and major afferent lymphatic collecting vessels proximal to BmL3 inoculation sites, we determined significant expansions of CD11b+Ly6C+CCR2+ inflammatory monocyte and CD11b+F4/80+MHCII+ MΦ populations (Figure 5, A and B), significant eosinophilic and neutrophilic granulocyte recruitment, and T and B lymphocyte proliferation (Supplemental Figure 5). In the absence of functional IL-4Rα signaling, a slight decrease in monocyte recruitment was observed and lymphatic tissue MΦ expansions were significantly impeded following filarial infection (Figure 5, A and B). A significant, 2-fold reduction in MΦs expressing the tissue residency marker Tim-4 (23), in filaria-infected WT but not IL-4Rα-/- mice, was apparent (Figure 5, C and D), suggestive of IL-4R-dependent recruitment of monocyte-derived MΦs within the expanded lymphatic tissue MΦ pool after BmL3 infection. Filarial infection-expanded lymphatic tissue MΦs also displayed significantly increased expression of the AAMΦ markers RELM-α and CD206 (mannose receptor, a specific marker of alternative activation within monocyte-derived MΦs in cardiac and hepatic tissues; refs. 24, 25, and Figure 3, C and D). AAMΦ development after filarial infection in proximal lymphatic tissues was completely abrogated in the absence of intact IL-4R signaling (Figure 3, C and D). AAMΦ polarization is a well-characterized hallmark of filarial infection (26), and IL-4-stimulated AAMΦs are important in mediating filarial expulsion by sustaining recruitment of eosinophils (22). Because MΦs are potent cellular mediators of angiogenesis and lymphangiogenesis (27), we explored the lymphangiogenic phenotype of purified monocytes and macrophages from lymphatic tissues after filarial infection. FACS-isolated CD11b+Ly-6C+CCR2+ inflammatory monocytes secreted significantly higher concentrations of prolactin, sALK-1, IL-6, and amphiregulin, while CD11b+F4/80+MHCII+ MΦs secreted significantly higher levels of VEGF-C, compared with sham-infected controls (Figure 5E). In a tandem approach, we examined the direct prolymphangiogenic potential of type 2 cytokine- or filaria-stimulated human THP-1 monocyte-derived MΦs. For this purpose, we developed a human dermal lymphatic endothelial cell (LEC) proliferation assay following coculture with monocyte-derived MΦs preconditioned with recombinant (r) IFN-γ, rIL-4+rIL-13, and live BmL3 or BmL3 extract (BmL3E) (Figure 6A). rIL-4+rIL-13-, live BmL3-, and BmL3E-conditioned MΦs mediated significant LEC proliferation compared with LECs cultured in basal media alone or in the presence of naive THP-1 monocyte-derived MΦs (Figure 6B). Analysis of conditioned media from rIL-4+rIL-13-stimulated monocyte-derived MΦs revealed significantly elevated levels of the prolymphangiogenic mediators VEGF-A, follistatin, and hepatocyte growth factor (HGF) (Figure 6C), while significantly elevated VEGF-C and HGF were observed in BmL3E-pulsed MΦ-conditioned media (Figure 6D). This human cell coculture system confirmed that monocyte-derived MΦs exposed to both a type 2 cytokine milieu and filarial stimulation develop a prolymphangiogenic phenotype.

To interrogate the functional role of prolymphangiogenic monocytes and monocyte-derived MΦs recruited to the site of filaria-parasitized lymphatics, we blocked CCR2+ monocyte recruitment following BmL3 infection by administration of an anti-CCR2 ablating antibody (28). In a complementary approach, we reduced total phagocyte cell populations, including monocytes and MΦs, by local subcutaneous administration of clodronate encapsulated in liposomes (Figure 7A). Confirming treatment efficacy, both anti-CCR2 and clodronate liposome treatments delivered to filaria-infected mice successfully reduced circulating blood monocyte populations. Further, anti-CCR2 significantly reduced lymphatic-associated monocyte populations following infection (Figure 7, B and C). Following ablations of monocyte and total phagocyte populations, while remodeled lymphatics were still apparent, the magnitude of lymphatic insufficiency was significantly reduced, as demonstrated by reduced backflow of ICG following anti-CCR2 treatment (Figure 7, F and G) and reduced dermal retention of EB following both anti-CCR2 and clodronate liposome treatments (Figure 7H). Additionally, dermal lymphatic vessel dilation was significantly reduced following treatment.

We next assessed whether the filaria-lymphatic pathology model was responsive to oral doxycycline intervention. After 14 days of infection and cotreatment with a doxycycline regimen bioequivalent to human 200 mg daily oral dosing (ref. 30 and Figure 8A), mice exhibited significantly lower levels of both lymphatic remodeling (Figure 8, B and C) and lymphatic insufficiency compared with infected, vehicle-treated control animals (Figure 8, D and E). We did not observe direct antifilarial efficacy at these treatment dose ranges against B. malayi developing larvae up to 14 days, ruling out a direct antiparasitic mode of action contributing toward reduced pathology (Supplemental Figure 6). Because the filarial endosymbiont Wolbachia is depleted by tetracyclines (30, 31) and can trigger innate inflammation via ligation of surface lipoproteins by Toll-like receptor 2 and 6 heterodimers (TLR2/6) (32), we investigated whether initiation of lymphatic pathology was influenced by Wolbachia. In addition, using the related second-generation antibiotic minocycline and a selection of different classes of antibiotics, we tested whether suppression of lymphatic pathology was a phenomenon unique to the tetracycline class or could be mediated by other antibiotics with anti-Wolbachia and/or broad-spectrum antibacterial activities (Figure 9A). We selected high-dose rifampicin as a broad-spectrum antibiotic with superior anti-Wolbachia activity compared with tetracyclines (33), as well as amoxicillin and chloramphenicol; both potent, broad-spectrum antibiotics lack significant anti-Wolbachia activity (34). Similar to effects observed with doxycycline, minocycline delivered at doses bioequivalent to 100 mg human oral exposures (30) led to significantly reduced severity of lymphatic remodeling and insufficiency (Figure 9, B-D). Comparatively, none of the other administered broad-spectrum antibiotics (amoxicillin, chloramphenicol, or rifampicin) had any significant effect on either lymphatic remodeling or insufficiency following filarial lymphatic infection (Figure 9, B-D). Filaria-infected TLR6-deficient mice displayed no significant difference in either magnitude of lymphatic remodeling or lymphatic insufficiency compared with WT controls (Figure 9, E-G). Together, these data define a specific antimorbidity efficacy of second-generation tetracyclines in ameliorating filaria-induced lymphatic pathology, independently of general antibiotic or anti-Wolbachia-specific modes of action.

We tested which facets of the type 2 inflammatory lymphangiogenic pathway induced by filarial infection were targeted by tetracyclines. We first investigated whether doxycycline could directly affect lymphangiogenesis in vitro. Growth assays, utilizing time-lapse microscopy to longitudinally quantify LEC or tissue-equivalent adult human dermal microvascular endothelial cell (blood endothelial cell; BEC) proliferation over 9 days, were performed (Supplemental File 2). Treatment of LECs or BECs with 10 or 20 μM doxycycline impeded proliferation in response to a VEGF-A stimulus in a dose-dependent manner (Figure 10, A and B, and Supplemental File 2). Similar effects were obtained with BECs and LECs treated with minocycline (Supplemental Figure 7). We then treated monocyte-derived MΦs with 10 μM doxycycline simultaneously during stimulation with live BmL3, BmL3 with type 2 cytokines, or BmL3E. MΦs were washed to remove drug before their transfer within Transwells onto LEC cultures (Figure 10C). While rIL4+rIL13-, BmL3+rIL4+rIL13-, and BmL3E-pulsed MΦs mediated significant LEC proliferation, this effect was abolished by pretreatment with doxycycline (Figure 10D). Addition of 10 μM doxycycline to BmL3E-pulsed MΦ and LEC cocultures also abrogated LEC proliferation (Figure 10E). No significant cytotoxicity was discerned when LECs or THP-1 MΦs were exposed to 10 or 20 μM doxycycline, and LEC cultures responded to a VEGF proliferating stimulus following removal of drug (Supplemental Figure 8). These in vitro data indicate that second-generation tetracyclines reversibly suppress VEGF-mediated lymphangiogenesis and, independently, the development of prolymphangiogenic monocyte-derived MΦs following filarial and/or type 2 cytokine stimulation.

Using the filarial lymphatic pathology mouse model, we immunophenotyped lymphatic-associated myeloid cell populations following doxycycline treatment; consistent with impeded monocyte recruitment, lymphatic-associated monocyte populations were reduced in infected, doxycycline-treated mice (Figure 11A). Eosinophil levels in lymphatic tissues were also significantly reduced in infected mice following doxycycline treatment (Figure 11A). Doxycycline treatment also significantly blocked AAMΦ polarization, as measured by reduced populations of RELM-α+ MΦs (Figure 11, B and C). We examined if this modified myeloid cell recruitment and reduced AAMΦ lymphangiogenic potential resulted in reduced local concentrations of the lymphangiogenic milieu. Ex vivo culture of single-cell suspensions prepared from sdLNs and adjacent lymphatic channels of filaria-infected mice treated with doxycycline demonstrated reductions in multiple lymphangiogenic secretions compared with infection controls (Figure 11D). Follistatin was significantly reduced, while VEGF-C secretions remained at sham-infection control levels (Figure 11E). We then examined whether the initial, predominant type 2 adaptive immune response important for mediating lymphatic pathology was perturbed by doxycycline, assessing splenocyte recall assays to evaluate systemic immune responses. Doxycycline treatments modified numerous cytokines compared with BmL3 infection alone (Figure 11, F and G). Reductions in secretions of the type 2 cytokines IL-3, IL-4, IL-9, and IL-5 were observed after doxycycline treatment in infected mice. Additionally, modified systemic type 1 (IFN-γ) and type 17 (IL-17) splenocyte secretions were recorded after doxycycline treatment. Further, general reductions in chemokine production, including mediators of monocyte and macrophage activation (CXCL2, G-CSF), as well as the prolymphangiogenic growth factor VEGF-A, were observed within splenocytes after doxycycline treatment (Supplemental Figure 9). Therefore, second-generation tetracyclines target multiple aspects of the type 2 inflammatory lymphangiogenic axis induced by filarial larval infection, as well as directly targeting lymphatic endothelial proliferation, to modify lymphatic filarial disease.
Strain-dependent magnitude of lymphatic remodeling, whereby BALB/c mice exhibited reduced pathology compared with C57BL/6J mice, reflects the relative vigor of sterilizing immunity against filarial infection between these 2 strains (38). Indeed, severity of LE in filariasis patients is associated with mag-
Discussion
We reveal persistent lymphatic dilation, remodeling, and dermal backflow patterns in mice that emulate clinical lymphatic remodeling in both filarial and nonfilarial LE patients (14)(15)(16)(17). Further, we record significant upregulation of the prolymphangiogenic circulating factors Ang-2, TNF-α, and VEGF-C, which are clinical serological markers of filariasis infection and LE pathology (29,35,36). Thus, we conclude that our preclinical model is representative of early lymphatic pathological changes in filariasis patients and a useful tool to interrogate the pathophysiology and therapeutic targeting of filarial disease.
Our model revealed that, surprisingly, abbreviated larval filarial infections, in as little as 6 days, could rapidly induce enduring lymphatic pathology without the necessity for establishment of chronic angiogenic mediators at the site of filarial lymphatic pathology. Clinically, it has been shown that circulating blood mononuclear cells derived from filarial LE patients also demonstrate heightened VEGF-A/-C production upon ex vivo stimulation with either TLR or filarial antigens (47).
By serial depletion of CCR2 + monocytes or total phagocytes in vivo, we confirmed that temporal monocyte deficiency and impaired lymphatic recruitment alleviated lymphatic dysfunction and reduced lymphatic dilation. Similarly, CCR2 + monocyte recruitment has been demonstrated to mediate intestinal inflammatory lymphangiogenesis (48), whereas monocyte CD36 blockade prevents corneal lymphangiogenesis (49), suggesting a common mechanism in inflammatory lymphangiogenesis induction. We hypothesize that the gross local dilation in parasitized skin lymphangions impairs trafficking of solutes from proximal interstitial spaces during type 2 filarial inflammation. Lymphangion lumen dilation to the point of valve dysfunction has been proposed as a mechanism for lymphostasis in postsurgical LE (50). In filarial hydrocele pathology, gross honeycomb dilation of the supratesticular lymphatics correlates with circulating VEGF-A levels (15). As VEGF-A and VEGF-C both activate lymphatic endothelium via VEGFR1/2 and VEGFR3, respectively, our data support VEGF-A/-C-specific activation of the superficial lymphatics during filarial type 2 inflammation, delivered by recruited CCR2 + monocytes and their subsequent differentiation into AAMΦs. However, we also identified circulating and monocyte-specific production of other lymphangiogenic factors, namely sALK-1 and prolactin, while another lymphangiogenic factor, Ang-2, which was IL-4R type 2 dependent in circulation, was not produced by the monocyte/MΦ lineage within parasitized lymphatics. This suggests additional lymphangiogenic factors contribute to remodeling events during initiation of type 2 filarial inflammation within sdLNs. The relative functional roles of these multiple growth factors need investigating to determine whether targeted antiangiogenics may be of therapeutic benefit in filarial LE.
In our human cell coculture system, polarization of monocyte-derived MΦs with type 2 cytokines resulted in a MΦ phenotype able to induce LEC proliferation. However, live filarial larvae or their products could also induce an MΦ phenotype without additional type 2 cytokine help. Type 2 or filaria-polarized monocyte-derived MΦs in vitro produced increased secretions of VEGF-A/-C, follistatin, and HGF. Filaria-specific activation of human CD14 + monocytes has been previously demonstrated to induce prolymphangiogenic VEGF-A secretions (9). Thus, local patrolling CD14 + monocyte populations in the lymphatics may also be able to facilitate localized lymphatic dilations in the immediate vicinity of invading larvae in response to larval secretions. This may facilitate larval migrations through lymphatics and would occur prior to initiation of type 2 immunity, resulting in the recruitment of inflammatory monocytes, their differentiation into AAMΦs, and resultant augmented and widespread lymphatic pathology.
Prior clinical research has promoted an antipathological role of 6-week 200 mg/day doxycycline treatment in ameliorating filarial LE pathologies (7,16,29,51). Reduced circulating VEGF-A/-C was observed in these studies, strengthening the hypothesis that chronic lymphatic remodeling supports development and maintenance of filarial LE (7, 16, 29). The mechanism by which nitude of CD4 + T cell immune responses to filarial antigen (20). In our model, local draining LN adaptive immune responses were polarized toward CD4 + T cell IL-4 and IL-13 secretion, suggesting an important role for type 2 sterilizing immune responses in induction of lymphatic dysfunction. We have defined eosinophil coordinated type 2 immune responses as critical to preventing B. malayi larval survival (22,39). Lymphatic remodeling and dysfunction were reduced in SCID mice following filarial infection, demonstrating a requirement for adaptive immunity to induce early lymphatic dysfunction.
A limitation of our study was that while lymphatic pathology was rapidly induced, we did not observe overt LE in immunocompetent mice following a single infection event and up to 16 weeks follow-up. Further, we used a single high-dose infection (100 L3), whereas humans will be naturally exposed repetitively to low doses (typically <10 L3) in so-called trickle infections. Although dilation of B. malayi adult parasitized lymphatics and LE formation has been documented in B. malayi-susceptible T cell-immunodeficient mice (11,13), reactivation of adaptive immunity during chronic infection time courses in aged mice was not scrutinized in these leaky lymphopenic models. Indeed, experimental immune reconstitution triggers a destructive, fibrotic, perilymphangitic pathology with myeloid-rich infiltrates in infected lymphatics coincident with immune-mediated killing of adult parasites (11). Further, in experimental infections of outbred feline and canine natural Brugia hosts, overt LE is associated with leukocytic intralymphatic obstructive thrombi and exacerbated by bacterial or fungal secondary infections (40,41). In a susceptible ferret model of B. malayi infection, 6 trickle-dose inoculations over a 10-week period resulted in overt LE in 1 out of 4 animals tested (12). Thus, we suggest the immediate adaptive immune-dependent lymphatic pathology we detail is an early facet of a complex multifactorial process, likely requiring several chronic infection events within the limb lymphatic network and prime-boosting of type 2 immunity to culminate in pronounced lymphedematous disease.
In nonfilarial LE models, CD4 + T cell depletion reduces lymphatic pathology, while specific neutralization of type 2 cytokines IL-4 and IL-13 ameliorates edematous skin fibrosis (42,43). Confirming the importance of type 2 immunity in filarial lymphatic pathology, IL-4R-deficient mice did not develop significant remodeling and were protected from lymphatic dysfunction after infection. IL-4R deficiency resulted in reductions in multiple circulating lymphangiogenic factors, notably VEGF-C and Ang-2, reduced monocyte/MΦ expansions within parasitized lymphatics, and prevention of MΦ alternative activation. We, and others, have previously described IL-4R-dependent alternative activation of serous cavity tissue MΦ populations in the context of filarial infection (22,44). In oncology, dysregulated, tumor-derived stimuli polarize monocytes and MΦs into tumor-associated phenotypes, possessing similarities to AAMΦs, and resulting in increased tumor angiogenesis and lymphangiogenesis (45). In clinical filariasis, circulating monocytes with features of alternative activation have also been detected (46). We determined that lymphatic-associated monocytes and AAMΦs from parasitized tissues produced elevated VEGF-C, sALK-1, and prolactin, the 3 most upregulated prolymphangiogenic molecules in circulation following filarial infection, demonstrating that this cell lineage is a source of lymph-native activation of monocyte-derived MΦs, with concomitant impairment in MΦ-induced angiogenesis (59). The likely multifaceted mechanisms by which second-generation tetracyclines cause such wide-ranging antilymphangiogenic, antiinflammatory, and immunosuppressive effects on mammalian cells to stymie filarial type 2 lymphatic pathogenesis require further detailed investigations. An assumed mode of doxycycline-mediated antiangiogenic activity in vivo has been via targeted inhibition of matrix metalloproteinases (MMPs) to prevent extracellular matrix degradation necessary for neovascularization (60,61). One alternative, emerging mechanism is that doxycycline suppresses mammalian mitochondrial protein synthesis, thus shifting cellular metabolism toward glycolysis and slowing the cell proliferative rate (62). Finally, a recent study demonstrates that calcium signaling is relevant in VEGF-A-induced angiogenesis (63). Because doxycycline is a known calcium ion chelator, antiangiogenic and more widespread antiproliferative effects of the drug could be mediated by attenuating multiple calcium-dependent, second messenger signaling pathways. Certainly, the T cell antiproliferative activity of doxycycline can be overcome by addition of exogenous calcium (64).
As with current indications in the treatment of rheumatoid arthritis or rosacea (65), we found that the mode of action of second-generation tetracyclines in mediating antipathological efficacy in filariasis is via immunosuppressant/antiinflammatory activities. However, akin to the dual mode of action considered important in the treatment of acne (65), we do not discount that second-generation tetracyclines are also beneficial to filarial LE patients by resolving secondary bacterial infections, preventing ADLA episodes. Lipophilicity and dermal accumulation of second-generation tetracyclines may be important physiochemical features contributing to a long tail of antipathological activities in superficial lymphatics and local sdLNs. Because minocycline is a more lipophilic antibiotic compared with doxycycline (30), it may be a clinically superior treatment for filarial LE, warranting comparative clinical assessment, while newly approved formulations of minocycline (66) for the treatment of skin complaints warrant clinical assessment of antipathological effects in filarial LE patients.
Because sterile postsurgical LE has been clearly linked with inflammation and leukotriene production (67), doxycycline may be of therapeutic benefit in the treatment of nonfilarial LE of inflammatory origin, especially where cellulitis complications contribute to disease etiology.
Potential limitations of the deployment of oral second-generation tetracyclines as antimorbidity therapy for filarial LE include the potential for gastrointestinal side effects, development of photosensitivity, and contraindications during pregnancy and for young children. However, large-scale implementation trials of doxycycline treatment as a cure for filariasis in over 13,000 African participants have determined greater than 90% adherence to treatment and phase II trials have only reported infrequent and generally mild adverse effects during 6-week therapy (68). Large-scale, multicenter trials are currently commencing to evaluate doxycycline as an antimorbidity therapy for filarial LE (69). Future clinical trials should also address dose duration and frequency, comparative efficacy of doxycycline versus minocycline, and whether addition of affordable nonsteroidal antiinflammato-doxycycline mediates antimorbidity effects in filariasis is difficult to determine in the clinic, due to its curative activity via targeting filarial Wolbachia (52), and its broad-spectrum antibiotic properties that reduce secondary skin bacterial infections and cellulitis complications (53). Further, Wolbachia can directly activate classical inflammatory processes upon liberation from filarial tissues (32) and have been identified as mediators of systemic adverse reactions in LF patients after filaricidal treatment (54,55). Therefore, Wolbachia may contribute to filarial LE via triggering classical inflammation (56) and doxycycline may prevent this disease pathway. Upon characterizing a type 2 inflammatory response causal in inducing filarial lymphatic pathology, we exploited our model systems to investigate the mode of action by which second-generation tetracyclines ameliorate filarial lymphatic disease. First, we established that both doxycycline and the related second-generation tetracycline, minocycline, are directly antilymphangiogenic, blocking LEC proliferation in response to VEGF stimuli. These data confirm earlier reports that doxycycline directly modifies VEGF-C-induced LEC proliferation by interrupting phosphorylation of phosphoinositide 3 kinase (PI3K), α-serine/threonine protein kinase (AKT1), and endothelial nitric oxide synthase (eNOS) signaling (57). We also determined that the suppressive effect of doxycycline extends to inhibiting LEC proliferation mediated by IL-4/-13 or filaria-conditioned proangiogenic MΦs. The antiangiogenic pharmacological activity of doxycycline or minocycline achieved in vitro, at 10 or 20 μM, was at or slightly higher than typical clinical peak-plasma concentrations. However, concentrations of doxycycline, following 14-day dosing in the skin, are known to accumulate 3-fold more than measured in circulation (58). This suggests our effective dose levels reflect local concentrations experienced within and surrounding superficial lymphatics.
Antilymphangiogenic activities of doxycycline and minocycline were reproducible in vivo, whereby oral dosing of mice with human bioequivalent regimens (30) significantly reduced the magnitude of lymphatic remodeling and dysfunction induced by filarial infection. We determined that this antipathological mechanism was tetracycline specific and unrelated to broad-spectrum antibiotic or anti-Wolbachia efficacies. Lack of evidence for Wolbachia in lymphatic pathology induction in our larval model probably reflects low Wolbachia titers in infectious stage B. malayi and does not necessarily preclude a role for higher titers of Wolbachia, liberated upon death of more mature filariae in parasitized lymphatics, augmenting LE pathology development in vivo. The skewed, local type 2 inflammation observed in our mouse model also reflects low Wolbachia exposure during initial immune priming, as we previously demonstrated that type 2 T cell polarization by filarial extract becomes modified toward a mixed type 1 and type 2 T cell response by relative abundance of Wolbachia products (32).
Doxycycline modified the type 2 recruited monocyte/AAMΦ pathway of lymphatic pathology at multiple points in vivo. Thus, we demonstrate that doxycycline has wide-ranging immunosuppressive and antiinflammatory activities in modulating filaria-induced type 2 inflammatory lymphangiogenesis. As doxycycline directly perturbed prolymphangiogenic MΦs in response to type 2 or filaria-specific stimuli in vitro, this provides evidence of a specific targeted effect at the level of MΦs. Doxycycline has previously been shown to suppress IL-4/-13-dependent alter-J Clin Invest. 2021;131(5):e140853 https://doi.org/10.1172/JCI140853 EB dermal retention assay. A modified Miles assay was utilized whereby mice were administered s.c. injections of 10 μL 1% EB (MilliporeSigma) w/v in sterile Dulbecco's PBS (DPBS) (Mil-liporeSigma) on top of the infected hind foot. After 20 minutes, mice were euthanized and left hind leg skin excised between the knee and ankle joint, transferred to 1 mL of DPBS, and incubated 20 minutes. Absorbance was read at 620 nm on a Varioskan plate spectrometer (Bio-Rad).
Fluorescence microscopy. Skin samples from C57BL/6J Prox-1 GFP mice were dissected from areas of aberrant lymphatics (equivalent areas used in sham control mice). Lymphatic vessels were visualized using Prox-1 GFP epifluorescence under a fluorescence stereo-dissecting microscope with an eGFP filter (Leica Microsystems). Between 15 and 30 images were taken per mouse, blinded, and lymphatic channels measured for aperture in ImageJ. All image measurements were pooled per mouse to calculate average lymphatic widths.
BmL3 were washed before incubation with 50 μM Alexa Fluor 546 NHS ester (succinimidyl ester) (Thermo Fisher Scientific) in Fluorobrite DMEM (Thermo Fisher Scientific) for 2 hours. C57BL/6J Prox-1 GFP transgenic mice were injected with 400 fluorescent BmL3 as described above. After 3 hours and 1-6 dpi in mice, areas of subcutaneous tissues where lymphatic remodeling occurs were imaged as above (DsRed and eGFP filters).
Lymphoid, lymphatic, splenic, and blood single-cell preparations. Cardiac blood was collected into heparinized tubes (Starstedt), centrifuged, plasma harvested, and stored at -80°C for downstream analyses. In blood immunophenotyping experiments, red blood cells were depleted using RBC lysis buffer (Biolegend), resuspended in DPBS with 5% fetal bovine serum (FBS) and 2 mM EDTA (FACS buffer). Spleens or popliteal, iliac, and subiliac LNs with surrounding lymphatic collecting vessels were collected and a single-cell suspensions made by maceration through a 40-μm cell sieve (MilliporeSigma). Resultant cell suspensions were centrifuged, resuspended in RPMI 1640 or FACS buffer, and enumerated.
Splenocyte and LN cell recall assays. LN cells and splenocytes were plated at 2.5 × 10 5 /well: splenocytes into wells previously coated with 1.25 μg/mL anti-CD3 antibody followed by the addition of 2 μg/mL anti-CD28 antibody (Biolegend). LN cells received no ex vivo stimulation. All cells were incubated for 72 hours at 37°C and 5% CO 2 and subsequent supernatants frozen at -20°C.
Multiplex protein array analysis. Multiplex immunoassays of 25 growth factors or 32 cytokines/chemokines (Mouse Angiogenesis/ Growth Factor/Mouse Cytokine/Chemokine Magnetic Bead Panels, Merck) were undertaken on plasma or restimulated splenocyte/ LN cell cultures, following the manufacturer's protocol. Plates were read on the Bioplex 200 system (Bio-Rad) and data analyzed using Luminex XPONENT software.
Flow cytometry. Single-cell suspensions were FcR blocked before staining with viability dye and specific fluorescently labeled antibodies (Supplemental Table 1), as previously described (22). For intracellular cytokine experiments, sdLN suspensions were stimulated for 5 hours in Cell Stimulation Cocktail (eBioscience), followed by anti-CD4 and intracellular cytokine staining. Data were acquired on an LSRII flow cytometer (BD Biosciences) and analyzed using FlowJo software (BD Biosciences) (Supplemental Figure 4).
FACS and cell secretion assays. Following surface staining, cell populations (Supplemental Figure 4) were sorted using a FACSAria II (BD ry drugs, such as ketoprofen, which is currently undergoing clinical assessment for the treatment of postsurgery LE (70), may be of added benefit, including in contraindicated groups.
In conclusion, our preclinical research establishes the mode of action of second-generation tetracyclines as antimorbidity drugs in the therapy of filarial LE. These findings support the onward clinical evaluation of these affordable, readily available, and safe treatments for LE of filarial origin and potentially for other LE associated with chronic inflammation.
Methods
Study design. Group sizes of animal experiments were determined using appropriate sample size calculations to power a study greater than 80%. Data were pooled from repeat experiments where done. Mice were randomized into infection/intervention groups by ID number. Dosing and interventions were done in a nonblinded manner. Image-based readouts were blinded prior to analysis.
Experimental animals. Laboratory animals were maintained in specific pathogen-free facilities at The Biomedical Services Unit, University of Liverpool. Mongolian gerbils and BALB/c/C57BL/6J IL-4Rα -/-, C57BL/6J Prox-1 GFP , and C57BL/6J TLR6 -/mice were bred in house. Mongolian gerbils were originally purchased from Charles River. BALB/c IL-4Rα -/mice were originally purchased from The Jackson Laboratory. C57BL/6J IL-4rα -/mice were originally gifted by Cecile Benezech (The University of Edinburgh, United Kingdom). FVB/N-Crl:CD1(ICR) Prox-1 GFP mice were provided by Young-Kwon Hong, University of Southern California, before being backcrossed onto the C57BL/6J background for 7 successive generations. C57BL/6J TLR6 -/mice were originally gifted by Shizuo Akira (Osaka University, Japan). Male BALB/c, C57BL/6J WT, and CB.17 SCID mice were purchased from Charles River. All mice were 6-12 weeks old at the start of procedures. Gerbils were infected between 8 and 12 weeks of age. Males were used in this study.
Parasite life cycle and maintenance. B. malayi life cycle was maintained in mosquitoes and Mongolian gerbils as previously described (31). Briefly, microfilariae (mf) from gerbils infected more than 12 weeks were collected via peritoneal catheterization. Purified and enumerated mf were mixed with heparinized human blood to 15-20,000 mf/mL and artificial membrane feeder (Hemotek) fed to female Aedes aegypti mosquitoes. After 14 days, infective BmL3 were collected from infected mosquitoes by crushing and Baermann's filtration.
Leg pathology model experimental infection. Mice were inoculated with 100 BmL3 s.c., split between the top of the left hind foot and caudal to the left knee. Sham-infected mice received equal volumes of sterile RPMI 1640.
Intravital NIR imaging of lymphatics. NIR imaging was adapted from techniques previously described (17). Briefly, anesthetized mice were administered 20-μL s.c. injections of 1 mg/mL ICG (Mil-liporeSigma) onto the top of the left and right hind feet. Lymphatic drainage was monitored using a photodynamic eye (PDE) NIR optical imaging device (Hamamatsu Photonics) to track NIR signals. Mice were imaged from 4 viewpoints: dorsal, ventral, left, and right. Movies (720 × 480 at 60 fps; 3 minutes per mouse) were recorded using an EasyCap DC60 USB Video Capture Card Adapter (Softonic) that converted footage to ImageJ software (NIH). Still images (720 × 480) were used in downstream analyses. For more information see Supplemental Methods.
Enhancement of Plant Growth by Using PGPR for a Sustainable Agriculture: A Review
Biotic and abiotic stresses exert a serious impact on crop productivity throughout the world. An alternative strategy is to introduce tolerant microbes into plants under stress conditions. Intensive research attempts are underway to improve plant growth, enhance tolerance levels against these stresses, and protect plants by using plant growth promoting rhizobacteria (PGPR), which have great potential for sustainable crop production. PGPR play a direct role in maintaining better plant health through nitrogen fixation, phosphate solubilisation, phytohormone production, etc., and an indirect role through siderophore production, antibiotic production, ACC deaminase activity, induced systemic resistance, etc. These microbes provide plants with resistance to stress by enhancing the activity of antioxidant enzymes and other non-enzymatic antioxidants.
Introduction
Nature is embedded with treasures: not only expensive products such as gold, diamonds and minerals, but also priceless assets of even greater value. Beyond the resources essential for survival on this earth, such as air, water and soil, nature also provides things that are equally important for a sustainable future. It is in fact beyond our imagination that even small organisms living in soil can be useful to plants. Groups of beneficial soil microflora dwelling in the rhizosphere and on the surface of plant roots, which promote the overall wellbeing of plants, are categorised as Plant Growth Promoting Rhizobacteria (PGPR). Researchers have studied these microbes for the past 30 years to understand the mechanisms these PGPR employ to support plant growth. Plant-beneficial rhizobacteria may decrease global dependence on hazardous agricultural chemicals which destabilise the agroecosystem. Microbial populations are ubiquitous and present in diverse ecological niches, including extreme environments, in both the lithosphere and hydrosphere, where they can thrive easily and where their metabolic abilities play a significant role in geochemical nutrient cycling (Aeron et al., 2011).
Agriculture is hit hard by both biotic and abiotic factors. Plant pathogens such as bacteria, viruses, fungi and parasites heavily damage yields: the annual agricultural yield losses caused by these pathogens are at least 30% globally (Fisher et al., 2012), and roughly two-thirds of diseased plants are infected by fungi. Improved agricultural land management, greater use of chemicals including fertilizers, judicious and safe use of pesticides and herbicides, more farm mechanization, and greater use of transgenic crops are some of the solutions proposed to boost yield. But these solutions are only effective in the short term because resources are limited: fertilizers adversely affect the environment, farm mechanization is not acceptable to everyone due to its high cost, and the use of transgenic crops is restricted by ethical concerns and resistance breakdown. Thus, we need long-term, safe, sustainable, eco-friendly biological solutions. Expanded use of PGPR is one of the ultimate solutions in our hands which meets all of these criteria. We should praise nature for gifting us such noble creatures.
Rhizosphere and associated communities
Bacterial populations are widely distributed through the soil, and some adhere to plant roots and interact with them. The term 'rhizosphere' was coined by Hiltner (1904), who described it as a zone dominated by root exudates. It was later defined as the portion of the soil specifically affected by plant roots and/or in association with roots, root hairs and plant-produced materials (Andrade et al., 1997). This area covers the soil packed by the roots, extending a few millimetres from the root surface, and can include the plant root epidermal layer (Bringhurst et al., 2001). The definition was further updated to describe the modulation of the root's physical, chemical and biological parameters with respect to growth and activity (Sivasakthi et al., 2014). Bacterial populations in the rhizosphere are 100-1000 times higher than in the rest of the soil. The probability of finding these bacteria is higher in the rhizosphere because they possess a unique ability to alter their metabolic activities and consume root exudates efficiently. Moreover, 15% of the root surface is covered by microbial populations belonging to several bacterial species (Govindasamy et al., 2011; Jha et al., 2010). About 5 to 30% of the plant's photosynthetic product is secreted by roots in the form of different sugars, which in turn are utilized by microbial populations (Glick, 2014). The subsequent metabolic activities of these bacteria in the rhizosphere accelerate mineral nutrient transport and uptake by plant roots (Glick, 1995). The rhizosphere serves as an ecological niche for PGPR; generally, about 2-5% of rhizosphere bacteria are PGPR (Antoun and Prevost, 2006; Jha et al., 2010; Sgroy et al., 2009; Siddikee et al., 2010).
Due to the accumulation of a variety of plant exudates, such as amino acids and sugars, the zone is enriched with nutrients compared with the rest of the soil, providing a source of energy and nutrients for microbes (Gray and Smith, 2005). A range of microorganisms, including bacteria, algae, fungi, protozoa and actinomycetes, colonize the roots of plants.
They live independently or in association with other organisms. A well-known symbiotic association exists between fungi and plant roots (mycorrhizae), which facilitates the plant's absorption of water and nutrients by increasing the root surface area (Nadeem et al., 2014); the microorganisms in turn obtain shelter.
Plant Growth Promoting Rhizobacteria (PGPR)
Bacteria that colonize the root and improve plant growth and yield through the addition of growth factors and hormones are called PGPR (Kloepper and Schroth, 1978). A rhizosphere bacterium is considered a PGPR when it affects the plant in a positive way upon inoculation, thus showing an active characteristic distinct from the existing rhizosphere communities. In 1998, Bashan and Holguin revised the definition because some bacteria demonstrate a positive interaction with the plant even though they are outside the rhizosphere environment. At the Fourth International Congress of Bacterial Plant Pathogens, held in France, the importance of rhizobacteria for plant health was demonstrated by Kloepper and Schroth (1978). PGPR can act as solid tools for sustainable agriculture and can usher in a new era in disease management.
Classification of PGPR
PGPR associate differently with plant root cells: either outside the root in the rhizosphere or on the rhizoplane, confined to the spaces between cells of the root cortex, or inside the roots, particularly in the root cortex. They can thus be grouped into extracellular plant growth promoting rhizobacteria (ePGPR) and intracellular plant growth promoting rhizobacteria (iPGPR), respectively (Martinez-Viveros et al., 2010; Gray and Smith, 2005). Endophytes and Frankia species belong to the iPGPR. Endophytes include a large number of soil bacterial genera such as Allorhizobium, Azorhizobium, Bradyrhizobium, Mesorhizobium and Rhizobium of the family Rhizobiaceae, which are closely associated with the formation of root nodules (Wang and Martinez-Romero, 2000). Among all the recognised genera of PGPR, Bacillus and Pseudomonas predominate (Podile and Kishore, 2006).
Mechanism of PGPR
PGPR affect plant growth in two different ways, indirectly or directly (Figure 1). Indirect mechanisms, as the name suggests, are those that do not affect the plant in a straightforward way and occur outside the plant, while direct mechanisms occur inside the plant and are directly involved in the plant's metabolism (Antoun and Prevost, 2006; Glick, 1995; Siddikee et al., 2010; Vessey, 2003). Biological nitrogen fixation, phosphate solubilization, phytohormone production and siderophore production are some of the direct mechanisms, whereas production of defence enzymes and antibiotics, modulation of plant stress markers, induced systemic resistance (ISR) and competition for the rhizosphere are some of the indirect mechanisms. In the direct mechanisms, the bacteria either produce growth regulators, which are ultimately incorporated into the plant system and thereby affect the balance of plant growth regulators, or they act as a sink for plant-released hormones, inducing plant metabolism and leading to the overall growth of the plant (Glick, 2014; Govindasamy et al., 2011).
Nitrogen fixation
Bacterial strains able to fix atmospheric nitrogen can be classified into two groups. The first act symbiotically (root/legume associations), exhibit host specificity and infect the roots of plants to produce nodules, e.g. Rhizobium strains. The other group are free-living bacteria which do not possess specificity (Oberson et al., 2013). Examples of such free-living nitrogen fixers include Azospirillum, Azotobacter, Burkholderia, Herbaspirillum, Bacillus, and Paenibacillus (Goswami et al., 2015; Heulin et al., 2002; Seldin, 1984; von der Weid et al., 2002). Although these free-living nitrogen fixers are not closely associated with plants, as they do not penetrate the plant roots, they are able to fix nitrogen that improves nitrogen absorption by the plants. This relationship is called non-specific or loose symbiosis. The amount of nitrogen fixed ranges between 20 and 30 kg per hectare per year (Stacey et al., 1992). Azotobacter and Azospirillum are the most widely used genera in agricultural trials; they were first reported in 1902 and remain the most widely used strains to date (Bhattacharya and Jha, 2012). Application of Azotobacter chroococcum and Azospirillum brasilense inoculants in agriculture, especially in cereals, has resulted in significant increases in crop yields (Oberson et al., 2013).
Based on nitrogenase activity, Bacillus azotofixans, Bacillus macerans, and Bacillus polymyxa were identified as nitrogen fixers (Seldin et al., 1984). However, after reclassification, these organisms are now placed in the genus Paenibacillus. Paenibacillus odorifer, Paenibacillus graminis, Paenibacillus peoriae, and Paenibacillus brasilensis have also been described as nitrogen fixers (Heulin et al., 2002; von der Weid et al., 2002). Symbiotic nitrogen fixing bacteria such as rhizobia are closely associated with root hairs. The rhizobia and the Nod factors (lipo-chitin oligosaccharides) interact to change the cell division processes in the root hair cells, resulting in curling of the root hairs. The Nod factors operate within these curled root hairs, leading to the formation of infection threads through which the rhizobia enter leguminous crops (Broughton et al., 2000; William et al., 2000). Rhizobia are reported to possess the nif gene cluster, which codes for nitrogenase, the key enzyme involved in nitrogen fixation. They have been widely used in biofertilizers for the past 20 years and are very important for agriculture (Goswami et al., 2015; Heulin et al., 2002).
Phosphate solubilisation
Despite abundant reserves of phosphorus in soil, plants are unable to take up this phosphorus directly. Plants can only absorb mono- and dibasic phosphate, the soluble forms of phosphate (Jha et al., 2012; Jha and Saraf, 2015). Hence, phosphorus is among the most limiting nutrients for plants after nitrogen.
The key mechanism of phosphate solubilization is the production and secretion of organic acids by microbes, i.e. PGPR (Han et al., 2006). Sugars (glucose, fructose, mannitol and other carbohydrates) from root exudates are metabolized to produce organic acids by these organisms living in the rhizosphere. The acids released by the microorganisms act as good chelators of divalent Ca cations or decrease the pH, which facilitates the release of phosphates from insoluble phosphatic compounds (Pradhan and Shukla, 2006). Further, these microbes can release enzymes, especially phosphatases (Tarafdar et al., 1988; Yadav and Tarafdar, 2003; Aseri et al., 2009) and phytases (Moughal et al., 2014), which enzymatically transform organic P into soluble forms of P through mineralization (Figure 2). Microorganisms have been known to act as chief agents of phosphate solubilization since 1903 (Kucey et al., 1989).
Phytohormone production
Soil microorganisms, especially those residing in the rhizosphere, are associated with the production of phytohormones such as auxins, gibberellins, cytokinins, ethylene, and abscisic acid (Arshad and Frankenberger, 1998). These phytohormones play an important role in plant growth and development processes such as plant cell enlargement, division, and extension in both symbiotic and non-symbiotic associations of roots (Glick, 2014; Patten and Glick, 1996). Auxin impacts the growth and development of the whole plant, but as IAA is produced in the rhizospheric zone, it mainly affects the root system (Salisbury, 1994) by increasing its size and weight, branching number, and the surface area in contact with soil. Consequently, it accelerates nutrient exchange by the roots, which strengthens the nutritional balance and growth of the plant (Ramos-Solano et al., 2008). L-tryptophan is known to be the precursor of IAA. Most PGPR make use of L-tryptophan secreted in root exudates to produce IAA through the L-tryptophan-dependent pathway, although some, such as Azospirillum brasilense, produce more than 90% of their IAA through an L-tryptophan-independent pathway, with the remaining 10% produced from L-tryptophan. However, the exact mechanism and enzymes used for IAA synthesis by this route remain unresolved (Jha and Saraf, 2015; Spaepen et al., 2007).
Azospirillum, Bacillus, Proteus, Klebsiella, Escherichia, Pseudomonas, and Xanthomonas include some of the microorganisms responsible for cytokinin production (Maheshwari et al., 2015). Zeatin and kinetin are two major adenine-type cytokinins, in which the N6 position of adenine is substituted with an isoprenoid or an aromatic side chain, respectively. Zeatin can be synthesized via two different pathways: the tRNA pathway and the adenosine monophosphate (AMP) pathway.
Seed germination, stem elongation, flowering, and fruit setting are some of the functions mediated by gibberellic acid (Hedden and Phillips, 2000). Rhizobium meliloti, Azospirillum sp., Acetobacter diazotrophicus, Herbaspirillum seropedicae and Bacillus sp. are among the important microorganisms capable of producing gibberellic acid.
Siderophore production
Iron is quite abundant in soils but is frequently unavailable to plants or soil microorganisms. Fe3+ is the oxidized form, which reacts to form insoluble oxides and hydroxides such as Fe(OH)3 that are difficult for plants and microorganisms to utilize. Siderophores (sideros = iron, phore = bearer) are low-molecular-weight (<1 kDa), high-affinity iron-chelating compounds which function to deliver iron to the plant cell (Hider and Kong, 2010).
Pseudomonas fluorescens and Pseudomonas aeruginosa release pyochelin- and pyoverdine-type siderophores (Haas and Defago, 2005). These siderophore-producing microorganisms improve Fe uptake and hinder the growth of pathogens (generally fungi) as a result of competition for scavenged iron (Shen et al., 2013).
Defence enzymes
Different strains of PGPR possess the ability to secrete cell wall degrading enzymes such as β-1,3-glucanase, chitinase, cellulase, lipase and protease, which degrade the cell wall of fungi (Chet and Inbar, 1994). Chitinase breaks down chitin, the second most abundant organic molecule and a major component of the fungal cell wall. Another defence enzyme, β-1,3-glucanase, is produced by Bacillus cepacia and destroys the cell walls of R. solani, P. ultimum, and Sclerotium rolfsii (Compant et al., 2005). Mycelia of the fungal pathogens Rhizoctonia solani and Fusarium oxysporum co-inoculated with the effective biocontrol strain Serratia marcescens B2 show altered hyphal proliferation, resulting in swelling, curling or bursting of the hyphal cells (Someya et al., 2000).
ACC (1-aminocyclopropane-1-carboxylic acid) deaminase activity
Ethylene's major function is ripening, but its overproduction under stress conditions (Abeles et al., 1992; Arshad and Frankenberger, 2002; Etesami et al., 2015; Jha and Saraf, 2015) has an inhibitory effect on root growth. To combat this situation, PGPR exhibit an interesting phenomenon, ACC deaminase activity, which regulates this important hormone and thereby modulates plant growth and development (Arshad and Frankenberger, 2002; Glick, 2005). Plants convert SAM (S-adenosylmethionine) to ACC via the enzyme ACC synthase, which is activated through the production of IAA. The ACC released in root exudates is taken up by microorganisms as a source of nitrogen and is converted into ammonia and α-ketobutyrate by bacterial ACC deaminase, which checks the production of ethylene. In this way, ACC-deaminase-producing PGPR act as soldiers against the adverse effects of ethylene under stress conditions (Glick, 2014). Microbes performing ACC deaminase activity bind non-specifically to a wide range of plant surfaces compared with those having less ACC deaminase activity (Glick, 2005). ACC deaminase is possessed by a range of microbes, including gram-negative bacteria (Babalola et al., 2003), gram-positive bacteria (Belimov et al., 2001; Ghosh et al., 2003) and rhizobia (Ma et al., 2003). PGPR such as Azospirillum lipoferum (Blaha et al., 2006), Bacillus (Belimov et al., 2001), Pseudomonas (Belimov et al., 2001; Blaha et al., 2006; Hontzeas et al., 2004), Ralstonia solanacearum (Blaha et al., 2006) and Rhizobium (Ma et al., 2003; Uchiumi et al., 2004) are actively involved in ACC deaminase activity. These ACC-deaminase-containing PGPR are fascinating researchers, who are exploiting them at the molecular level through genetic manipulation (Belimov et al., 2002; Safronova et al., 2006; Sergeeva et al., 2006), thus creating a vision to utilize them more precisely. PGPR also produce a wide range of low-molecular-weight metabolites with antifungal activity. Some pseudomonads can synthesize hydrogen cyanide, to which these pseudomonads are themselves resistant, a metabolite that has been linked to the ability of those strains to inhibit some pathogenic fungi.
Induced Systemic Resistance
Non-pathogenic PGPR activate induced systemic resistance, which operates against several pathogens simultaneously, thus providing resistance to a wide range of pathogens (Figure 3). Rhizobacteria in the plant roots produce a signal which spreads systemically within the plant and increases the defensive capacity of distant tissues against subsequent infection by pathogens. Pseudomonas and Bacillus spp. are the most studied rhizobacteria that trigger ISR (Kloepper et al., 2006).
In conclusion, sustainable agriculture is a pressing global need owing to the unfavourable impact of the synthetic substances used in farming. In the present situation, using PGPR in agriculture is one of the most appropriate choices for promoting plant development, mitigating the various stresses experienced by plants, and reducing the use of synthetic fertilizers and pesticides. The rhizosphere is a vast reservoir of organisms in which PGPR are most commonly found, and these are involved in the overall wellbeing of the plant; plant root exudates generally support their root-colonizing activities. For this reason, PGPR are considered 'a gift of nature for a bright future'. PGPR and their interactions with plants are already exploited commercially (Podile and Kishore, 2006) and hold huge future scope for sustainable agriculture. These noble and beneficial organisms have been introduced into several crops, such as maize, wheat, oat, barley, peas, canola, soya, potatoes, tomatoes, lentils, radicchio and cucumber (Gray and Smith, 2005). Thus, PGPR offer an excellent alternative to chemicals and can maintain or even increase crop yields, which is the need of the hour.
Strategic model reduction by analysing model sloppiness: a case study in coral calcification
It can be difficult to identify ways to reduce the complexity of large models whilst maintaining predictive power, particularly where there are hidden parameter interdependencies. Here, we demonstrate that the analysis of model sloppiness can be an invaluable new tool for strategically simplifying complex models. Such an analysis identifies parameter combinations which strongly and/or weakly inform model behaviours, yet the approach has not previously been used to inform model reduction. Using a case study on a coral calcification model calibrated to experimental data, we show how the analysis of model sloppiness can strategically inform model simplifications which maintain predictive power. Additionally, when comparing various approaches to analysing sloppiness, we find that Bayesian methods can be advantageous when unambiguous identification of the best-fit model parameters is a challenge for standard optimisation procedures.
Introduction
Mathematical models are used to understand complex biological and ecological systems, and to make predictions about system behaviours even in extreme conditions (Getz et al., 2018;Jeong et al., 2018). These models ideally include as much of the expected system dynamics as possible in order to gain mechanistic exploratory power (Snowden et al., 2017). However, this constructionist approach can lead to large, complex models, which can become problematic for various reasons. Complexity can introduce practical issues -such as difficulty implementing, calibrating, solving, and interpreting models (Hong et al., 2017) -as well as resulting in overfitting, poor predictive performance, uncertainty in estimated parameter values and potentially becoming unscientific by being harder to disprove. Models should aim to balance being simple enough to capture general trends without over-fitting, but complex enough to capture the key features of a dataset -this is known as the bias-variance tradeoff (Geman et al., 1992).
Development of strategies to reduce a model's complexity without loss of explanatory power is a research area of increasing interest (Hjelkrem et al., 2017; Jeong et al., 2018; Snowden et al., 2017; Transtrum and Qiu, 2014). However, it can be challenging to identify appropriate model reductions for complex biological or ecological models, because there are many possible ways to reduce a model (Cox et al., 2006; Jeong et al., 2018). Moreover, parameter interdependencies can make it difficult to determine the informativeness of individual model components on the outputs (Gibbons et al., 2010; Transtrum and Qiu, 2014). Hence, systematic model reduction methods can be helpful for maintaining predictive power when simplifying a model.
There are a variety of existing methods in the literature which propose ways of simplifying an underlying model. For example, projection-based methods aim to reduce the degrees of freedom of a model (Schilders et al., 2008). Systematically removing parameters from models is challenging (Transtrum and Qiu, 2014), so mechanistic-focused methods commonly fix parameter values or state variables (Cox et al., 2006;Crout et al., 2009;Elevitch and Johnson, 2020;Hjelkrem et al., 2017;Lawrie and Hearne, 2007) or lump them together (Huang et al., 2005;Liao and Lightfoot, 1988;Pepiot et al., 2019;Snowden et al., 2017) to make models more efficient and easier to calibrate. Sensitivity analysis methods are typically used to identify the importance of variations in parameter values on model outputs (Mara et al., 2017;Saltelli et al., 2004), and can inform model reductions in a factor-fixing setting (Cox et al., 2006;Hjelkrem et al., 2017;Hsieh et al., 2018;Van Werkhoven et al., 2009). Sensitivity analyses used for model reduction typically focus on the sensitivity of model outputs to changes in parameter values. However, we may instead be interested in which parameters are constrained by an available dataset. Additionally, the effects of changing model parameters on outputs often depend greatly on the assumed values of other parameters; hence we need to consider the effect of parameters in combination on model behaviours, rather than individuals (Brown et al., 2004;Brown and Sethna, 2003;Gutenkunst et al., 2007;Monsalve-Bravo et al., 2022;Transtrum et al., 2011;Transtrum and Qiu, 2014). This paper presents a new approach to strategic model reduction based on an analysis of model sloppiness. This type of sensitivity analysis captures the sensitivity of model outputs informed by a dataset; it looks at model-data fit sensitivities, rather than just model sensitivities. The analysis of model sloppiness draws on dimension reduction techniques to identify hidden parameter interdependencies (Transtrum et al., 2011(Transtrum et al., , 2015. Such parameter interdependencies, or parameter combinations, are identified by analysing the curvature of the surface which describes how the model-data fit depends on the model parameters (Brown et al., 2004;Brown and Sethna, 2003;Monsalve-Bravo et al., 2022). Consequently, parameter combinations which strongly (or weakly) influence model predictions can be revealed by identifying directions in parameter space which most (or least) influence model outputs. The analysis accounts for individual parameters acting together or against each other (compensating effects) and identifies sensitive and insensitive parameter combinations for the model-data fit (Brown et al., 2004;Brown and Sethna, 2003;Gutenkunst et al., 2007;Monsalve-Bravo et al., 2022;Transtrum et al., 2011;Transtrum and Qiu, 2014). Hence, an analysis of model sloppiness could be used to identify and remove model mechanisms that only weakly impact predictions (or are weakly informed by the data), whilst accounting for hidden compensatory effects between individual model parameters.
The concept of model sloppiness in model reduction methods has previously been explored by Transtrum and Qiu (2014), who proposed the manifold boundary approximation method (MBAM). The MBAM uses an information theory-based approach whereby a parameter-independent geometric interpretation of the model is used to systematically reduce the effective degrees of freedom (Transtrum and Qiu, 2014). However, alternative methods have been proposed for dimensionality reduction; for example, the active subspace method (Constantine et al., 2016) similarly captures model-data fit sensitivities and can be used for individual parameter rankings through activity scores (Constantine and Diaz, 2017). More recently, Elevitch and Johnson (2020) used a spectral analysis of the Hessian matrix, akin to the non-Bayesian analysis of model sloppiness (Brown et al., 2004), to quantitatively rank parameter importance and thus determine which parameters should be estimated or fixed, rather than simplifying the model structure.
In contrast to these methods, our approach is based solely on the analysis of a model's sloppiness. Our method uses this analysis to identify insensitive groups of parameters which represent processes or mechanisms within a model, whilst accounting for the interdependencies between parameters. Thus, if model outputs are insensitive to changes in model parameters associated with a certain mechanism, such a mechanism is identified to have little effect on the overall model predictions (model-data fit). Using this sensitivity analysis, insensitive mechanisms are thus removed from the model (rather than being fixed or lumped) to produce a conceptually simpler model (Transtrum and Qiu, 2014) which maintains its predictive capability. As a result, the model remains expressed in terms of the parameters of interest and preserves mechanistic interpretability. Additionally, the method we propose can take advantage of Bayesian sensitivity matrices described recently elsewhere (Monsalve-Bravo et al., 2022).
In this work, we showcase the potential for model sloppiness to inform strategic model reductions using a case study on a complex physiological model predicting coral calcification rates. We also use this case study to demonstrate that both Bayesian and non-Bayesian approaches to analysing model sloppiness (Brown and Sethna, 2003;Monsalve-Bravo et al., 2022) may be suitable for model reduction, although Bayesian approaches may be advantageous when the best-fit parameter values cannot be easily identified using standard search procedures (e.g. where the likelihood surface does not have a well defined peak).
Model calibration
Within complex models, there are often parameters which cannot be measured directly and must instead be estimated through a model-data calibration process. To analyse the model sloppiness, these unknown parameter values must first be estimated using either a classical (frequentist) calibration process to obtain a single-point estimate, or a Bayesian model-data calibration to obtain probabilistic distributions for parameters.
Single point estimates of unknown parameters can be obtained through maximum likelihood estimation (MLE), which is a common frequentist approach used for model calibration (Jackson et al., 2000). More specifically, unknown model parameters θ are estimated using observed data y by maximising an appropriately chosen likelihood function f(y|θ). Under the common assumptions that measurement errors follow a Gaussian distribution of constant standard deviation, and that observations are independent, the likelihood that a set of data y is observed with a mean given by the model y_model(θ) and a standard deviation of σ can be defined as

f(y|θ) = ∏_{i=1}^{n_d} (2πσ²)^(−1/2) exp( −[y_i − y_model,i(θ)]² / (2σ²) ),    (1)

where y = {y_1, y_2, . . . , y_{n_d}} is a dataset of n_d independent observations, y_i is the ith observation in the dataset, f(y|θ) is the likelihood function, and y_model,i(θ) is the ith model prediction of the data given the conditions of observation y_i and parameters θ. Formally, the MLE (θ_MLE) is obtained by maximising the likelihood function, such that

θ_MLE = arg max_θ f(y|θ).    (2)

Hence, θ_MLE represents the "best-fit" parameter values under the assumed statistical model structure and observed dataset. This process is represented as the left (orange) branch in Figure 1.
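As a concrete illustration of Equations (1) and (2), the following Python sketch fits a hypothetical two-parameter exponential-decay model to synthetic data by minimising the negative Gaussian log-likelihood. The model function, parameter values and data below are invented purely for illustration and are not those of the coral calcification case study.

import numpy as np
from scipy.optimize import minimize

# Hypothetical two-parameter decay model, used only for illustration
def y_model(theta, t):
    return theta[0] * np.exp(-theta[1] * t)

# Negative of the Gaussian iid log-likelihood in Equation (1), up to a constant;
# params = [theta_1, theta_2, log(sigma)], with log(sigma) keeping sigma positive
def neg_log_likelihood(params, t, y):
    theta, sigma = params[:2], np.exp(params[2])
    resid = y - y_model(theta, t)
    return 0.5 * np.sum(resid**2) / sigma**2 + y.size * np.log(sigma)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 30)
y = y_model([2.0, 0.7], t) + rng.normal(0.0, 0.1, t.size)   # synthetic data

# Equation (2): theta_MLE maximises f(y|theta), i.e. minimises -log f(y|theta)
fit = minimize(neg_log_likelihood, x0=[1.0, 1.0, 0.0], args=(t, y))
theta_mle = fit.x[:2]

The resulting fit object holds θ_MLE together with the estimated noise scale; the same pattern applies to any model for which y_model(θ) can be evaluated.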
Figure 1: A conceptual figure, highlighting the difference between a model calibrated via MLE (left) and Bayesian inference (right). MLE combines experimental data with a model structure to produce a frequentist model. Bayesian inference also incorporates prior information about the model parameters to produce a probabilistic model that encapsulates parameter uncertainty.
However, this frequentist approach does not account for prior beliefs about the parameter values (Uusitalo et al., 2015), and also ignores parameter values θ ≠ θ_MLE which may also plausibly represent the data. To analyse the effect of the parameters on the behaviour of the system, the full range of values that a parameter could potentially take should be considered (Jakeman et al., 2006). As an alternative to MLE, Bayesian inference can be used for model-data calibration to obtain a probability distribution for θ that accounts for prior parameter information, an assumed model structure and the data (Girolami, 2008). Such a distribution is called the posterior distribution as it represents a probability distribution for θ after considering the data.
Using any known information about the parameters, a "prior" distribution π(θ) is placed on θ. To obtain the posterior distribution, the prior distribution is multiplied by the likelihood function via Bayes' Theorem,

π(θ|y) ∝ f(y|θ) π(θ),    (3)

where π(θ|y) is the posterior distribution of the parameters, f(y|θ) is the likelihood function and π(θ) is the prior distribution. The posterior is a distribution of potential parameter values informed by the data and prior information, such that it quantifies parametric uncertainty (Girolami, 2008). In this paper, a sequential Monte Carlo (SMC) algorithm (Chopin, 2002; Del Moral et al., 2006) was used to sample from the posterior distribution, providing a representative sample of plausible parameter values given the available prior information and dataset (see Pettitt (2011) or Jeremiah et al. (2012) for more information on SMC sampling). Bayesian inference for model calibration is depicted as the right (blue) branch of Figure 1.
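To illustrate the Bayesian branch of Figure 1, the sketch below reuses the hypothetical model and negative log-likelihood from the previous sketch and draws posterior samples with a simple random-walk Metropolis sampler. This is a deliberately minimal stand-in for the SMC algorithm used in this work, and the vague Gaussian prior is an assumption of the example only.

# Assumed vague, independent Gaussian prior on all parameters (example choice only)
def log_prior(params):
    return -0.5 * np.sum((params / 10.0)**2)

# log pi(theta|y) = log f(y|theta) + log pi(theta), up to a normalising constant
def log_posterior(params, t, y):
    return -neg_log_likelihood(params, t, y) + log_prior(params)

# Random-walk Metropolis: a deliberately simple stand-in for an SMC sampler
def random_walk_metropolis(log_post, x0, n_steps, step_scale, rng):
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_steps, x.size))
    for k in range(n_steps):
        proposal = x + rng.normal(0.0, step_scale, x.size)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject step
            x, lp = proposal, lp_prop
        chain[k] = x
    return chain

chain = random_walk_metropolis(lambda p: log_posterior(p, t, y),
                               x0=fit.x, n_steps=20000, step_scale=0.05, rng=rng)
samples = chain[5000:]   # discard burn-in; rows are draws of [theta_1, theta_2, log(sigma)]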
Analysing model sloppiness
The analysis of model sloppiness is a type of sensitivity analysis which considers the model sensitivity to all parameters informed by a dataset (Monsalve-Bravo et al., 2022). As with global sensitivity analysis approaches (Geris and Gomez-Cabrero, 2016;Marino et al., 2008;Saltelli et al., 1993), the analysis of model sloppiness accounts for the model-data fit sensitivities of parameter combinations across all parameters. Thus, this approach can mathematically characterise the parameter combinations that the model-data fit is most sensitive to (Brown and Sethna, 2003;Transtrum et al., 2015).
A sloppy model refers to a model where most of the model behaviour is captured through a few tightly constrained (stiff) parameter combinations (stiff eigenparameters) (Brown and Sethna, 2003), which are highly influential on model predictions of the data (Transtrum et al., 2015), while the model remains insensitive to many loosely constrained (sloppy) parameter combinations (sloppy eigenparameters). Stiff and sloppy parameter combinations are identified through the eigendecomposition of a sensitivity matrix (Gutenkunst et al., 2007). While there are various approaches to the construction of the sensitivity matrix - some of these are summarised in Section 2.3 - we focus here on how to obtain stiff/sloppy eigenparameters once this matrix is successfully computed.
Each eigenvector v_j of the sensitivity matrix indicates a key direction in parameter space that characterises the sensitivity of the model-data fit to changes in multiple parameters simultaneously. We can express each key direction in parameter space as a specified combination of parameter values - known as an eigenparameter. Changing an eigenparameter's value is equivalent to moving the entire set of (original) model parameters along the direction of the eigenvector associated with that eigenparameter. As eigenvectors of the sensitivity matrix are by definition mutually orthogonal, if model parameters are logarithmically transformed prior to calculation of the sensitivity matrix (which is a common practice when analysing sloppiness), the logarithm of each eigenparameter θ̂_j of this matrix can be expressed as a linear combination of natural logarithms of model parameters, following Brown et al. (2004),

θ̂_j = ∏_{i=1}^{n_p} θ_i^{v_{j,i}},  i.e.  log θ̂_j = ∑_{i=1}^{n_p} v_{j,i} log θ_i,    (4)

where v_j = [v_{j,1}, v_{j,2}, ..., v_{j,n_p}] is the jth eigenvector of the sensitivity matrix, n_p is the number of model parameters, and θ_i is the ith parameter in the model. It should be noted that expressing eigenparameters θ̂_j as a product of all model parameters raised to the different exponents, as shown in Equation (4), is only possible if the parameters are scaled by their logarithm when estimating the sensitivity matrix. Additionally, standard renormalisations can be applied such that the exponents v_{j,i} in Equation (4) are rescaled to be between −1 and 1.
In Equation (4), each eigenparameter θ̂_j has a corresponding eigenvalue λ_j which attributes a magnitude to the direction of the eigenparameter. The largest eigenvalue (λ_1) corresponds to the stiffest (most sensitive) eigenparameter, and the smallest eigenvalue (λ_{n_p}) corresponds to the sloppiest (least sensitive) eigenparameter (Transtrum et al., 2015). Therefore, comparing the eigenvalues of all eigenparameters indicates the relative impact that each parameter combination θ̂_j has on the model-data fit. In our implementation of this approach, we scale all eigenvalues λ_j by dividing them by the largest eigenvalue λ_1 to indicate the relative importance of the eigenparameters. Thus, 0 < λ_j/λ_1 ≤ 1 for all j = 1, . . . , n_p.
The sloppiness analysis results in a list of n_p weighted parameter combinations θ̂_j, ranked by their influence on the model-data fit via the scaled eigenvalues λ_j/λ_1. This process is summarised in Algorithm 1 (see also Monsalve-Bravo et al. (2022) for an overview of model sloppiness).
Algorithm 1: Process used for analysing the model sloppiness 1. Perform model calibration to obtain estimate(s) of parameters θ.
2. Calculate sensitivity matrix S with respect to the log-parameters, φ i = log θ i .
3. Find all eigenvalues λ j and eigenvectors v j , j = 1, . . . , n p of the sensitivity matrix S.
4. Order the eigenvalue-eigenvector pairs from the largest to the smallest eigenvalue magnitude.

5. Scale each eigenvalue relative to the largest, λ_j/λ_1.

6. Renormalise each eigenvector by its largest-magnitude element, so that all eigenvector elements lie between −1 and 1.

7. Report the eigenparameters using Equation (4) and the renormalised eigenvectors obtained from Step 6, ordered in importance by the magnitude of the corresponding eigenvalues.
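As a minimal MATLAB sketch of Steps 3-7 (assuming the sensitivity matrix S for the log-parameters has already been computed; construction of S is covered in Section 2.3):

% Sketch of Steps 3-7 of Algorithm 1 for a precomputed symmetric matrix S.
[V, D] = eig(S);                             % Step 3: eigendecomposition
[lambda, order] = sort(diag(D), 'descend');  % Step 4: order by eigenvalue
V = V(:, order);
lambdaScaled = lambda / lambda(1);           % Step 5: relative eigenvalues
for j = 1:size(V, 2)                         % Step 6: renormalise eigenvectors
    [~, k] = max(abs(V(:, j)));              % largest-magnitude element
    V(:, j) = V(:, j) / V(k, j);             % exponents now lie in [-1, 1]
end
% Step 7: column j of V holds the exponents v_{j,i} of eigenparameter
% theta_hat_j = prod_i theta_i^(v_{j,i}), ranked by lambdaScaled(j).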
Sensitivity matrices
The key quantity required for analysing sloppiness of a model fitted to data is the sensitivity matrix S. This is a square symmetric matrix of size n_p × n_p, in which n_p is the number of estimated model parameters, excluding parameters that represent measurement error (e.g. the standard deviation σ in Equation (1)), as the latter can yield eigenparameters that are trivial or difficult to interpret (Monsalve-Bravo et al., 2022). There are various approaches to calculating the sensitivity matrix (see Monsalve-Bravo et al. (2022) for an overview). Although based on similar dimension-reduction techniques (e.g. the posterior covariance, likelihood-informed subspace, or active subspace methods), each sensitivity matrix considers different sources of information about the model parameters and their interdependencies. For example, the chosen sensitivity matrix may acknowledge or aim to exclude prior beliefs about the model parameters when identifying key parameter interdependencies (the posterior covariance method and the likelihood-informed subspace method, respectively).
In this paper, three sensitivity matrices (Table 1) were used to explore model reduction. Firstly, the Hessian matrix of the log-likelihood is used to capture the model-data fit around one point in parameter space, based on the likelihood surface (Brown and Sethna, 2003). Secondly, the posterior covariance is a variance-based method which considers model-data fit sensitivities over the full posterior parameter space (Brown and Sethna, 2003; Monsalve-Bravo et al., 2022). Thirdly, the likelihood-informed subspace (LIS) method captures the model-data sensitivities of the posterior sample relative to the prior distribution (Cui et al., 2014), so it can be used in comparison with the posterior covariance method to identify the informativeness of the prior distribution (Monsalve-Bravo et al., 2022). We chose these three sensitivity matrices because the corresponding analyses of sloppiness can provide distinct information, as documented recently for multiple simulation problems (Monsalve-Bravo et al., 2022). However, other sensitivity matrices could also be used, such as the Levenberg-Marquardt Hessian (Brown and Sethna, 2003; Gutenkunst et al., 2007), the matrix arising from the active subspace method (Constantine et al., 2016), or a likelihood-free approximation of the LIS (Beskos et al., 2018).
Sensitivity matrices are often calculated using logarithmically-scaled parameter values (Brown et al., 2004). This rescaling enforces positivity constraints on the model parameters, helps prevent scaling issues between parameters whose values span different orders of magnitude, and allows each eigenparameter to be expressed as a product rather than as a sum, as in Equation (4) (Monsalve-Bravo et al., 2022). Here, we denote the logarithmically-rescaled parameters as φ_i = log θ_i, for all i = 1, . . . , n_p parameters (see Step 2 of Algorithm 1).
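A brief illustration of why this rescaling matters: by the chain rule, derivatives with respect to φ_i = log θ_i equal the θ-derivatives multiplied by θ_i, so parameters of very different magnitudes contribute on a comparable scale (toy values below, for illustration only):

% Toy illustration of the log rescaling phi = log(theta).
theta     = [1e-3, 5, 2e4];          % parameters spanning orders of magnitude
gradTheta = [400, 0.3, 1e-5];        % toy gradient with respect to theta
phi       = log(theta);              % log-parameters used in Step 2
gradPhi   = theta .* gradTheta       % chain rule: d/dphi_i = theta_i * d/dtheta_i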
Hessian at the MLE. The Hessian matrix of the log-likelihood, evaluated at a maximum likelihood estimate. Captures model-data fit sensitivities locally, around one point in parameter space on the likelihood surface.

Posterior covariance. A variance-based matrix constructed from the covariance of the posterior samples. Captures model-data fit sensitivities over the full posterior parameter space.

Likelihood-informed subspace (LIS). S_L = (1/M) Σ_{m=1}^{M} L^T H(φ_m) L, where {φ_m}, m = 1, ..., M, is the set of M posterior samples, H(φ_m) is the Hessian matrix evaluated at φ_m, and L is the Cholesky factor of the covariance matrix Ω of the prior distribution π(φ), such that Ω = LL^T. Estimates model-data fit sensitivities by comparing where the data is most informative relative to the prior distribution; compared against the posterior covariance method, it can identify how informative the prior distribution is on the posterior distribution.

Table 1: Three methods of constructing the sensitivity matrix for an analysis of model sloppiness. Each sensitivity matrix captures different features of the model-data fit. For the Bayesian sensitivity matrices, it is assumed here that the posterior distribution is approximated by a sufficiently large number M of equally weighted posterior samples.
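As an indication of how the LIS matrix above might be assembled in practice, the sketch below averages prior-preconditioned Hessians over the posterior samples; hessianLogLik is a hypothetical stand-in for a numerical Hessian of the log-likelihood in log-parameter space, and Omega is the prior covariance of φ:

% Sketch of the LIS sensitivity matrix from M posterior samples.
L = chol(Omega, 'lower');                  % Cholesky factor, Omega = L*L'
[M, np] = size(phiSamples);                % phiSamples: M x n_p matrix
S_L = zeros(np);
for m = 1:M
    H = hessianLogLik(phiSamples(m, :));   % Hessian at the mth sample
    S_L = S_L + L' * H * L;                % prior-preconditioned Hessian
end
S_L = S_L / M;                             % Monte Carlo average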
Model reduction
Model reduction aims to simplify the model whilst minimising the loss of predictive power of the model (Jeong et al., 2018;Snowden et al., 2017). Given that model sloppiness can identify sensitive parameter combinations, we propose that model reduction can be informed by considering the removal of mechanisms that contribute negligibly to these sensitive parameter combinations. This approach is similar, although not equivalent, to factor-fixing of sets of parameters from a variance-based sensitivity analysis (Saltelli et al., 2004).
Previous work has shown that analysing model sloppiness can reduce the number of model parameters by iteratively removing the sloppiest eigenparameter and simultaneously adapting the model with limiting approximations (Transtrum and Qiu, 2014). Here, we instead investigate the possibility that, if one or more parameters which together characterise an entire process or mechanism within the model contribute little to the stiffest eigenparameters, this process or mechanism may contribute very little to the model's ability to predict the data that was used for its calibration. Hence, this analysis can identify which mechanisms the model-data fit is insensitive to, similar to variance-based sensitivity analyses, such as Sobol's indices (Sobol, 1993), which can analyse sets of parameters or model structures (Mara et al., 2017). These identified mechanisms can potentially be removed from the model with little effect on the model's predictive power.
We suggest that a general quantitative method for selecting the most insensitive model mechanism to be removed would not be appropriate for all models. Instead, the analysis of model sloppiness is used as a source of information for guiding model reductions, paired with an understanding of the model and data being considered. We also recommend quantitatively comparing the original and reduced models using tools such as the model evidence, Bayes factors, the Akaike information criterion or the Bayesian information criterion (Tredennick et al., 2021). The full process of model reduction informed by the analysis of model sloppiness that we propose and investigate here is depicted in Figure 2.
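For instance, the simpler of these criteria can be computed directly from each model's maximised log-likelihood; a generic sketch (toy numbers for illustration only):

% Sketch: information criteria for comparing candidate models, given the
% maximised log-likelihood logLmax, parameter count k and data size n.
AIC = @(logLmax, k)    -2*logLmax + 2*k;
BIC = @(logLmax, k, n) -2*logLmax + k*log(n);
% Example comparison: a 20-parameter model versus a 12-parameter reduction,
% with toy maximised log-likelihoods and n = 16 data points.
deltaBIC = BIC(-20.1, 12, 16) - BIC(-19.8, 20, 16)  % negative favours reduction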
Figure 2: A conceptual diagram of the model reduction process informed by the analysis of model sloppiness. Firstly, the model is calibrated (Section 2.1) using the Bayesian and/or non-Bayesian methods depicted in Figure 1. The second step is an analysis of model sloppiness (Section 2.2) to identify model mechanisms which weakly inform model predictions. Thirdly, the insensitive mechanisms identified in the analysis of model sloppiness are removed from the model and the simplified models are calibrated to the data. Lastly, the predictive power of the reduced model(s) can be assessed, e.g. via goodness-of-fit and/or model selection metrics, to determine the best model for the application.
3 Case study
Coral calcification model
To demonstrate the use of the analysis of model sloppiness as a method for strategic model reduction, we applied this method to a data-calibrated model predicting coral calcification rates.
Calcification rates are a common metric for measuring the health of coral reefs (Erez et al., 2011). The process of coral calcification involves the coral laying down its skeleton, resulting in the spatial extension of coral reef structures (Andersson and Gledhill, 2013). This process is vital for the entire ecosystem, because it forms a habitat for a diverse range of marine life as well as providing a structural framework for barrier reefs (Hoegh-Guldberg et al., 2017).
The model and data used for this case study are reported by ; this is the most recent and comprehensive model for coral reef calcification rates. The model predictions of calcification rates are obtained from the steady-state solution of a system of nonlinear ordinary differential equations (ODEs), which together simulate the chemical composition of relevant molecules and ions throughout various physiological compartments of a coral polyp (Figure 3). There are eight mechanisms within the model which represent different chemical processes and reactions through coacting and counteracting flux terms (Table 2); hence, some of these mechanisms could potentially be removed. Further description of the model is provided in Supplementary Material, Section S.1. The 20 unknown parameters of the model (see Supplementary Material Section S.2 for details) were calibrated to an experimental dataset containing 16 data points obtained from Rodolfo-Metalpa et al. (2010). Each data point consists of paired measurements of calcification rates and the environmental conditions under which this rate was measured (data shown in Table 1 of ).

Table 2: The eight mechanisms of the coral calcification model, including the Ca-ATPase pump (the active transport of calcium ions through the aboral tissue, driven by ATP) and the BAT pump (the active transport of bicarbonate anions through the aboral tissue). Note that the first four mechanisms cannot be removed because they are the rate of interest we wish to output (net calcification), data inputs with no associated parameters (gross photosynthesis and respiration), and a mechanism critical for the model's original purpose of simulating coral responses to ocean acidification (seawater-coelenteron diffusion).
Figure 3: Conceptual model of the chemical processes and reactions within a coral polyp, proposed by . Each of the arrows in this diagram represents a mechanism which affects the chemical species concentrations in the model. This model predicts the calcification reaction rate in the extracellular calcifying medium (ECM) based on environmental conditions.
Model calibration of the coral calcification model
The coral calcification model was calibrated to the data (see Table 1 of ) using both MLE and Bayesian inference methods. The likelihood function was defined based on Gaussian errors as in Equation (1), where the data were assumed to be normally distributed with a mean calcification rate according to the model proposed by , and a constant but unknown standard deviation σ.
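A sketch of such a Gaussian log-likelihood is given below; modelRate is a hypothetical stand-in for the steady-state ODE solution mapping parameters theta and environmental conditions x to predicted calcification rates:

function ll = logLikelihood(theta, sigma, x, y)
    % Gaussian log-likelihood as in Equation (1): observations y are
    % normally distributed about model predictions with std deviation sigma.
    mu = modelRate(theta, x);            % hypothetical model prediction
    n  = numel(y);
    ll = -n/2 * log(2*pi*sigma^2) - sum((y - mu).^2) / (2*sigma^2);
end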
In cases where the model-data fit surface is complex, local optima can misguide optimisation algorithms such that they do not converge on the global MLE (Transtrum et al., 2011). In the present case study, the likelihood function was high-dimensional due to its dependence on 20 model parameters and had no clearly defined peak, hence the global MLE was difficult to obtain. Instead, 100 local MLEs were obtained and the five with the highest likelihood values were used in further analysis. These local maxima were identified using a gradient-based nonlinear function minimisation tool (Matlab's fmincon function, described in MathWorks, 2021) initialised at various search locations. Figure 4 shows 25 of the identified local maxima with the highest likelihood values (black vertical lines), which are spread across many parameter values, indicating the complexity of the high-dimensional likelihood surface. These distinct parameter estimates each have similarly high likelihoods, and this goodness-of-fit is reflected by their predictions of calcification rates when compared to the observations (black asterisks in Figure S1 of Supplementary Material Section S.3). Only one point estimate (the MLE) is needed for the analysis of model sloppiness if the likelihood surface has a well-defined peak. However, when the likelihood surface is multi-modal and/or flat in certain directions (as is common in sloppy models) it can be difficult to identify the global MLE. Thus, to analyse the reproducibility of the results, we evaluated the Hessian matrix at five different sets of parameter values: those that yielded the highest values of the likelihood function (orange vertical lines in Figure 4, and orange asterisks in Figure S1 of Supplementary Material Section S.3).
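The multi-start search can be sketched as follows (illustrative only, not the authors' code; negLL is the negative log-likelihood handle and lb, ub are the prior bounds, with fmincon requiring the Optimization Toolbox):

% Sketch of a multi-start search for local MLEs.
nStarts = 100;
np = numel(lb);
estimates = zeros(nStarts, np);
values = -inf(nStarts, 1);
for k = 1:nStarts
    theta0 = lb + rand(1, np) .* (ub - lb);  % random initial search point
    [thetaHat, fval] = fmincon(negLL, theta0, [], [], [], [], lb, ub);
    estimates(k, :) = thetaHat;
    values(k) = -fval;                       % local log-likelihood value
end
[~, best] = maxk(values, 5);                 % keep the five highest maxima
topEstimates = estimates(best, :);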
To calibrate the coral calcification model a second time, using Bayesian inference instead, we specified uniform prior distributions (grey shaded regions in Figure 4) as most of the parameters for this model were only known to be strictly positive. The posterior distribution was approximated using an adaptive SMC algorithm and this algorithm was run multiple times independently to test the reproducibility of both the posterior sample and the later analyses of model sloppiness (we used five independent posterior samples). The estimated marginal distributions were visually indistinguishable for the independently produced posterior samples, indicating that the posterior sample was highly reproducible for a posterior sample size of 5000. Further details of the model calibration procedure performed using Bayesian inference are provided in Supplementary Material, Section S.4. The estimated posterior marginal densities obtained for the model parameters ( Figure 4) revealed large uncertainty in many of the model parameters, with only two parameters strongly informed by the data (k pp and β). This result is expected given the limited size of the dataset available for model-data calibration.
Analysing sloppiness of the coral calcification model
After the model-data calibration, we used the analysis of model sloppiness to unravel strong parameter interdependencies. This analysis was conducted for each of the three sensitivity matrices listed in Table 1: the Hessian approach evaluated at each of the five local MLEs, the posterior covariance method, and the LIS approach, with both of the latter evaluated from the posterior distribution samples obtained from the SMC algorithm. Figure 5 shows the size of the eigenvalues relative to the largest, after eigendecomposition of each sensitivity matrix. Here, the MLE Hessian approach (Table 1, first row) leads to an inconsistent decay in eigenvalue spectra for the five different local MLEs used to evaluate the sensitivity matrix. This result is unsurprising, as the parameter values of each local MLE are in distinctly different locations of the likelihood surface, so the sensitivity of the model-data fit changes in these different local parameter spaces considered. In contrast, the posterior covariance method (Table 1, second row) produces a consistent decay in eigenvalue size across independent posterior samples. The consistent decay is likely because the full range of parameter values from the posterior are considered when analysing the model-data sensitivity, achieving a global analysis of the posterior surface. Finally, the decline in relative eigenvalue size is much more rapid for the LIS framework (Table 1, third row, and Figure 5, blue triangles), when compared to the posterior covariance approach. Here, the relative eigenvalue size for the fifth LIS eigenparameter is comparable to the twentieth (i.e. smallest eigenvalue) from the posterior covariance method.

Figure 5: Scaled eigenvalue spectra for the three sensitivity matrices. The MLE Hessian analysis used five high-likelihood parameter estimates (orange vertical lines in Figure 4) from different local maxima. Both the posterior covariance method and LIS sensitivity matrix analyses were conducted on five independent sets of posterior samples obtained using Bayesian inference. Notice that the posterior covariance method leads to extremely similar eigenvalue spectra for the five independent sets of posterior samples.
Hessian matrix evaluated at local likelihood maxima
First, the non-Bayesian approach to the analysis of sloppiness (i.e. using the Hessian matrix) was considered. As the Hessian matrix could not be analytically computed for the model in this case study, the finite differences method (Beers, 2007) was used to numerically approximate the Hessian matrix. When using finite differences, the likelihood function is evaluated very close to the MLE, using a step-size ∆θ_i = δ × θ_i, i = 1, . . . , n_p, which is as small as possible to minimise truncation error. However, a step-size that is too small will result in numerical issues (round-off errors). For this application, we used δ = 10^−2, as larger step-sizes yielded the same results, but smaller step-sizes yielded inconsistent results which we attributed to numerical errors. The Hessian matrix was evaluated at five different parameter sets: those that yielded the highest values of the likelihood function (orange vertical lines in Figure 4).
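The sketch below shows one common central-difference construction of such a Hessian (the per-coordinate step rule used here is one simple choice, made for illustration):

function H = fdHessian(f, phi, delta)
    % Central finite-difference Hessian of a scalar function f at phi.
    phi = phi(:);                        % ensure column vector
    np  = numel(phi);
    h   = delta * max(abs(phi), 1);      % per-coordinate step sizes
    H   = zeros(np);
    for i = 1:np
        for j = i:np
            ei = zeros(np, 1); ei(i) = h(i);
            ej = zeros(np, 1); ej(j) = h(j);
            H(i, j) = (f(phi + ei + ej) - f(phi + ei - ej) ...
                     - f(phi - ei + ej) + f(phi - ei - ej)) ...
                     / (4 * h(i) * h(j));
            H(j, i) = H(i, j);           % Hessian symmetry
        end
    end
end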
After eigendecomposition of the Hessian matrix, evaluated at each parameter set, we used Equation (4) to identify parameter interdependencies (eigenparameters). Figure 6 shows the eigenparameters that correspond to the four largest eigenvalues (λ 1 , λ 2 , λ 3 and λ 4 for the orange circles in Figure 5), for each of the five likelihood maxima considered. In each row of the matrix depicted in Figure 6, the relative contribution of a given (ith) parameter to the (jth) eigenparameter (parameter combination) is indicated by the listed value. This value mathematically represents the eigenvector element value v j,i and can be interpreted as the magnitude of the ith parameter's exponent in the expression (Equation (4)) for the jth eigenparameter. Standard renormalisations during eigendecomposition ensure that −1 ≤ v j,i ≤ 1, ∀i, j, such that exponents that are close to 1 or −1 indicate strong contribution of the parameter to the eigenparameter, and exponents close to 0 indicate negligible contribution of the parameter to the eigenparameter. For example, in the first row of Figure 6, the parameter k pp has an exponent of −0.9, so strongly contributes to the first (stiffest) eigenparameter. In contrast, the parameter k CO2 in the first row contributes negligibly to the stiffest eigenparameter.
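For instance (hypothetical values, for illustration only), an eigenvector v_j = [-0.9, 0.3, 0, ...] would correspond via Equation (4) to the eigenparameter θ̂_j = θ_1^(-0.9) θ_2^(0.3); moving along this direction chiefly decreases θ_1 while mildly increasing θ_2, and leaves the remaining parameters, whose exponents are near zero, essentially unchanged.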
The parameters that contribute to the stiffest eigenparameter are largely associated with the coelenteron-ECM paracellular diffusion mechanism (represented as k pp) and the Ca-ATPase pump mechanism (Figure 6). In contrast, parameters associated with the coelenteron-ECM transcellular diffusion mechanism (represented as k CO2), the seawater-coelenteron diffusion mechanism (represented as s) and the BAT pump mechanism all contribute to a lesser extent to the stiffest eigenparameter. While this result is generally true when considering different point estimates (local MLEs), the relative contribution of each individual parameter to the eigenparameters depends on the local MLE used to evaluate the Hessian matrix (Figure 6). As each of the likelihood maxima had distinct parameter values, this indicates that the sensitivity of the model-data fit to parameter combinations depends on the local parameter space considered. This motivates the need for a sensitivity matrix which captures the model-data fit sensitivity across a range of potential parameter values, such as the posterior covariance method.

Figure 6: Eigenvector element values for eigenparameters identified using the MLE Hessian approach to an analysis of sloppiness. These eigenparameters correspond to the four largest eigenvalues, and so are ordered from highest relative importance to lowest. For each eigenparameter, the results of five high likelihood parameter estimates are compared to indicate the consistency of results. Here we only consider four of twenty eigenparameters to show the inconsistency between results based on different local MLEs in the four most sensitive parameter combinations. For each eigenparameter, the values were normalised by the leading eigenvector value, such that they are rescaled to be between -1 and 1 inclusive. Here the colour darkens as the absolute values of the eigenvector values increase from 0 to 1, such that the model-data fit is more sensitive to darker eigenvector values. Notice that each of the parameters has been grouped based on its mechanistic function in the model (Figure 3). Additionally, the relative size of each eigenvalue when compared to the leading eigenvalue for each sample has been included in the column λ j /λ 1 .
Posterior covariance method
As an alternative to the MLE Hessian approach, the posterior covariance method considers a sample of the posterior distribution to obtain the key eigendirections (parameter interdependencies). For the analysis performed on one representative set of posterior samples ( Figure 7), the two stiffest eigenparameters indicate that the coelenteron-ECM paracellular diffusion (represented as k pp ), and Ca-ATPase pump mechanisms strongly inform model predictions, in agreement with the results obtained for the Hessian matrix. This suggests that within this dataset, the calcification rate was strongly informed by the balance of these mechanisms within the coral polyp.
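A sketch of this construction, assuming the sensitivity matrix is taken as the inverse sample covariance of the log-parameters (one common variance-based choice; see Monsalve-Bravo et al. (2022) for the exact definition used):

% Sketch: posterior covariance sensitivity matrix from posterior samples.
phiSamples = log(thetaSamples);   % thetaSamples: M x n_p posterior sample
S_P = inv(cov(phiSamples));       % stiff directions have small variance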
Unlike the Hessian-based results, the parameters associated with the BAT pump, the seawater-coelenteron diffusion mechanism (represented as s) and the coelenteron-ECM transcellular diffusion mechanism (represented as k CO2) contribute little to the model behaviour within the seven stiffest eigenparameters. Given the decay in relative eigenvalue size beyond the seventh eigenparameter (more than two orders of magnitude, see Figure 5), the analysis suggests that these latter mechanisms may not be necessary to maintain a good model-data fit for calcification rate predictions.
In our case study, the posterior covariance method yielded substantially more consistent results across different sets of posterior samples (Supplementary Material Section S.5) when compared to the MLE Hessian approach across different local MLEs ( Figure 6). When using a posterior covariance method, the analysis yielded consistent eigenvector values for the stiffest nine eigenparameters in Figure 7 across independently generated sets of posterior samples ( Figure S2 of Supplementary Material Section S.5). Here, differences amongst eigenparameters having small eigenvalues are expected as the model-data fit is weakly sensitive to sloppy eigendirections (Brown and Sethna, 2003).
Likelihood informed subspace sensitivity matrix
Lastly, the analysis of model sloppiness using the LIS method was considered. One advantage of using both the posterior covariance and LIS approaches to analysing sloppiness is that the influence of the prior distribution on the posterior sample can be identified (Monsalve-Bravo et al., 2022). More specifically, if there are substantial differences in the directions of the stiff eigenparameters between the posterior covariance and LIS methods, this indicates that the prior is having a large impact on the stiff eigenparameter directions found using the posterior covariance, and therefore the prior itself is strongly informing the posterior. For calculation of the LIS matrix, finite differences were also used to approximate the Hessian, with the step-size parameter δ = 10^−2.
Looking at the two stiffest eigenparameters (Figure 8), the LIS analysis of sloppiness gives a similar result to that of the posterior covariance method: the relative magnitudes of the elements within the two eigenvectors are very similar between methods. However, beyond the second eigenparameter, the results of the two approaches begin to differ and the LIS results become inconsistent across independent posterior samples (Supplementary Material Section S.6). Here we note that the relative decay in eigenvalue size is much more rapid for an analysis of sloppiness using the LIS approach (blue triangles in Figure 5), when compared to the posterior covariance method (green squares in Figure 5). For example, the relative eigenvalue size of the tenth posterior covariance eigenvalue is of similar magnitude to that of the third LIS eigenvalue (Figure 5). Additionally, we previously saw that the eigenvector values became inconsistent when λ j /λ 1 ∼ O(10 −2 ) using the posterior covariance method (Supplementary Material Section S.5, tenth eigenparameter), so it is not unreasonable to expect inconsistency between samples beyond the second eigenparameter using the LIS method, where the eigenvalues reach a similar order of magnitude. Such inconsistencies are expected when eigenvalues are very small, because the model-data fit is less sensitive to the corresponding sloppy eigenparameters, and these sloppy eigenparameters are therefore difficult to uniquely identify. Hence, the results of the analysis of model sloppiness appear to be consistent across both the LIS and posterior covariance approaches.
The similarity between the posterior covariance and LIS methods for the stiffest eigenparameters indicates that the prior distribution used for the application of Bayesian inference does not substantially influence the shape of the posterior distribution. Hence, in our case we concluded that the prior used was weakly informative (as intended) and had little influence on the posterior distribution.
Model reduction for the coral calcification model
The analysis of model sloppiness revealed the parameter combinations which most and least inform model predictions for the coral calcification case study: a useful tool for identifying potential model simplifications that have little impact on model predictions for the given dataset. Each analysis of model sloppiness (Section 3.3) indicated that the coelenteron-ECM paracellular diffusion (characterised by parameter k pp) and the Ca-ATPase pump mechanisms contribute substantially to the stiffest eigenparameter (Figures 6-8), suggesting that these two mechanisms (characterised by ten parameters, see Table 2) have a strong influence on model behaviours. However, the analysis indicates that model predictions are relatively insensitive to the seawater-coelenteron diffusion mechanism (characterised by parameter s), the BAT pump mechanism (characterised by seven parameters, see Table 2), and the coelenteron-ECM transcellular diffusion mechanism (characterised by parameter k CO2), because these did not contribute to the stiff eigenparameters (see the first eight parameter combinations in Figure 7). Hence, the results suggest that each of these three mechanisms, or a combination of them, could be removed.
However, the seawater-coelenteron diffusion mechanism is an integral part of the model, as it is the primary mechanism by which ocean acidification affects coral calcification rates in the model. Since this model's original purpose was to identify the way that environmental factors affect calcification rates (including but not limited to ocean acidification), this result could suggest that the experimental dataset did not sufficiently capture the effects of ocean acidification on calcification rates. Regardless of this issue, removing the diffusion between the coral polyp and external seawater seems nonsensical, as it would yield a model in which there is no interaction between the local carbon chemistry of seawater and the coral host animal (except indirectly through effects on net photosynthesis, which is unlikely; see, e.g. Kroeker et al. 2013).
As it was biologically unreasonable to remove the seawater-coelenteron diffusion mechanism, the potential simplified models were one without the BAT pump, one without the coelenteron-ECM transcellular diffusion, and one without both mechanisms (see Figure 3 for a conceptual depiction of these mechanisms). All three of these reduced models were investigated by removing one or both of the selected mechanisms from the ordinary differential equations. The reduced models were recalibrated via Bayesian inference using the same dataset (see Table 1 of ) and likelihood function (Equation (1)), and for the remaining parameters the same independent uniform distributions were used as the prior distribution (Figure 4 and Table S1). Removing both the BAT pump and coelenteron-ECM transcellular diffusion mechanisms from the model reduces the number of parameters from 20 to 12.
The reduced models suggested by the analysis of model sloppiness yielded very similar predictions of the coral calcification rate data when compared to the original model, despite having up to eight parameters removed. Visually, the goodness-of-fit between the model and data was similar for the original model and our proposed reduced model without the BAT pump and coelenteron-ECM transcellular diffusion mechanisms (Figure 9), and a similar result was observed when only one of the two mechanisms was removed (Supplementary Material Section S.7). The estimated model evidence quantitatively suggests that the original model and its three proposed reductions are similarly supported by the data, and the estimated Bayes factors do not indicate a strong preference between models (Supplementary Material Section S.8).
Additionally, removing the two insensitive mechanisms (BAT pump and coelenteron-ECM transcellular diffusion) from the model did not lead to clear differences between the original and reduced models in the estimated marginal posterior densities for each parameter, nor in the analysis of model sloppiness (Supplementary Material Section S.9). Whilst there was limited sacrifice in predictive ability, there was a significant gain in computation time: the reduced model required less than half the computation time of the original model for calibration (13.2 hours for the original model versus 6.0 hours for the reduced model, using a high-performance workstation with 12 cores).
For comparison, we also removed a mechanism from the model that was represented within the stiffest eigenparameter and therefore very sensitive to the model-data fit. Removing this sensitive mechanism resulted in a much worse model-data fit, both visually (Supplementary Material Section S.7) and quantitatively (Supplementary Material Section S.8). For this case study, these results indicate that analysing the model sloppiness is an appropriate way to inform model reductions for maintaining a good fit between the model and calibration dataset.
Discussion and Conclusion
In this work, we have proposed and demonstrated a new method for simplifying models based on the analysis of model sloppiness. This analysis can identify the informativeness of parameter combinations on model behaviours by analysing the topology of the surface which describes the fit of the model to data in parameter space (Monsalve-Bravo et al., 2022). As such, it can identify insensitive model mechanisms whose parameters contribute little to the model's ability to fit the available data, whilst accounting for parameter interdependencies. We showed that identifying and removing such insensitive mechanisms can be used for model reduction whilst minimising the loss of predictive capability.
Coral calcification case study
Our model reduction method, informed by the analysis of model sloppiness, was demonstrated on a case study of a model of coral calcification reaction rates . In this case study, the data were not sufficient to provide narrowly constrained estimates for most of the model parameters, but it was also not immediately clear how the model could be simplified. To address this issue, the proposed analysis identified two mechanisms within the model which only weakly informed predictions and were sensible to remove; removing these processes reduced the number of model parameters from 20 to 12. A comparison of the goodness-of-fit for both the original and simplified models indicated that similar predictions were produced by both (Figure 9). It should be noted here that the simplified model has, to our best knowledge, not previously been considered in the literature for modelling coral calcification, so the analysis of sloppiness yields a new model for practitioners to consider.
For the present case study, the analysis of model sloppiness may also be useful for better understanding the physiology of the coral polyp. The analysis indicated that the model-data fit was highly sensitive primarily (i.e. first eigenparameter) to the Ca-ATPase pump (through parameters v Hc , E0 c , k 1fc , k 2fc and k 1bc ) relative to the paracellular diffusion of chemical species (through parameter k pp ) between the coral polyp compartments that the pump mechanism connects -the coelenteron and ECM (see Figure 3). This may be expected since it is the balance of the active and passive fluxes between the ECM and coelenteron at steady state that determines the aragonite saturation state of the ECM, which itself directly controls the calcification rate. Secondarily (i.e. second eigenparameter), the model-data fit was sensitive to the ATP availability from gross photosynthesis and respiration (through parameters α and β). In the model this process determines the amount of ATP available to fuel the Ca-ATPase pump responsible for the active flux.
From our results, we also gain insights from the mechanisms which are removed from the model. Given that the analysis of model sloppiness is closely related to the concept of parameter identifiability (Bellman and Åström, 1970; Browning et al., 2020; Chis et al., 2016; Raue et al., 2009), it can inform the structural and/or practical identifiability of parameters. In the explored coral case study, the removed mechanisms had similar functions to other mechanisms in the model, such that these processes are partially structurally unidentifiable. The BAT pump mechanism is an active transport for bicarbonate anions (HCO3^−), and so the mechanism alters the dissolved inorganic carbon (DIC) levels between the coelenteron and ECM in a similar way to the Ca-ATPase pump. Additionally, the model includes two passive transport mechanisms between the coelenteron and ECM (see Figure 3): a paracellular pathway for all chemical species and a transcellular pathway for carbon dioxide, which have very similar functions. Hence, the analysis of model sloppiness here indicates that certain components of the model are not necessary for a good model-data fit because their functions are partially or fully replicated through other mechanisms. So, when considering the aggregate behaviour of these processes within the coral polyp, including these "redundant" physiological processes may not lead to better predictions because of structural and/or practical identifiability issues, and instead may cause poor parameter estimation (Raue et al., 2009). Importantly, any physiological conclusions drawn from the results of this analysis are in relation to this dataset alone. The dataset used for this analysis is small, given that the number of data points is of the same order of magnitude as the number of model parameters. Additionally, any misspecification or inaccurate assumptions in this model are carried through to the reduced model. That being said, all models are unavoidably approximate representations of the real world. Hence, any model mechanisms excluded through model reduction cannot be considered unimportant for coral reef physiology, as the analysis indicates only that model predictions of calcification rates for this dataset are negligibly affected by exclusion of these mechanisms. We also acknowledge that these two criticisms are common to data-driven methods which derive simplifications from an originally more complicated model (Crout et al., 2009).
The explored coral calcification case study demonstrated the potential for an analysis of model sloppiness to inform model reductions. However, this method could be used generally for strategically proposing simpler models. The method is restricted to parametric models where an appropriate sensitivity matrix can be defined, and is best suited to models where mechanisms can be easily removed (e.g. process-based models). However, future work could examine how this model reduction technique performs on various other types of models. For instance, Monsalve-Bravo et al. (2022) describes how an analysis of model sloppiness could be used to identify critical parameter combinations in a stochastic setting, and this idea could potentially be explored for application to model reduction.
Sensitivity matrix selection
In this paper, we compared the results of an analysis of model sloppiness using three different approaches to constructing the sensitivity matrix (Table 1). For the explored case study, all approaches -the Hessian evaluated at MLE, the posterior covariance method and the LIS method -agreed on which mechanisms strongly inform the model-data fit (Figures 6-8), so the model reduction informed by each approach was equivalent in this case. However, this conclusion will not always hold for other model-data fitting problems (e.g., see Monsalve-Bravo et al. 2022). Just as each approach to capturing the model-data sensitivities differ, so might the model reductions informed by this analysis. The results from our case study lead to two key general findings regarding selection of a sensitivity matrix.
Firstly, this case study demonstrated that the non-Bayesian sensitivity matrix may have limited utility when the likelihood surface is rugged. In the coral calcification case study, the likelihood surface had no well-defined peak in parameter space, so single parameter estimates of this peak based on MLE could not reproducibly encapsulate the model-data fit sensitivity. Results based on the local sensitivity of the likelihood surface may not capture important features of the model-data fit in such circumstances, and should be interpreted with caution. For complex models, the global likelihood surface including the full range of parameter values represented by the posterior distribution should instead be considered (e.g. using the posterior covariance approach).
Though the Hessian sensitivity matrix may become problematic for some complex models, that does not mean that the MLE Hessian approach cannot be used to gain useful information for model reduction. Rather, a non-Bayesian method of analysing sloppiness is easier to implement and far more computationally efficient. Thus, a non-Bayesian method of analysing sloppiness -such as the Hessian matrix evaluated at MLE, or the Levenberg-Marquardt approximation (Marquardt, 1963) of the Hessian (used for computationally intensive models) -could provide a simpler, faster method of suggesting model reductions, in place of the Bayesian counterparts. However, where the likelihood surface is expected to be complicated, sufficient consideration should be given to the choice of optimisation algorithm used to identify the best-fit parameter values.
Secondly, the case study demonstrated the benefits of using the posterior covariance and LIS methods together. In the explored coral calcification model, both the posterior covariance and LIS results for analysing sloppiness were similar, indicating that the prior distribution was not having a substantial influence on the model-data calibration process (i.e. on the posterior distribution). Hence, this case study demonstrates the value of using multiple methods for analysing sloppiness, as together these methods provide richer information than each method by itself (Monsalve-Bravo et al., 2022).
However, because in our case study the prior distribution was weakly informative on the posterior, it is difficult to state what the outcomes of informing model reductions via a LIS analysis would show if informative priors were instead used. Although we leave this exploration for future work, we hypothesise that model reduction informed by LIS could yield reduced models based purely on retaining mechanisms for which the data is highly informative relative to prior beliefs for each mechanism. At the very least, the LIS method is a useful check to identify the influence of the data on the model calibration process within a Bayesian framework.
While we only considered three sensitivity matrices, there are various approaches in the literature that could be used within this model reduction framework. Methods such as the Levenberg-Marquardt Hessian, a likelihood-free approximation of LIS, or the active subspace method each capture the model-data fit differently to the methods considered here and may provide advantages for different applications -such as computationally intensive or stochastic models (Monsalve-Bravo et al., 2022).
The active subspace method (Constantine et al., 2016) is a dimension reduction approach that constructs a sensitivity matrix based on the gradient of the log-likelihood relative to the prior distribution, capturing the informativeness of the data relative to the prior on the model-data fit similarly (though not identically) to the LIS approach (Zahm et al., 2022). Constantine et al. (2016) define the active/inactive subspace of eigenvectors via eigendecomposition of the sensitivity matrix, identifying the first largest gap between eigenvalues. As noted by Monsalve-Bravo et al. (2022), the active subspace sensitivity matrix could be used to assess model sloppiness, and we suggest it can also be used within our model reduction framework. In addition, Constantine and Diaz (2017) proposed activity score metrics which rank individual model parameters based on the eigenvalues and eigenvectors of the active subspace matrix and can be used for model reduction. We leave explorations of the active subspace for model reduction, and a comparison with the model reductions produced by the sensitivity matrices used in this manuscript, for future research.
Complexity versus parsimony in models
The processes, phenomena and systems of the world around us are extremely complex; so should the models we create to represent these ideas be equally as complex? The simple answer is that it depends on the purpose of the model, whether that be accurate predictions of collective behaviour (Dietze et al., 2018), inference for physically meaningful parameters (Adams et al., 2017), or analysis for understanding and/or changing processes within a system (e.g. Solidoro 2018, Verspagen et al. 2014).
If the aim of the model is accurate prediction, it is the aggregate behaviour of a system that we aim to recreate through models, and in many cases, including additional underlying processes is not beneficial in modelling the collective behaviours (Machta et al., 2013;Transtrum and Qiu, 2014). Including many potential mechanisms within a model may mean that some processes are being fit to noise in the data, leading to poor predictions in new situations (Cox et al., 2006). However, a general model may be too simple for accurate prediction and could distort the importance of processes in the model (Lawrie and Hearne, 2007;Van Nes and Scheffer, 2005). Models should minimise the bias-variance tradeoff, balancing the complexity so that the model is simple enough to predict new data well and complex enough to capture features of the data (Geman et al., 1992).
What if accurate parameter inference for meaningful parameters is the goal? The more complex a model is, the more difficult parameterisation becomes (Van Nes and Scheffer, 2005), and more data is required as a consequence. If unidentifiable parameters are included in the model calibration problem, the values of these parameters become meaningless as they are often interdependent on others (Van Werkhoven et al., 2009). On the other hand, removing important and existing processes from a model could mean that calibrated parameters lose their physical meaning as they compensate for processes missing from the model and become more like aggregate parameters for modelling the collective model dynamics (Elevitch and Johnson, 2020).
If there are specific processes within the model that we wish to understand or change within a system, the relevant process needs to be included for analysis (Hannah et al., 2010). However, even in this case the modeller must still consider whether the parameters retain their physical meaning (due to structural identifiability issues or data limitations) as well as the potential for overfitting caused by the inclusion of the process (e.g. Jakeman and Hornberger, 1993). Additionally, there are other issues that come with complex models, such as difficulty implementing, reproducing, interpreting, validating and communicating the models, as well as computation times and difficulty or costs associated with updating or replacing these models (Van Nes and Scheffer, 2005).
So where is the line between too simple and too complex? There are many arguments both for and against complexity (e.g. Anderson, 1972;Hong et al., 2017;Hunt et al., 2007;Logan, 1994;Wigner, 1990). The desired complexity of a model should be based on both the model's purpose and statistical assessments of quality (Saltelli, 2019). Our goal in the present work is to highlight that, in some circumstances, model reduction could benefit the predictions and parameter estimates of a model. Our model reduction framework provides a principled and intuitive approach for model simplifications to address this goal.
Software and data availability
The code used for this analysis was implemented in MATLAB (R2021b), and is freely available for download on Figshare at https://doi.org/10.6084/m9.figshare.19529626.v2. This code (14.8MB) contains 37 functions as MATLAB code files, which run the analyses for the coral calcification case study and produce the corresponding plots described within this manuscript. In addition, 7 MATLAB data files are included, which contain the dataset used for the analyses (available to access through ), as well as outputs generated by the analysis (including SMC samples for each model and for independent runs, local MLE samples, and calculated sensitivity matrices).
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Author contributions: [...] of the manuscript; SAV, CD and MPA contributed to the design of the research; all authors contributed to the coding, analysis of results, review and editing of the manuscript.
S.1 Further details of the coral calcification model
The deterministic model proposed by is used to predict the rate at which calcification occurs based on the steady-state solution of a system of nonlinear ODEs. There are two modelled compartments within the coral polyp, the coelenteron, which is a fluid compartment considered to be the stomach of the polyp, and the extracellular calcifying medium (ECM), the compartment where calcification reactions occur within the coral. Additionally, the model also includes the surrounding seawater, as seawater exchanges chemical species with the coelenteron of the coral polyp.
As a result of the reactions occurring within the coral polyp, as well as exchanges of species between the compartments, the ODE model is comprised of flux terms that adjust the chemical species concentrations. Calcification is one such reaction, where calcium ions (Ca^2+) and carbonate ions (CO3^2−) precipitate into calcium carbonate (CaCO3) in the ECM. The process of calcification is mathematically described as a flux term within the system of ODEs. Calcification rates depend solely on the concentrations of the relevant chemical species (Ca^2+ and CO3^2−), so prediction of calcification rates requires knowledge of the concentrations of the chemical species in each compartment.
The model also assumes the presence of the enzyme carbonic anhydrase within the coral host, such that it catalyses the equilibration of the carbonate system (Bertucci et al., 2013). Therefore, it is assumed that the carbonate system is at separate equilibria within both the coelenteron and ECM. This assumption allows the entire carbonate system to be described uniquely by knowledge of any two chemical species of the carbonate system. Hence, the carbonate system is modelled using only dissolved inorganic carbon (DIC) and total alkalinity (TA), and equations describing carbonate species equilibrium relationships are used to calculate the concentrations of all other carbonate system species (H^+, OH^−, CO2, HCO3^− and CO3^2−).
The various flux terms, which either represent the flow of chemical species between compartments or chemical reactions, are derived from current understanding of coral physiology. Justification and description of the mathematical forms of the ODEs and these flux terms are provided in ; here we summarise the purpose of these fluxes and the parameters requiring estimation that characterise them. There are four types of processes that are assumed to control the carbonate system and calcium ion concentrations within the coral host: 1. Gross photosynthesis and respiration reactions -These are the processes of carbon exchange due to the zooxanthellae algae which are in symbiosis with the coral host. Photosynthesis and respiration reactions remove and supply DIC from the coral host respectively, and both reactions produce ATP energy. In the model of , the chemical fluxes due directly to photosynthesis and respiration are treated as model inputs, so do not have corresponding free parameters that require estimation.
2. Passive transport -There are three routes for chemical species to passively diffuse between the coral compartments within the polyp and the external seawater. These three passive transport mechanisms are: (a) Coelenteron-ECM transcellular diffusion: a transcellular pathway for carbon dioxide between the coelenteron and ECM (characterised by parameter k CO2 ), (b) Coelenteron-ECM paracellular diffusion: a paracellular pathway for all chemical species between the coelenteron and ECM (characterised by parameter k pp ), and (c) Seawater-coelenteron diffusion: a passive exchange mechanism between the coelenteron and external seawater (characterised by parameter s).
3. Active transport (Ca-ATPase and BAT pumps) -The model assumes there are two active pump mechanisms between the coelenteron and ECM compartments of the coral polyp, which each increase the aragonite saturation state of the ECM.
(a) The Ca-ATPase pump increases the calcium ion concentration in the ECM in exchange for protons being transported to the coelenteron. This pump mechanism is assumed to be driven by ATP energy sourced from photosynthesis and respiration fluxes, so is characterised by 10 parameters requiring estimation (α, β for the ATP energy budget, and the 8 Ca-ATPase parameters indicated in Table S1).
(b) The BAT pump controls the movement of bicarbonate anions (HCO3^−) between the coelenteron and ECM, and is characterised by 7 parameters requiring estimation (see Table S1).
4. Calcification reactions -The key process rate which is being predicted. The rate of this reaction depends on the concentrations of calcium and carbonate ions in the ECM; the parameters associated with this reaction do not require estimation in the model of .
The coral compartments and flux terms associated with these four processes are visualised in Figure 3 of the manuscript. These underlying components of the model are combined together into a system of ordinary differential equations (see Equations (23-28) in ). This system models the flow of calcium ions, DIC and TA throughout the coral polyp.
To keep the carbonate species in equilibrium at each numerical timestep of the ODE, the pH and carbonate species concentrations (H^+, OH^−, CO2, HCO3^− and CO3^2−) are recalculated based on the current DIC, TA, temperature and salinity using the MATLAB version of CO2SYS (van Heuven et al., 2011). This CO2SYS algorithm is a commonly used software package for calculating carbonate species equilibrium concentrations; full details of the algorithm are provided in Orr et al. (2015). The ODE model, in tandem with the CO2SYS algorithm applied in the coelenteron and ECM compartments, is solved at steady state, so that the concentrations of species and fluxes have stabilised, to gain an estimate of the coral polyp's calcification rate. Such stabilisation is expected to occur over timescales of an hour or less (Al-Horani et al., 2003; Tambutté et al., 1996).
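The steady-state evaluation can be sketched as follows; coralODE and calcificationFlux are hypothetical stand-ins for the right-hand side of Equations (23-28) and the calcification flux term, respectively:

% Sketch: integrate the stiff ODE system until concentrations stabilise.
tSpan = [0, 10];                                 % hours; ~1 h to stabilise
[~, C] = ode15s(@(t, c) coralODE(t, c, theta, env), tSpan, c0);
steady = C(end, :);                              % stabilised concentrations
G = calcificationFlux(steady, theta);            % predicted calcification rate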
S.2 Coral calcification model parameters
Parameters of the coral calcification model that are estimated in this work via MLE and Bayesian inference are summarised in Table S1, with assumed values indicated where possible. For example:

α - Fraction of Pg allocated to calcification; prior range 0-1 (dimensionless); fraction must be between 0 and 1.
β - Fraction of R allocated to calcification; prior range 0-1 (dimensionless); fraction must be between 0 and 1.
Ca-ATPase mechanism:
v Hc - Proportionality constant (Ca-ATPase); prior range 0-250 cm s^−1; strictly positive.
S.3 Goodness-of-fit of local maxima
In this paper we found that the likelihood surface for the fit of the coral calcification model to data did not contain a well-defined peak. As such, many local maxima with similarly high likelihoods could be found by initiating a gradient-based search function at different locations in parameter space. These local maxima have distinct parameter values, yet each reflects a similarly good model-data fit. The goodness-of-fit for each local maxima is shown in Figure S1.
S.4.1 Prior distribution
When Bayesian inference was applied to this problem, a prior distribution was specified for each of the unknown parameters listed in Equation (S1), where σ (units of µmol cm^−2 h^−1) is the standard deviation characterising measurement noise, which is also estimated via Bayesian inference.
Some of the parameters had clearly definable limits. For example, α and β defined the fractions of ATP from gross photosynthesis and respiration utilised by the Ca-ATPase pump, and therefore must possess values between 0 and 1. We could also sensibly surmise that the speed of passive chemical species movement k CO2 , k pp and s has a maximum velocity equivalent to passive diffusion within seawater (Table S1). However, most of the parameters for this model were only known to be strictly positive, yet had no other obvious choice for prior distributions.
As there was little information available on the parameters, uniform priors were used for each parameter. There was also no prior knowledge of covariance between parameters, so the joint prior π(θ) was chosen to be the product of independent uniform priors for each parameter,

    π(θ) = p(k CO2) × ... × p(σ),

where p(k CO2), ..., p(σ) are the independent uniform prior distributions for the model parameters k CO2, ..., σ, listed in Equation (S1). The upper and lower bounds of the uniform priors for each parameter were chosen to match the range of values specified in Table S1. Additionally, σ was assumed to have a uniform prior bounded between 0 and 50 µmol cm^−2 h^−1.
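Such a joint prior is straightforward to evaluate in log form; a sketch with bound vectors lb and ub per Table S1 (returning -Inf outside the bounds):

% Sketch: independent-uniform joint log-prior.
logPrior = @(theta) log(double(all(theta >= lb & theta <= ub))) ...
                    - sum(log(ub - lb));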
During preliminary simulations, some steady state solutions of the ODE model did not converge when calculating the carbonate species equilibrium using the CO2SYS algorithm. To deal with this issue, parameter values that caused divergence of CO2SYS were excluded from the prior distribution. This modified the joint prior to

    π(θ) ∝ 1_c(θ) p(k CO2) × ... × p(σ),

where the indicator function 1_c was defined here to output one if CO2SYS converges and zero otherwise. This rejection procedure did not substantially alter the shape of the marginal prior distributions (shaded grey regions in Figure 4 of the manuscript). In addition, we found in practice that rejections due to divergence of the CO2SYS algorithm reduced as the SMC algorithm we used for Bayesian inference moved towards areas of high posterior density. Hence, for this coral calcification model, areas of low posterior density are more likely to involve parameter combinations that yield highly unreasonable predictions that subsequently cause divergence of CO2SYS. However, we also cannot rule out the possibility that this issue was due to the numerical procedures we used. Whether this divergence issue was a result of areas of low posterior density or the numerical procedures, the resulting prior distributions, which appear uniform, and the observation that the rate of rejection due to divergence reduced as areas of high posterior density were visited more often, suggested that the approach of rejecting these parameter samples was reasonable.
S.4.2 Posterior simulation
The SMC algorithm used to estimate the posterior distribution was adapted from . For this application, an SMC algorithm based on a likelihood annealing approach was used. This approach allowed the ensemble of particles to steadily converge onto the posterior distribution.
Within the algorithm, a transform was placed on each of the parameters, such that any real number could be proposed and converted to a proposal for θ within the prior bounds. For the jth parameter, the transform θ j → θ̃ j and its inverse θ̃ j → θ j were defined in terms of the lower and upper bounds l j and u j for parameter θ j , so that any real number proposed as θ̃ j is converted to a θ j value within the uniform prior bounds: if −∞ < θ̃ j < ∞, then l j ≤ θ j ≤ u j .
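One common transform satisfying these properties is the logistic (logit) map; the sketch below uses it as an illustrative assumption rather than the exact definition used in this work:

```python
import numpy as np

def to_unbounded(theta, l, u):
    """Map theta in (l, u) to the whole real line (logit transform)."""
    return np.log((theta - l) / (u - theta))

def to_bounded(theta_tilde, l, u):
    """Inverse map: any real theta_tilde yields theta with l < theta < u."""
    return l + (u - l) / (1.0 + np.exp(-theta_tilde))
```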
The final result of the SMC algorithm was visually examined to ensure that the results were reproducible over multiple independent runs of the algorithm, i.e. that the posterior distributions were similar across independent sets of samples. For this application, the SMC algorithm was run independently five times to produce five independent sets of posterior samples. The estimated marginal distributions and posterior predictive distributions were visually very similar for each independent set of 5000 samples; hence, the results were found to be highly reproducible for 5000 posterior samples. Each of these five independent sample sets was then used to verify the reproducibility of the results of an analysis of model sloppiness, using a posterior covariance or LIS sensitivity matrix.
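The reproducibility check described above was visual; a complementary quantitative sketch, assuming posterior samples stored as (n_particles, n_params) arrays, could compare the marginals of two runs with a two-sample Kolmogorov-Smirnov statistic:

```python
from scipy.stats import ks_2samp

def compare_runs(samples_a, samples_b):
    """Compare the marginals of two independent sets of posterior samples,
    each of shape (n_particles, n_params). Small KS statistics for every
    marginal indicate that the two runs recovered similar posteriors."""
    n_params = samples_a.shape[1]
    return [ks_2samp(samples_a[:, j], samples_b[:, j]).statistic
            for j in range(n_params)]
```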
S.5 Analysis of model sloppiness using the posterior covariance method
Multiple independent results of the analysis of model sloppiness via a posterior covariance sensitivity matrix were produced to analyse the consistency of the results. Five sets of posterior samples were independently produced using the SMC algorithm, and each of these was used to analyse the sloppiness of the coral calcification model. Figure S2 shows the eigenparameters for three of the five sets of independent samples, revealing that the results are reproducible for the first nine eigenparameters (the results for the two posterior samples not shown here were similar). After this point, the independent analyses show different model-data fits; however, this can be expected because these parameter combinations commonly become less constrained in a sloppy model.

Figure S2: Eigenvector element values for eigenparameters identified using a posterior covariance analysis of sloppiness. These eigenparameters are ordered from highest relative importance to lowest according to the size of the eigenvalues. For each eigenparameter, the results of three independent sets of posterior samples with 5000 particles each are compared to indicate the consistency of results. For each eigenparameter, the values were normalised by the leading eigenvector value, such that they are rescaled to lie between -1 and 1 inclusive. Here the colour darkens as the absolute values of the eigenvector elements increase from 0 to 1, such that the model-data fit is more sensitive to darker eigenvector values. Additionally, the relative size of each eigenvalue when compared to the leading eigenvalue for each sample has been included in the column λ j /λ 1 .
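A minimal sketch of this eigen-analysis is given below. It assumes the sensitivity matrix is taken as the inverse posterior covariance (the precision matrix), so that stiff, tightly constrained eigenparameters carry the largest eigenvalues; this particular construction is an assumption for illustration, not necessarily the exact definition used here.

```python
import numpy as np

def posterior_covariance_sloppiness(samples):
    """Eigen-analysis of a posterior-covariance-based sensitivity matrix.

    samples: posterior draws of shape (n_particles, n_params), possibly on
    a transformed scale. The sensitivity matrix is taken here to be the
    inverse posterior covariance (precision), so stiff, tightly constrained
    eigenparameters carry the largest eigenvalues.
    """
    cov = np.cov(samples, rowvar=False)
    sens = np.linalg.inv(cov)
    eigvals, eigvecs = np.linalg.eigh(sens)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]             # stiffest first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Rescale each eigenvector by its largest-magnitude element so that
    # entries lie in [-1, 1], as in Figure S2.
    eigvecs = eigvecs / np.abs(eigvecs).max(axis=0)
    return eigvals, eigvecs
```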
S.6 Analysis of model sloppiness using the LIS method
Here, we compare multiple independent analyses of model sloppiness using a LIS sensitivity matrix to show the consistency of the results. Five sets of posterior samples were independently produced using the SMC algorithm, and each of these was used to analyse the sloppiness of the coral calcification model. Figure S3 shows the eigenparameters for each of these five independent sets of samples, revealing that the results are somewhat reproducible for the two stiffest eigenparameters, but not for the others. This figure also shows the results of a posterior covariance approach for comparison, revealing that the two approaches imply a similar model-data fit for the two stiffest eigenparameters.

Figure S3: Comparison of eigenvector values from an analysis of model sloppiness using a LIS (S L ) and posterior covariance (S P ) sensitivity matrix (blue and green, respectively). For each LIS matrix eigenparameter, the results of five independent sets of posterior samples with 5000 particles each are compared to indicate the consistency of results. Five eigenvector values are also shown for a posterior covariance method of analysing model sloppiness (green) from a single sample of 5000 particles, since the eigenvector values from this analysis were fairly consistent across independent samples (Figure S2). See the caption of Figure S2 for further interpretation of this figure.
S.7 Goodness-of-fit comparisons for other reduced models
Firstly, two other model reductions informed by the analysis of model sloppiness were tested: one where only the BAT pump mechanism was removed, and one where only the coelenteron-ECM transcellular diffusion was removed. The goodness-of-fit of the model without the BAT pump mechanism and of the model without the coelenteron-ECM transcellular diffusion are shown in Figures S4 and S5, respectively. Secondly, we investigated models where mechanisms necessary for a good model-data fit were removed. The analysis of model sloppiness identified that the coelenteron-ECM paracellular diffusion and Ca-ATPase pump mechanisms contributed substantially to the stiffest eigenparameter (Figure 7 of the manuscript). Hence, we analysed models with these mechanisms excluded as a test case for what happens if the recommendations of the sloppiness analysis about which mechanisms can be removed are not followed. Removing the coelenteron-ECM paracellular diffusion leaves the exchange of chemical species between the coelenteron and ECM controlled entirely through the exchange of carbon dioxide. Consequently, calibrating this model resulted in numerical issues, and so we did not consider the model without coelenteron-ECM paracellular diffusion further. Analysing the remaining unrecommended model reduction, removal of the Ca-ATPase pump mechanism, resulted in a visibly poorer goodness-of-fit than the original model (Figure S6).
S.8 Bayes' factors for investigated models
For each of the models considered in this case study, the Bayesian model evidence was estimated from the SMC samples and used to produce approximate Bayes' factors comparing each model to the original model. Values close to one indicate that the model evidence of the compared models is similar, whereas a value of 10 or more might suggest strong evidence in favour of the original model. The estimated Bayes' factors quantitatively suggest that each of the model reductions recommended by the analysis of model sloppiness is comparable to the original model (Table S2). However, for the model reduction which was not recommended by the analysis of model sloppiness, there is strong evidence to prefer the original model for this dataset.
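Given log model-evidence estimates (available, for example, as a by-product of likelihood-annealing SMC), the Bayes' factor is a simple ratio; a minimal sketch:

```python
import numpy as np

def bayes_factor(log_evidence_a, log_evidence_b):
    """Approximate Bayes' factor B_ab from log model-evidence estimates.

    Values near 1 indicate comparable evidence; values of roughly 10 or
    more suggest strong evidence for model a over model b. For large
    differences, report the log Bayes' factor to avoid overflow.
    """
    return np.exp(log_evidence_a - log_evidence_b)
```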
S.9 Reduced model compared to original model
In this section, we compare the original coral calcification model to the reduced version which excludes both the BAT pump mechanism and the coelenteron-ECM transcellular diffusion (characterised by parameter k CO2 ). Firstly, Figure S7 demonstrates that the estimated marginal densities of each of the remaining parameters are similar to those from the original model calibration. Secondly, Figure S8 shows the decay in relative eigenvalue size using a posterior covariance analysis of model sloppiness; the reduced model exhibits a faster decay in relative eigenvalue importance. Finally, Figure S9 shows the corresponding eigenparameters for both models. Each of the first six eigenparameters indicates a similar model-data fit for both models.
Figure S9: Eigenvector values for eigenparameters identified using a posterior covariance analysis of sloppiness, for both the original model (in grey) and the reduced model (in purple). These eigenparameters correspond to the nine largest eigenvalues, and so are ordered from highest relative importance to lowest. Results beyond the ninth eigenparameter were inconsistent for the original model (Figure S2), and so are not shown in this comparison. See the caption of Figure S2 for further interpretation of this figure.
Fully human antibody VH domains to generate mono and bispecific CAR to target solid tumors
Background Chimeric antigen receptor (CAR) T cells are effective in B-cell malignancies. However, heterogeneous antigen expression and antigen loss remain important limitations of targeted immunotherapy in solid tumors. Therefore, targeting multiple tumor-associated antigens simultaneously is expected to improve the outcome of CAR-T cell therapies. Due to the instability of single-chain variable fragments, it remains challenging to achieve simultaneous targeting of multiple antigens using traditional single-chain fragment variable (scFv)-based CARs. Methods We used Humabody VH domains derived from a transgenic mouse to obtain fully human prostate-specific membrane antigen (PSMA) VH and mesothelin (MSLN) VH sequences and to redirect T cells with VH-based CARs. The antitumor activity and mode of action of PSMA VH and MSLN VH were evaluated in vitro and in vivo in comparison with traditional scFv-based CARs. Results Human VH domain-based CARs targeting PSMA and MSLN are stable and functional both in vitro and in vivo. VH modules in the bispecific format are capable of binding their specific target with similar affinity to their monovalent counterparts. Bispecific CARs generated by joining two human antibody VH domains can prevent tumor escape in tumors with heterogeneous antigen expression. Conclusions Fully human antibody VH domains can be used to generate functional CAR molecules, and redirected T cells elicit antitumoral responses in solid tumors at least as well as conventional scFv-based CARs. In addition, VH domains can be used to generate bispecific CAR-T cells to simultaneously target two different antigens expressed by tumor cells, and therefore achieve better tumor control in solid tumors.
INTRODUCTION
Chimeric antigen receptors (CARs) typically consist of an extracellular antigen-binding domain in the form of a single-chain fragment variable (scFv), a transmembrane domain and signaling molecules such as costimulatory endodomains and CD3ζ chain. [1][2][3] Expression of CARs in T cells enables specific targeting of surface antigens in a Major Histocompatibility Complex-independent manner and associated T cell activation. 4
While classical CARs use scFvs as the antigen-binding moiety, other ligands fused with signaling molecules of the T-cell receptor complex can also trigger phosphorylation events in T cells. 6 For example, engineered natural receptors such as NKG2D and CD27 fused with CD3ζ have been shown to redirect T cell specificity. 7 8 Ligands to receptors such as interleukin (IL)-13Rα2 have also been engineered to redirect T cell specificity towards glioblastoma. 9 More recently, synthetic antigen-binding moieties, as exemplified by a 'monobody' based on the type III domain of fibronectin, have also been shown to serve as a robust platform for generating CAR molecules. 10 Therefore, using alternative binding moieties to replace scFvs in CARs remains a critical area, because scFvs are frequently unstable and show an intrinsic tendency to self-aggregate, which may lead to tonic signaling and loss of function of CAR-T cells in vivo. 11 12
Treatment failure and/or disease recurrence after CAR-T cell therapy can be caused by epitope or antigen loss. 10 In particular, the inherently heterogeneous expression pattern of antigens in solid tumors can easily cause tumor escape after targeted Open access immunotherapy. 10 19 20 Therefore, targeting multiple tumor-associated antigens (TAAs) is generally expected to improve the outcome of CAR-T cell therapy in solid tumor. 10 19 However, including multiple scFvs within a CAR causes protein instability and decreases binding specificity and affinity. V H domain-only format of CARs provide an ideal solution for multiple antigen targeting because V H domains have smaller size and may easily fold correct 3D structure compared to scFv molecules.
Here, we explored the use of Humabody V H domains derived from a transgenic mouse to develop CARs that target prostate-specific membrane antigen (PSMA) 21 22 and mesothelin (MSLN). 23 We found that Humabody-based CARs exhibited comparable or superior antitumor activity compared with traditional scFv CARs. Moreover, we demonstrated that Humabodies were suitable for constructing bispecific CAR-T cells, which can significantly better control tumors with heterogeneous antigen expression.
MATERIALS AND METHODS

Generation of V H domains
Crescendo Mouse 17 was immunized with PSMA and MSLN recombinant proteins. Spleens and lymph nodes were harvested, and the V H repertoires were cloned into a phagemid vector and selected by phage display. Outputs were screened for specific target binding and further characterized.
CAR construction
The following antigen-binding moieties were used: scFv derived from the J591 Ab specific for PSMA; human V H domain specific for PSMA (PSMA-V H ); scFv derived from a MSLN-specific Ab Amatuximab; human V H domain specific for MSLN (MSLN-V H ). All ligands were assembled with the CD8α hinge and transmembrane domain, the CD28 costimulatory domain and CD3ζ intracellular signaling domain, and cloned into the SFG retroviral vector. 24 A FLAG-tag was incorporated after the antigen ligand to detect the expression of CARs by an anti-FLAG Ab. Dual specific (PSMA and MSLN) CARs were also generated by linking the two V H domains. The corresponding CARs were called J591, PSMA-V H , MSLN scFv, MSLN-V H and PSMA-V H /MSLN-V H . Retroviral supernatants were produced by transfection of 293T cells with the retroviral vectors, the RD114 envelope from the RDF plasmid and the MoMLV gag-pol from the PegPam3-e plasmid. Supernatants were collected 48 hours and 72 hours after transfection and filtered with a 0.45 µm filter. 24

Cell lines
Tumor cell lines PC-3, C4-2 (prostate cancer) and Aspc-1 (pancreatic cancer) were purchased from ATCC (American Type Culture Collection). All tumor cell lines were cultured in RPMI-1640 (Gibco) supplemented with 10% fetal bovine serum (Sigma), 2 mM GlutaMax (Gibco), penicillin (100 units/mL) and streptomycin (100 µg/mL; Gibco). All cells were cultured at 37°C with 5% CO 2 . The PC-3 cell line was transduced with retroviral vectors encoding PSMA or MSLN to make PC-3-PSMA and PC-3-MSLN. PC-3-PSMA, PC-3-MSLN and Aspc-1 were transduced with retroviral vectors encoding the Firefly-Luciferase-eGFP (FFluc-eGFP) gene.
Western blot
CAR-T cells were incubated with 2 µg anti-FLAG Ab in 100 µL PBS for 20 mins on ice and then with 2 µg goat anti-mouse secondary Ab for another 20 mins on ice. Cells were then incubated in a 37°C water bath for the selected time points and then lysed with 2× Laemmli buffer for 10 mins. Cell lysates were separated on 4% to 15% 10-well SDS-PAGE gels and transferred to polyvinylidene difluoride membranes at 75 V for 120 mins (Bio-Rad). Blots were examined for human CD3ζ (Santa Cruz Biotechnology), p-Y142 CD3ζ (Abcam), pan-ERK (BD Biosciences), and pan-Akt, p-S473 Akt, and p-T202/Y204 MAPK (Cell Signaling Technology) at 1:1000 dilution in 5% TBS-Tween milk. Membranes were incubated with HRP-conjugated secondary goat anti-mouse or goat anti-rabbit IgG (Santa Cruz) at a dilution of 1:3000 and imaged with the ECL substrate kit (Thermofisher) on the ChemiDoc MP System (Bio-Rad) according to the manufacturer's instructions. 26
Proliferation assay
T cells were labeled with 1.5 mM carboxyfluorescein diacetate succinimidyl ester (CFSE; Invitrogen) and plated with tumor cells at an effector to target (E:T) ratio of 1:1. CFSE signal dilution from gated T cells on day 5 was measured using flow cytometry. 26

In vitro cytotoxicity assay
Tumor cells were seeded in 24-well plates at a concentration of 2.5×10⁵ cells/well overnight. CAR-T cells were added to the plate at an E:T of 1:5 without exogenous cytokines. Cocultures were analyzed 5-7 days later to measure residual tumor cells and T cells by flow cytometry. Dead cells were recognized by Zombie Aqua Dye (Biolegend) staining, while CAR-T cells were identified by CD3 staining and tumor cells by GFP. 26 CD69, PD-1 and Lag3 expression was measured by flow cytometry each day from day 0 to day 5 after coculture of CAR-T cells with tumor cells. For the granzyme-B staining, Golgi protein inhibitor (BD Biosciences) was added on day 1 of coculture for 6 hours. Cocultures were then first stained with Zombie Aqua Dye (Biolegend) and CD3 mAb, followed by fixation/permeabilization solution (BD Biosciences). Intracellular staining of granzyme-B was then conducted.
Cytokine analysis
CAR-T cells (1×10⁵ cells) were cocultured with 2.5×10⁵ tumor cells in 24-well plates without exogenous cytokines. Supernatant was collected after 24 hours, and cytokines (interferon-γ (IFN-γ) and IL-2) were measured using ELISA kits (R&D Systems) in duplicate following the manufacturer's instructions. 26

Expression and purification of recombinant proteins
A panel of recombinant proteins was produced, comprising bispecific (2V H ) proteins that bind both PSMA and MSLN, a monospecific V H protein binding PSMA, a monospecific V H protein binding MSLN and a control scFv protein based on Amatuximab. The bispecific protein was made in two formats, one with a short flexible linker (G4S) 3 and the other with a long flexible linker (G4S) 6 . Bispecific proteins were expressed in mammalian cells and purified by protein A binding. Monospecific proteins were His-tagged at the C terminus, expressed in Escherichia coli and purified by His-trap and size exclusion chromatography.
Binding and kinetic analysis
Binding analyses were performed at 25°C using a BIAcore 8K system. The instrument was run with 1× HBS-EP+ (BR100669) buffer and the data were analyzed using Biacore Insight Evaluation software. Recombinant human MSLN was diluted to 2 µg/mL in 10 mM sodium acetate buffer pH 4.0 and immobilized on a CM5 sensor chip (contact time 120 s) using an amine-coupling kit in accordance with the manufacturer's instructions. Humabody V H samples were tested for binding at five concentrations (3.7 nM, 11.1 nM, 33.3 nM, 100 nM and 300 nM) using the multi-cycle kinetics method. Each sample was injected for 100 s at a flow rate of 35 µL/min and dissociated for 100 s. The antigen surface was regenerated by a 20 s injection of 10 mM glycine pH 2.0. Recombinant human PSMA antigen with a human Fc tag was captured on a Protein G sensor. Humabody V H samples were tested in single-cycle kinetics mode at increasing concentrations of 2.22 nM, 6.67 nM, 20 nM and 60 nM with 90 s association and 600 s dissociation time at a flow rate of 30 µL/min. Buffer injections were made to allow for double-reference subtraction. The sensor surface was regenerated with 10 mM glycine pH 1.5 (GE Healthcare BR100354). To detect dual binding to MSLN and PSMA, a human PSMA antigen surface was prepared as above. Bispecific PSMA-MSLN Humabody constructs were captured on the PSMA surface by injecting 100 nM of each sample for 100 s at a 35 µL/min flow rate. The capture was immediately followed by an injection of 300 nM recombinant human MSLN with 100 s contact time and 100 s dissociation. A PSMA-specific Humabody construct without a MSLN-binding arm was used as a control.
Xenograft murine models
NSG (NOD scid gamma) mice (6-8 weeks old) were injected intravenously through the tail vein with either PC-3-PSMA-FFluc-eGFP, or PC-3-PSMA-FFluc-eGFP and PC-3-MSLN-FFluc-eGFP mixed at a 1:1 ratio, or Aspc-1-FFluc-eGFP tumor cells, at 1×10⁶ cells per mouse. Fourteen days later, CAR-T cells were injected intravenously through the tail vein. For the high-dose treatment, 4×10⁶ CAR-T cells per mouse were injected, while for the low-dose treatment, 1×10⁶ CAR-T cells per mouse were injected. In the rechallenge experiments, mice were infused with 1×10⁶ tumor cells per mouse on clearance of the previous tumor. Tumor growth was monitored by bioluminescence using the IVIS (In Vivo Imaging Systems)-Kinetics Optical in vivo imaging system (PerkinElmer) (PSMA-V H and MSLN-V H part) or the AMI (AMI Medical Imaging) Optical in vivo imaging system (Spectral Instruments Imaging) (PSMA-V H /MSLN-V H part).
Statistics
All data are presented as mean with SD. One-way analysis of variance (ANOVA) or two-way ANOVA analyses were performed to compare multiple groups. A two-tailed t-test was used to compare two groups. A p value of less than 0.05 was considered significant. All calculations and figures were produced with GraphPad Prism V.7 (La Jolla, California, USA).
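Illustrative only: the analyses here were performed in GraphPad Prism, but the equivalent tests can be sketched in Python with hypothetical replicate values:

```python
import numpy as np
from scipy.stats import f_oneway, ttest_ind

# Hypothetical replicate measurements for three groups.
group_a = np.array([1.2, 1.4, 1.1, 1.3])
group_b = np.array([2.1, 2.3, 2.0, 2.2])
group_c = np.array([1.8, 1.7, 1.9, 1.6])

# One-way ANOVA across multiple groups.
f_stat, p_anova = f_oneway(group_a, group_b, group_c)

# Two-tailed unpaired t-test between two groups.
t_stat, p_ttest = ttest_ind(group_a, group_b)

# p < 0.05 is taken as significant, matching the threshold stated above.
print(f"ANOVA p = {p_anova:.4f}; t-test p = {p_ttest:.4f}")
```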
Human V H domain-based CAR targeting PSMA is expressed and signals in T cells
We constructed the PSMA-specific CARs using the scFv from the J591 mAb (J591) and the PSMA-binding human V H domain (PSMA-V H ) joined to the CD8α stalk, CD28 costimulatory domain and CD3ζ intracellular domain. A FLAG-based tag was incorporated into the cassettes to detect CAR expression by flow cytometry (figure 1A). Activated T cells were successfully transduced and expressed the CARs equally (figure 1B,C). The CD19-specific CAR (CD19) and non-transduced (NT) T cells were used as controls. On transduction, J591-T cells and PSMA-V H -T cells showed similar expansion in vitro when exposed to IL-15 and IL-7 cytokines, which was similar to CD19-T cells and NT-T cells (figure 1D). Furthermore, no differences were observed in T cell composition as assessed by flow cytometry at day 12-14 of culture (figure 1E). We examined proximal signaling of CAR-T cells before and after CAR cross-linking mediated by an anti-FLAG Ab. Phosphorylation of the CAR-associated CD3ζ as well as phosphorylation of Akt and ERK were equal in J591-T cells and PSMA-V H -T cells (figure 1F). Therefore, a V H domain-based CAR is expressed and signals in T cells on cross-linking as observed for scFv-based CAR-T cells.
To further assess differences between PSMA-V H -T cells and J591-T cells, we used low doses of T cells (1×10⁶ cells/mouse) in tumor-bearing mice (figure 3D). We observed that PSMA-V H -T cells still eliminated tumor cells in vivo, as did J591-T cells (figure 3E,F). In addition, we also observed similar V H CAR-T cell persistence in the peripheral blood, spleen and bone marrow compared with traditional scFv-based CAR-T cells at day 58, at the time of euthanasia (figure 3G,H). Therefore, Humabody V H CAR-T cells demonstrated comparable antitumor effects to scFv-based CAR-T cells in vitro and in vivo.
MSLN-specific V H domain-based CAR-T cells demonstrate antitumor activity
To further assess the reproducibility of V H domain-based CARs, we tested a MSLN-specific Humabody V H .
In vitro analysis of monovalent and bivalent V H domain recombinant proteins
To test whether the V H domains are suitable for constructing bispecific CARs, tandem recombinant proteins linking the PSMA-specific and MSLN-specific V H domains were generated (figure 5A). To test whether the linkers had any effect on target binding affinity, two different linkers were used: the (G4S) 3 linker ('short flexible linker') and a longer (G4S) 6 linker. Analysis of binding to PSMA showed that the affinity of the PSMA-V H domain was not altered when formatted with the MSLN-V H using either flexible linker (figure 5B). Similarly, analysis of binding to MSLN recombinant protein by SPR Biacore assay showed that the affinity of the MSLN-V H domain was not altered when the PSMA-V H was formatted with the MSLN-V H using either flexible linker (figure 5C). In summary, these data show that V H modules in the bispecific format are capable of binding their specific target with the same affinity as their monovalent counterparts.
Bispecific V H domain-based CAR-T cells demonstrate dual specificity
We constructed a bispecific V H domain CAR to enable CAR-T cells to specifically recognize two antigens simultaneously, using the MSLN-V H and PSMA-V H domains fused with the short (G4S) 3 linker.
DISCUSSION
CARs approved by the Food and Drug Administration and those in clinical studies are mostly based on scFv binding moieties. Here we demonstrated that monospecific human V H domain-based CAR-T cells achieved antitumor effects both in vitro and in vivo comparable to scFv-based CAR-T cells. Furthermore, V H domains combined in tandem to create bispecific molecules allowed the generation of effective CAR-T cells targeting two antigens.
Redirected T cells based on single-domain Abs have recently been proposed. 17 28 29 However, most of them are obtained from llamas or camelid-derived libraries. Biological therapeutic molecules with non-human sequences can cause immune responses. 18 28 Transgenic mouse technology has enabled the generation of biophysically robust, fully human V H domains known as Humabody V H or Humabodies, 30 which have the potential for use in CAR constructs while mitigating immunogenicity risk.
Despite the remarkable clinical activity of CAR-T cells in hematological malignancies, objective responses in patients with solid tumors are modest. 10 26 31-33 Heterogeneity of antigen expression is one of the main causes of tumor escape in solid tumors after targeted therapies. 10 19 20 Furthermore, murine-based scFvs may cause immune responses, especially in patients with solid tumors, who are usually less immunosuppressed than patients with liquid tumors. Targeting multiple TAAs and using human binding moieties in CAR molecules may improve the outcome of CAR-T cells in solid tumors. 10 Here, we demonstrated that human V H domains generated from a transgenic mouse might solve both issues of immunogenicity and tumor heterogeneity, since bispecific CAR-T cells can be efficiently generated using two human V H domains in tandem.
In addition to the issue of heterogeneity in antigen expression, the complex inhibitory pathways of the tumor microenvironment in solid tumors mean that additional genetic modification of T cells would likely be required to enhance T cell trafficking and functions. 5 31 34-36 Generation of vector cassettes encoding multiple genes requires a significant optimization of the engineering strategies since the size of the entire cassette is limited. V H domains are a good alternative to scFv since they are approximately half the size.
Here, we have used two target antigens, PSMA and MSLN, that are currently under evaluation to treat mesothelioma, lung cancer, breast cancer, pancreatic cancer and prostate cancer via scFv-based CAR-T cells. [37][38][39] Our preclinical experiments validate the potential use of bispecific human V H domains targeting both PSMA and MSLN in these difficult-to-treat malignancies. It remains to be validated whether dual or multiple targeting with V H domain-based CARs can be broadly applicable, and whether targeting multiple antigens in solid tumors leads to increased potential for toxicity.
Additionally, we observed that V H domain-based CAR-T cells have comparable cytotoxicity and proliferative capacity to traditional scFv-based CAR-T cells. MSLN-V H -T cells showed even more profound antitumor effects compared with mice treated with MSLN-scFv CAR-T cells. Interestingly, MSLN-V H showed lower affinity than MSLN-scFv (28 nM compared with 79 pM), recapitulating what has been observed for other scFvs, namely that very high affinity is not necessarily optimal for CAR-based targeting of some targets. [40][41][42] However, we cannot exclude that the observed superior antitumor activity of the MSLN-V H -based CAR-T cells is associated with the recognition of a different epitope rather than with a different affinity. In summary, we have demonstrated that V H domain CAR-T cells in monospecific format achieved comparable antitumor responses to traditional scFv-based CAR-T cells both in vitro and in vivo. Furthermore, bispecific V H domain CAR-T cells delivered potent antitumor effects, demonstrating the potential to target solid tumors with heterogeneous antigen expression. These proof-of-concept experiments lay the foundation for further development of human V H domain-based CAR-T cells in clinical trials.

Competing interests GD has sponsored research agreements with Bluebird Bio, Cell Medica, and Bellicum Pharmaceutical. GD serves on the scientific advisory board of MolMed and Bellicum Pharmaceutical. CJ and BM are employees of Crescendo Biologics Ltd.
Patient consent for publication Not required.
Ethics approval The present studies in mice were approved by the Institutional Animal Care and Use Committee at the University of North Carolina at Chapel Hill, North Carolina, USA.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement All data relevant to the study are included in the article or uploaded as supplementary information.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See https:// creativecommons. org/ licenses/ by/ 4. 0/.